
Reform the US low-income broadband program by rebuilding Lifeline

By Annie Siebert
Rick Boucher Contributor
Rick Boucher was a Democratic member of the U.S. House for 28 years and chaired the House Energy and Commerce Committee's Subcommittee on Communications and the Internet. He is the honorary chairman of the Internet Innovation Alliance.

“If you build it, they will come” is a mantra that’s been repeated for more than three decades to embolden action. The line from “Field of Dreams” is a powerful saying, but I might add one word: “If you build it well, they will come.”

America’s Lifeline program, a monthly subsidy designed to help low-income families afford critical communications services, was created with the best intentions. The original goal was to achieve universal telephone service, but it has fallen far short of achieving its potential as the Federal Communications Commission has attempted to convert it to a broadband-centric program.

The FCC’s Universal Service Administrative Company estimates that only 26% of the families that are eligible for Lifeline currently participate in the program. That means that nearly three out of four low-income consumers are missing out on a benefit for which they qualify. But that doesn’t mean the program should be abandoned, as the Biden administration’s newly released infrastructure plan suggests.

Rather, now is the right opportunity to complete the transformation of Lifeline to broadband and expand its utilization by increasing the benefit to a level commensurate with the broadband marketplace and making the benefit directly available to end users. Instead, the White House fact sheet on the plan recommends price controls for internet access services with a phaseout of subsidies for low-income subscribers. That is a flawed policy prescription.

If maintaining America’s global competitiveness, building broadband infrastructure in high-cost rural areas, and maintaining the nation’s rapid deployment of 5G wireless services are national goals, the government should not set prices for internet access.

Forcing artificially low prices in the quest for broadband affordability would leave internet service providers with insufficient revenues to continue to meet the nation’s communications infrastructure needs with robust innovation and investment.

Instead, targeted changes to the Lifeline program could dramatically increase its participation rate, helping to realize the goal of connecting Americans most in need with the phone and broadband services that in today’s world have become essential to employment, education, healthcare and access to government resources.

To start, Lifeline program participation should be made much easier. Today, individuals seeking the benefit must go through a process of self-enrollment. Implementing “coordinated enrollment” — through which individuals would automatically be enrolled in Lifeline when they qualify for certain other government assistance benefits, including SNAP (the Supplemental Nutrition Assistance Program, formerly known as food stamps) and Medicaid — would help to address the severe program underutilization.

Because multiple government programs serve the same constituency, a single qualification process for enrollment in all applicable programs would generate government efficiencies and reach Americans who are missing out.

Speaking before the American Enterprise Institute back in 2014, former FCC Commissioner Mignon Clyburn said, “In most states, to enroll in federal benefit programs administered by state agencies, consumers already must gather their income-related documentation, and for some programs, go through a face-to-face interview. Allowing customers to enroll in Lifeline at the same time as they apply for other government benefits would provide a better experience for consumers and streamline our efforts.”

Second, the use of the Lifeline benefit can be made far simpler for consumers if the subsidy is provided directly to them via an electronic Lifeline benefit card account — like the SNAP program’s electronic benefit transfer (EBT) card. Not only would a Lifeline benefit card make participation in the program more convenient, but low-income Americans would then be able to shop among the various providers and select the carrier and the precise service(s) that best suit their needs. The flexibility of greater consumer choice would be an encouragement for more program sign-ups.

Third, the current Lifeline subsidy amount — $9.25 per month — isn’t enough to pay for a broadband subscription. For the subsidy to be truly meaningful, an increase in the monthly benefit is needed. Last December, Congress passed the temporary Emergency Broadband Benefit to provide low-income Americans up to a $50 per month discount ($75 per month on tribal lands) to offset the cost of broadband connectivity during the pandemic. After the emergency benefit runs out, a monthly benefit adequate to defray the cost of a broadband subscription will be needed.

In order to support more than a $9.25 monthly benefit, the funding source for the Lifeline program must also be reimagined. Currently, the program relies on the FCC’s Universal Service Fund, which is financed through a “tax” on traditional long-distance and international telephone services.

As greater use is made of the web for voice communications, coupled with less use of traditional telephones, the tax rate has increased to compensate for the shrinking revenues associated with landline phone services. A decade ago, the tax, known as the “contribution factor,” was 15.5%, but it’s now more than double that at an unsustainable 33.4%. Without changes, the problem will only worsen.

It’s easy to see that the financing of a broadband benefit should no longer be tied to a dying technology. Instead, funding for the Lifeline program could come from a “tax” shared across the entire internet ecosystem, including the edge providers that depend on broadband to reach their customers, or from direct congressional appropriations for the Lifeline program.

These reforms are realistic and straightforward. Rather than burn the program down, it’s time to rebuild Lifeline to ensure that it fulfills its original intention and reaches America’s neediest.

Facebook’s decision-review body to take ‘weeks’ longer over Trump ban call

By Natasha Lomas

Facebook’s self-styled and handpicked ‘Oversight Board’ (FOB) will make a decision on whether or not to overturn an indefinite suspension of the account of former president Donald Trump within “weeks”, it said in a brief update statement on the matter today.

The high profile case appears to have attracted major public interest, with the FOB tweeting that it’s received more than 9,000 responses so far to its earlier request for public feedback.

It added that its commitment to “carefully reviewing all comments” after an earlier extension of the deadline for feedback is responsible for the extension of the case timeline.

The Board’s statement adds that it will provide more information “soon”.

(2/2): The Board’s commitment to carefully reviewing all comments has extended the case timeline, in line with the Board’s bylaws. We will share more information soon.

— Oversight Board (@OversightBoard) April 16, 2021

Trump’s indefinite suspension from Facebook and Instagram was announced by Facebook founder Mark Zuckerberg on January 7, after the then-president of the U.S. incited his followers to riot at the nation’s Capitol — an insurrection that led to chaotic and violent scenes and a number of deaths as his supporters clashed with police.

However Facebook quickly referred the decision to the FOB for review — opening up the possibility that the ban could be overturned in short order as Facebook has said it will be bound by the case review decisions issued by the Board.

After the FOB accepted the case for review it initially said it would issue a decision within 90 days of January 21 — a deadline that would have fallen next Wednesday.

However it now looks like the high profile, high stakes call on Trump’s social media fate could be pushed into next month.

It’s a familiar development in Facebook-land. Delay has long been a feature of the tech giant’s crisis PR response in the face of its long history of scandals and bad publicity attached to how it operates its platform. So the tech giant is unlikely to be uncomfortable that the FOB is taking its time to make a call on Trump’s suspension.

After all, devising and configuring the bespoke case review body — as its proprietary parody of genuine civic oversight — is a process that has taken Facebook years already.

In related FOB news this week, Facebook announced that users can now request the board review its decisions not to remove content — expanding the Board’s potential cases to include reviews of ‘keep ups’ (not just content takedowns).

This report was updated with a correction: The FOB previously extended the deadline for case submissions; it has not done so again as we originally stated.

Sen. Wyden proposes limits on exportation of Americans’ personal data

By Devin Coldewey

Senator Ron Wyden (D-OR) has proposed a draft bill that would limit the types of information that could be bought and sold by tech companies abroad, and the countries it could be legally sold in. The legislation is imaginative and not highly specific, but it indicates growing concern at the federal level over the international data trade.

“Shady data brokers shouldn’t get rich selling Americans’ private data to foreign countries that could use it to threaten our national security,” said Sen. Wyden in a statement accompanying the bill. They probably shouldn’t get rich selling Americans’ private data at all, but national security is a good way to grease the wheels.

The Protecting Americans’ Data From Foreign Surveillance Act would be a first step toward categorizing and protecting consumer data as a commodity that’s traded on the global market. Right now there are few if any controls over what data specific to a person — buying habits, movements, political party — can be sold abroad.

This means that, for instance, an American data broker could sell the preferred brands and home addresses of millions of Americans to, say, a Chinese bank doing investment research. Some of this trade is perfectly innocuous, even desirable in order to promote global commerce, but at what point does it become dangerous or exploitative?

There isn’t any official definition of what should and shouldn’t be sold to whom, the way we limit sales of certain intellectual property or weapons. The proposed law would first direct the secretary of Commerce to identify the data we should be protecting and against whom it should be protected.

The general shape of protected data would be that which “if exported by third parties, could harm U.S. national security.” The countries that would be barred from receiving it would be those with inadequate data protection and export controls, recent intelligence operations against the U.S., or laws that allow the government to compel such information to be handed over. Obviously this is aimed at the likes of China and Russia, though ironically the U.S. fits the bill pretty well itself.
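
As a rough sketch of how that screening might be encoded (the field names, criteria and logic below are illustrative assumptions, since the bill leaves the actual definitions to the Commerce Department):

```python
# Hypothetical sketch of the draft bill's country-screening criteria as
# described above. Field names and logic are illustrative assumptions, not
# the bill's text; the real definitions would come from Commerce.
from dataclasses import dataclass

@dataclass
class CountryProfile:
    name: str
    adequate_data_protection: bool
    adequate_export_controls: bool
    recent_intel_ops_against_us: bool
    can_compel_data_handover: bool

def is_barred_destination(c: CountryProfile) -> bool:
    # A country is barred if any of the criteria described in the draft apply.
    return (not c.adequate_data_protection
            or not c.adequate_export_controls
            or c.recent_intel_ops_against_us
            or c.can_compel_data_handover)

example = CountryProfile("Examplestan", False, True, True, True)
print(is_barred_destination(example))  # True
```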

There would be exceptions for journalism and First Amendment-protected speech, and for encrypted data — for example storing encrypted messages on servers in one of the targeted countries. The law would also create penalties for executives “who knew or should have known” that their company was illegally exporting data, and creates pathways for people harmed or detained in a foreign country owing to illegally exported data. That might be if, say, another country used an American facial recognition service to spot, stop and arrest someone before they left.

If this all sounds a little woolly, it is — but that’s more or less on purpose. It is not for Congress to invent such definitions as are necessary for a law like this one; that duty falls to expert agencies, which must conduct studies and produce reports that Congress can refer to. This law represents the first handful of steps along those lines: getting the general shape of things straight and giving fair warning that certain classes of undesirable data commerce will soon be illegal — with an emphasis on executive responsibility, something that should make tech companies take notice.

The legislation would need to be sensitive to existing arrangements by which companies spread out data storage and processing for various economic and legal reasons. Free movement of data is to a certain extent necessary for globe-spanning businesses that must interact with one another constantly, and to hobble those established processes with red tape or fees might be disastrous to certain locales or businesses. Presumably this would all come up during the studies, but it serves to demonstrate that this is a very complex, not to say delicate, digital ecosystem the law would attempt to modify.

We’re in the early stages of this type of regulation, and this bill is just getting started in the legislative process, so expect a few months at the very least before we hear anything more on this one.

Twitter bans James O’Keefe of Project Veritas over fake account policy

By Devin Coldewey

Twitter has banned right-wing provocateur James O’Keefe, creator of political gotcha video producer Project Veritas, for violating its “platform manipulation and spam policy,” suggesting he was operating multiple accounts in an unsanctioned way. O’Keefe has already announced that he will sue the company for defamation.

The ban, or “permanent suspension” as Twitter calls it, occurred Thursday afternoon. A Twitter representative said the action followed the violation of rules prohibiting “operating fake accounts” and attempting to “artificially amplify or disrupt conversations through the use of multiple accounts,” as noted here.

This suggests O’Keefe was banned for operating multiple accounts, outside the laissez-faire policy that lets people have a professional and a personal account, and that sort of thing.

But sharp-eyed users noticed that O’Keefe’s last tweet unironically accused reporter Jesse Hicks of impersonation, including an image showing an unredacted phone number supposedly belonging to Hicks. This too may have run afoul of Twitter’s rules about posting personal information, but Twitter declined to comment on this when I asked.

Supporters of O’Keefe say that the company removed his account as retribution for his most recent “exposé,” which involved surreptitious recordings of a CNN employee admitting the news organization has a political bias. (The person he was talking to had matched with him on Tinder while impersonating a nurse.)

For his part O’Keefe said he would be suing Twitter for defamation over the allegation that he operated fake accounts. I’ve contacted Project Veritas for more information.

MEPs call for European AI rules to ban biometric surveillance in public

By Natasha Lomas

A cross-party group of 40 MEPs in the European parliament has called on the Commission to strengthen an incoming legislative proposal on artificial intelligence to include an outright ban on the use of facial recognition and other forms of biometric surveillance in public places.

They have also urged EU lawmakers to outlaw automated recognition of people’s sensitive characteristics (such as gender, sexuality, race/ethnicity, health status and disability) — warning that such AI-fuelled practices pose too great a rights risk and can fuel discrimination.

The Commission is expected to present its proposal for a framework to regulate ‘high risk’ applications of AI next week — but a copy of the draft leaked this week (via Politico). And, as we reported earlier, this leaked draft does not include a ban on the use of facial recognition or similar biometric remote identification technologies in public places, despite acknowledging the strength of public concern over the issue.

“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs write now in a letter to the Commission which they’ve also made public.

They go on to warn over the risks of discrimination through automated inference of people’s sensitive characteristics — such as in applications like predictive policing or the indiscriminate monitoring and tracking of populations via their biometric characteristics.

“This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and have a chilling effect on everyone’s autonomy, dignity and self-expression – which in particular can seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups,” the MEPs write, calling on the Commission to amend the AI proposal to outlaw the practice in order to protect EU citizens’ rights and the rights of communities who face a heightened risk of discrimination (and therefore a heightened risk from discriminatory tools supercharged with AI).

“The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics,” they add.

The leaked draft of the Commission’s proposal does tackle indiscriminate mass surveillance — proposing to prohibit this practice, as well as outlawing general purpose social credit scoring systems.

However the MEPs want lawmakers to go further — warning over weaknesses in the wording of the leaked draft and suggesting changes to ensure that the proposed ban covers “all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system”.

They also express alarm that the proposal would exempt public authorities (or commercial entities working for them) from the prohibition on mass surveillance — warning that this risks deviating from existing EU legislation and from interpretations by the bloc’s top court in this area.

“We strongly protest the proposed second paragraph of this Article 4 which would exempt public authorities and even private actors acting on their behalf ‘in order to safeguard public security’,” they write. “Public security is precisely what mass surveillance is being justified with, it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.”

“This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance,” they continue. “The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.”

The Commission has been contacted for comment on the MEPs’ calls but is unlikely to respond ahead of the official reveal of the draft AI regulation — which is expected around the middle of next week.

It remains to be seen whether the AI proposal will undergo any significant amendments between now and then. But MEPs have fired a swift warning shot that fundamental rights must and will be a key feature of the co-legislative debate — and that lawmakers’ claims of a framework to ensure ‘trustworthy’ AI won’t look credible if the rules don’t tackle unethical technologies head on.

How startups can ensure CCPA and GDPR compliance in 2021

By Annie Siebert
Beth Winters Contributor
Beth Winters, JD/MBA, is the solutions marketing manager of Aparavi, a data intelligence and automation software and services company that helps companies find and unlock the value of data.

Data is the most valuable asset for any business in 2021. If your business is online and collecting customer personal information, your business is dealing in data, which means data privacy compliance regulations will apply to everyone — no matter the company’s size.

Small startups might not think the world’s strictest data privacy laws — the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) — apply to them, but it’s important to enact best data management practices before a legal situation arises.

Data compliance is not only critical to a company’s daily functions; if done wrong or not done at all, it can be quite costly for companies of all sizes.

For example, failing to comply with the GDPR can result in legal fines of up to €20 million or 4% of annual revenue, whichever is greater. Under the CCPA, fines can also escalate quickly, to the tune of $2,500 to $7,500 per person whose data is exposed during a data breach.

If the data of 1,000 customers is compromised in a cybersecurity incident, that could add up to as much as $7.5 million. The company can also be sued in class action claims or suffer reputational damage, resulting in lost business costs.
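
To make that arithmetic concrete, here is a minimal sketch of the exposure math using the thresholds as characterized above (a simplification for illustration, not legal guidance):

```python
# Back-of-the-envelope fine exposure, using the thresholds as characterized
# in this article. A simplification for illustration, not legal guidance.

def gdpr_max_fine(annual_revenue_eur: float) -> float:
    # GDPR: up to EUR 20M or 4% of annual revenue, whichever is greater.
    return max(20_000_000, 0.04 * annual_revenue_eur)

def ccpa_breach_exposure(people_affected: int) -> tuple:
    # CCPA: $2,500 to $7,500 per person whose data is exposed.
    return (2_500 * people_affected, 7_500 * people_affected)

low, high = ccpa_breach_exposure(1_000)
print(f"CCPA exposure, 1,000 people: ${low:,} to ${high:,}")       # $2,500,000 to $7,500,000
print(f"GDPR cap at EUR 1B revenue: EUR {gdpr_max_fine(1e9):,.0f}")  # EUR 40,000,000
```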

It is also important to recognize some benefits of good data management. If a company takes a proactive approach to data privacy, it may mitigate the impact of a data breach, which the government can take into consideration when assessing legal fines. In addition, companies can benefit from business insights, reduced storage costs and increased employee productivity, which can all make a big impact on the company’s bottom line.

Challenges of data compliance for startups

The costs of getting it wrong are real. Vodafone Spain, for example, was recently fined $9.72 million for GDPR data protection failures, and enforcement trackers show schools, associations, municipalities, homeowners associations and more are also receiving fines.

GDPR regulators have issued $332.4 million in fines since the law began being enforced nearly three years ago and are becoming more aggressive with enforcement. While California’s attorney general started CCPA enforcement on July 1, 2020, the newly passed California Privacy Rights Act (CPRA) only recently created a state agency to more effectively enforce compliance for any company storing information of residents in California, a major hub of U.S. startups.

That is why in this age, data privacy compliance is key to a successful business. Unfortunately, many startups are at a disadvantage for many reasons, including:

  • Fewer resources and smaller teams — This means there are no designated data privacy officers, privacy attorneys or legal counsel dedicated to data privacy issues.
  • Lack of planning — This might be characterized by being unable to handle data privacy information requests (DSARs, or “data subject access requests”) to help fulfill the customer’s data rights or not having an overall program in place to deal with major data breaches, forcing a reactive instead of a proactive response, which can be time-consuming, slow and expensive.

EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

By Natasha Lomas

European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.

The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.

At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.

Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties), although it’s not abundantly clear from this draft exactly how ‘high risk’ will be defined.

The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.

Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating [in] the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.

What’s high risk AI?

Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.

“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.

“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.

Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”
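
Taken together, those recitals describe a two-step test. Purely as a hypothetical sketch of that assessment logic (the harm labels, scores and threshold below are invented for illustration and do not appear in the draft):

```python
# Hypothetical sketch of the draft's two-step 'high risk' test: (1) could the
# intended purpose cause a listed harm? (2) if so, weigh the severity of the
# possible harm against its probability of occurrence. All labels, scores and
# the threshold are invented for illustration.
from dataclasses import dataclass

LISTED_HARMS = {
    "injury_or_death", "property_damage", "systemic_societal_impact",
    "essential_service_disruption", "opportunity_impact",
    "public_service_access_impact", "fundamental_rights_impact",
}

@dataclass
class IntendedUse:
    potential_harms: set      # harms the use-case could plausibly cause
    severity: float           # 0..1, assessed severity of the worst harm
    probability: float        # 0..1, assessed likelihood of occurrence

def is_high_risk(use: IntendedUse, threshold: float = 0.25) -> bool:
    # Step 1: does the intended purpose implicate any listed harm?
    if not (use.potential_harms & LISTED_HARMS):
        return False
    # Step 2: weigh severity against probability of occurrence.
    return use.severity * use.probability >= threshold

recruitment = IntendedUse({"opportunity_impact"}, severity=0.7, probability=0.5)
print(is_high_risk(recruitment))  # True under these invented numbers
```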

Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.

So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met, such systems would not be barred from the EU market under the legislative plan.

Other requirements cover security and consistency of accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.

“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.

“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.

“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”

Prohibited practices and biometrics

Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.

AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.

A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.

On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like a recipe for yet more long-drawn-out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.

The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”

It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a leaked draft early last year, before last year’s White Paper steered away from a ban.

In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).

“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”

AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.

The envisaged system of conformity assessments for all high risk AIs is ongoing, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”

“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.

The carrot for compliant businesses is to get to display a ‘CE’ mark to help them win the trust of users and friction-free access across the bloc’s single market.

“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”

Transparency for bots and deepfakes

As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market, and to conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.

It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).

“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.

“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”

What about enforcement?

While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen and any violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).

So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.

We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate.

“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.

The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.

“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.

The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.

 

Facebook, Instagram users can now ask ‘oversight’ panel to review decisions not to remove content

By Natasha Lomas

Facebook’s self-styled ‘Oversight Board’ (FOB) has announced an operational change that looks intended to respond to criticism of the limits of the self-regulatory content-moderation decision review body: It says it’s started accepting requests from users to review decisions to leave content up on Facebook and Instagram.

The move expands the FOB’s remit beyond reviewing (and mostly reversing) content takedowns — an arbitrary limit that critics said aligns it with the economic incentives of its parent entity, given that Facebook’s business benefits from increased engagement with content (and outrageous content drives clicks and makes eyeballs stick).

“So far, users have been able to appeal content to the Board which they think should be restored to Facebook or Instagram. Now, users can also appeal content to the Board which they think should be removed from Facebook or Instagram,” the FOB writes, adding that it will “use its independent judgment to decide what to leave up and what to take down”.

“Our decisions will be binding on Facebook,” it adds.

The ability to request an appeal on content Facebook wouldn’t take down has been added across all markets, per Facebook. But the tech giant said it will take some “weeks” for all users to get access as it said it’s rolling out the feature “in waves to ensure stability of the product experience”.

While the FOB can now get individual pieces of content taken down from Facebook/Instagram — i.e. if the Board believes it’s justified in reversing an earlier decision by the company not to remove content — it cannot make Facebook adopt any associated suggestions vis-a-vis its content moderation policies generally.

That’s because Facebook has never said it will be bound by the FOB’s policy recommendations; only by the final decision made per review.

That in turn limits the FOB’s ability to influence the shape of the tech giant’s approach to speech policing. And indeed the whole effort remains inextricably bound to Facebook which devised and structured the FOB — writing the Board’s charter and bylaws, and hand picking the first cohort of members. The company thus continues to exert inescapable pull on the strings linking its self-regulatory vehicle to its lucrative people-profiling and ad-targeting empire.

The FOB getting the ability to review content ‘keep ups’ (if we can call them that) is also essentially irrelevant when you consider the ocean of content Facebook has ensured the Board won’t have any say in moderating — because its limited resources/man-power mean it can only ever consider a fantastically tiny subset of cases referred to it for review.

For an oversight body to provide a meaningful limit on Facebook’s power it would need to have considerably more meaty (i.e. legal) powers; be able to freely range across all aspects of Facebook’s business (not just review user generated content); and be truly independent of the adtech mothership — as well as having meaningful powers of enforcement and sanction.

So, in other words, it needs to be a public body, functioning in the public interest.

Instead, while Facebook applies its army of in house lawyers to fight actual democratic regulatory oversight and compliance, it has splashed out to fashion this bespoke bureaucracy that can align with its speech interests — handpicking a handful of external experts to pay to perform a content review cameo in its crisis PR drama.

Unsurprisingly, then, the FOB has mostly moved the needle in a speech-maximizing direction so far — while expressing some frustration at the limited deck of cards Facebook has dealt it.

Most notably, the Board still has a decision pending on whether to reverse Facebook’s indefinite ban on former US president Donald Trump. If it reverses that decision, Facebook users won’t have any recourse to appeal the restoration of Trump’s account.

The only available route would, presumably, be for users to report future Trump content to Facebook for violating its policies — and if Facebook refuses to take that stuff down, users could try to request a FOB review. But, again, there’s no guarantee the FOB will accept any such review requests. (Indeed, if the board chooses to reinstate Trump, that may make it harder for it to accept requests to review Trump content, at least in the short term, in the interests of keeping a diverse case file, so…)

How to ask for a review after content isn’t removed

To request the FOB review a piece of content that’s been left up, a user of Facebook/Instagram first has to report the content to Facebook/Instagram.

If the company decides to keep the content up, Facebook says the reporting person will receive an Oversight Board Reference ID (a ten-character string that begins with ‘FB’) in their Support Inbox — which they can use to appeal its ‘no takedown’ decision to the Oversight Board.

There are several hoops to jump through to make an appeal: Following on-screen instructions Facebook says the user will be taken to the Oversight Board website where they need to log in with the account to which the reference ID was issued.

They will then be asked to provide responses to a number of questions about their reasons for reporting the content (to “help the board understand why you think Facebook made the wrong decision”).

Once an appeal has been submitted, the Oversight Board will decide whether or not to review it. The board only selects a certain number of “eligible appeals” to review; and Facebook has not disclosed the proportion of requests the Board accepts for review vs submissions it receives — per case or on aggregate. So how much chance of submission success any user has for any given piece of content is an unknown (and probably unknowable) quantity.

Users who have submitted an appeal against content that was left up can check the status of their appeal via the FOB’s website — again by logging in and using the reference ID.

A further limitation is time: Facebook notes there’s a deadline for appealing decisions to the FOB.

“Bear in mind that there is a time limit on appealing decisions to the Oversight Board. Once the window to appeal a decision has expired, you will no longer be able to submit it,” it writes in its Help Center, without specifying how long users have to get their appeal in.

Daily Crunch: Amazon beats back union push

By Anthony Ha

Efforts to unionize an Amazon warehouse in Alabama appear to have failed, Facebook takes down fake review groups and a monkey plays Pong with its brain. This is your Daily Crunch for April 9, 2021.

The big story: Amazon beats back union push

Union organizers lost a much-publicized election at Amazon’s Bessemer, Alabama warehouse, with more than half of the 3,215 ballots cast going against joining the Retail, Wholesale and Department Store Union.

“It’s easy to predict the union will say that Amazon won this election because we intimidated employees, but that’s not true,” the company said in a blog post. “Our employees heard far more anti-Amazon messages from the union, policymakers, and media outlets than they heard from us. And Amazon didn’t win—our employees made the choice to vote against joining a union.”

However, RWDSU President Stuart Appelbaum suggested that there will “very likely” be a rerun election, and his organization is demanding “a comprehensive investigation over Amazon’s behavior in corrupting this election.”

The tech giants

Facebook takes down 16,000 groups trading fake reviews after another poke by UK’s CMA — The CMA has been leaning on tech giants to prevent their platforms from being used as marketplaces for selling fake reviews.

Startups, funding and venture capital

Watch a monkey equipped with Elon Musk’s Neuralink device play Pong with its brain — A macaque named Pager was eventually able to control the in-game action entirely with its mind via the Link hardware and embedded neural threads.

Mortgage is suddenly sexy as SoftBank pumps $500M into Better.com at a $6B valuation — The COVID-19 pandemic and historically low mortgage rates fueled an acceleration in online lending.

SnackMagic picks up $15M to expand from build-your-own snack boxes into a wider gifting marketplace — The company hit a $20 million revenue run rate in eight months and turned profitable in December.

Advice and analysis from Extra Crunch

So you want to raise a Series A — Kleiner Perkins’ Bucky Moore shares sector-agnostic advice.

How we dodged risks and raised millions for our open-source machine learning startup — Jorge Torres and Adam Carrigan of MindsDB tell their funding story.

Building the right team for a billion-dollar startup — From building out Facebook’s first office in Austin to putting together most of Quora’s team, Bain Capital Ventures managing director Sarah Smith has done a bit of everything when it comes to hiring.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

The 2022 Chevrolet Bolt EUV lowers the cost of entry for some of GM’s most advanced tech — The optional Super Cruise puts it on course to compete with the Tesla Model Y.

APKPure app contained malicious adware, say researchers — APKPure is a widely popular app for installing older or discontinued Android apps from outside of Google’s app store.

Last call for Detroit startups to apply for TechCrunch’s Detroit City Spotlight pitch-off — The deadline is today, April 9.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Facebook takes down 16,000 groups trading fake reviews after another poke by UK’s CMA

By Natasha Lomas

Facebook has removed 16,000 groups that were trading fake reviews on its platform after another intervention by the UK’s Competition and Markets Authority (CMA), the regulator said today.

The CMA has been leaning on tech giants to prevent their platforms being used as thriving marketplaces for selling fake reviews since it began investigating the issue in 2018 — pressuring both eBay and Facebook to act against fake review sellers back in 2019.

The two companies pledged to do more to tackle the insidious trade last year, after coming under further pressure from the regulator — which found that Facebook-owned Instagram was also a thriving hub of fake review trades.

The latest intervention by the CMA looks considerably more substantial than last year’s action — when Facebook removed a mere 188 groups and disabled 24 user accounts. Although it’s not clear how many accounts the tech giant has banned and/or suspended this time, it has removed orders of magnitude more groups. (We’ve asked.)

Facebook was contacted with questions but it did not answer what we asked directly, sending us this statement instead:

“We have engaged extensively with the CMA to address this issue. Fraudulent and deceptive activity is not allowed on our platforms, including offering or trading fake reviews. Our safety and security teams are continually working to help prevent these practices.”

Since the CMA has been raising the issue of fake review trading, Facebook has been repeatedly criticised for not doing enough to clean up its platforms, plural.

Today the regulator said the social media giant has made further changes to the systems it uses for “identifying, removing and preventing the trading of fake and/or misleading reviews on its platforms to ensure it is fulfilling its previous commitments”.

It’s not clear why it’s taken Facebook well over a year — and a number of high profile interventions — to dial up action against the trade in fake reviews. But the company suggested that the resources it has available to tackle the problem had been strained as a result of the COVID-19 pandemic and associated impacts, such as home working. (Facebook’s full year revenue increased in 2020 but so too did its expenses.)

According to the CMA, the changes Facebook has made to its system for combating traders of fake reviews include:

  • suspending or banning users who are repeatedly creating Facebook groups and Instagram profiles that promote, encourage or facilitate fake and misleading reviews
  • introducing new automated processes that will improve the detection and removal of this content
  • making it harder for people to use Facebook’s search tools to find fake and misleading review groups and profiles on Facebook and Instagram
  • putting in place dedicated processes to make sure that these changes continue to work effectively and stop the problems from reappearing

Again it’s not clear why Facebook would not have already been suspending or banning repeat offenders — at least, not if it was actually taking good faith action to genuinely quash the problem, rather than seeing if it could get away with doing the bare minimum.

Commenting in a statement, Andrea Coscelli, chief executive of the CMA, essentially makes that point, saying: “Facebook has a duty to do all it can to stop the trading of such content on its platforms. After we intervened again, the company made significant changes — but it is disappointing it has taken them over a year to fix these issues.”

“We will continue to keep a close eye on Facebook, including its Instagram business. Should we find it is failing to honour its commitments, we will not hesitate to take further action,” Coscelli added.

A quick search on Facebook’s platform for UK groups trading in fake reviews appears to return fewer obviously dubious results than when we checked in on this problem in 2019 and 2020. However, the results that were returned included a number of private groups, so it was not immediately possible to verify what content is being solicited from members.

We did also find a number of Facebook groups offering Amazon reviews intended for other European markets, such as France and Spain (and in one public group aimed at Amazon Spain we found someone offering a “fee” via PayPal for a review; see below screengrab) — suggesting Facebook isn’t applying the same level of attention to tackling fake reviews that are being traded by users in markets where it’s faced fewer regulatory pokes than it has in the UK.

Screengrab: TechCrunch

Twitch expands its rules against hate and abuse to include behavior off the platform

By Taylor Hatmaker

Twitch will start holding its streamers to a higher standard. The company just expanded its hate and harassment policy, specifying more kinds of bad behavior that break its rules and could result in a ban from the streaming service.

The news comes as Twitch continues to grapple with reports of abusive behavior and sexual harassment, both on the platform and within the company itself. In December, Twitch released an updated set of rules designed to take harassment and abuse more seriously, admitting that women, people of color and the LGBTQ community were impacted by a “disproportionate” amount of that toxic behavior on the platform.

Twitch’s policies now include serious offenses that could pose a safety threat, even when they happen entirely away from the streaming service. Those threats include violent extremism, terrorism, threats of mass violence, sexual assault and ties to known hate groups.

The company will also continue to evaluate off-platform behavior in cases that happen on Twitch, like an on-stream situation that leads to harassment on Twitter or Facebook.

“While this policy is new, we have taken action historically against serious, clear misconduct that took place off service, but until now, we didn’t have an approach that scaled,” the company wrote in a blog post, adding that investigating off-platform behavior requires additional resources to address the complexity inherent in those cases.

To handle reports under its broadened rules, Twitch created a dedicated email address (OSIT@twitch.tv) for reports about off-service behavior. The company says it has partnered with a third-party investigative law firm to vet the reports it receives.

Twitch cites its actions against former President Donald Trump as the most high profile instance of off-platform behavior resulting in enforcement. The company disabled Trump’s account following the attack on the U.S. Capitol and later suspended him indefinitely, citing fears that he could use the service to incite violence.

It’s hard to have a higher profile than the president, but Trump isn’t the only big time banned Twitch user. Last June, Twitch kicked one of its biggest streamers off of the platform without providing an explanation for the decision.

Going on a year later, no one seems to know why Dr. Disrespect got the boot from Twitch, though the company’s insistence that it only acts in cases with a “preponderance of evidence” suggests his violations were serious and well-corroborated.

 

UK’s Digital Markets Unit starts work on pro-competition reforms

By Natasha Lomas

A new UK public body launched today, tasked with helping regulate the most powerful companies in the digital sector to ensure competition thrives online and consumers of digital services have more choice and control over their data.

The Digital Markets Unit (DMU), which was announced in November last year — following a number of market reviews and studies examining concerns about the concentration of digital market power — does not yet have statutory powers itself but the government has said it will consult on the design of the new “pro-competition regime” this year and legislate to put the DMU on a statutory footing as soon as parliamentary time allows.

Concerns about the market power of adtech giants Facebook and Google are key drivers for the regulatory development.

💻 Our new Digital Markets Unit, launched today, will help make sure tech giants can’t exploit their market dominance to crowd out competition and stifle innovation online.

Find out more: https://t.co/PCBCYwuA3o pic.twitter.com/Ybvn81uuBK

— Competition & Markets Authority (@CMAgovUK) April 7, 2021

As a first job, the unit will look at how codes of conduct could work to govern the relationship between digital platforms and third parties such as small businesses which rely on them to advertise or use their services to reach customers — to feed into future digital legislation.

The role of powerful intermediary online gatekeepers is also being targeted by lawmakers in the European Union who proposed legislation at the end of last year which similarly aims to create a regulatory framework that can ensure fair dealing between platform giants and the smaller entities which do business under their terms.

The UK government said today that the DMU will take a sector neutral approach in examining the role of platforms across a range of digital markets, with a view to promoting competition.

The unit has been asked to work with the comms watchdog Ofcom, which the government named last year as its pick for regulating social media platforms under planned legislation due to be introduced this year (aka the Online Safety Bill).

While that forthcoming legislation is intended to regulate a very wide range of online harms that may affect consumers — from bullying and hate speech to child sexual exploitation and other speech-related issues, raising plenty of controversy and specific concerns about the implications for privacy and security — the DMU’s focus is on business impacts and consumer controls, which may also have implications for competition in digital markets.

As part of its first work program, the government said the secretary of state for digital has asked the DMU to work with Ofcom to look specifically at how a code would govern the relationships between platforms and content providers such as news publishers — “including to ensure they are as fair and reasonable as possible”, as its press release puts it.

This suggests the DMU will be taking a considered look at recent legislation passed in Australia — which makes it mandatory for platforms to negotiate with news publishers to pay for reuse of their content.

Earlier this year, the head of the UK’s Competition and Markets Authority (CMA), which the DMU will sit within, told the BBC that Australia’s approach of having a backstop of mandatory arbitration if commercial negotiations between tech giants and publishers fail is a “sensible” approach.

The DMU will also work closely with the CMA’s enforcement division — which currently has a number of open investigations into tech giants, including considering complaints against Apple and Google; and an in-depth probe of Facebook’s Giphy acquisition.

Other UK regulators the government says the DMU will work closely with include the data protection watchdog (the ICO) and the Financial Conduct Authority.

It also said the unit will coordinate with international partners, given that digital competition is global in nature — adding that the government is already discussing its approach through bilateral engagement and as part of its G7 presidency.

“The Digital Secretary will host a meeting of digital and tech ministers in April as he seeks to build consensus for coordination on better information sharing and joining up regulatory and policy approaches,” it added.

The DMU will be led by Will Hayter, who takes up an interim head post in early May following a stint at the Cabinet Office working on Brexit transition policy. Prior to that he worked for several years at the CMA and also at Ofcom, among other roles in regulatory policy.

 

Daily Crunch: Facebook faces questions over data breach

By Anthony Ha

European regulators have questions about a Facebook data breach, Clubhouse adds payments and a robotics company has SPAC plans. This is your Daily Crunch for April 6, 2021.

The big story: Facebook faces questions over data breach

A data breach involving personal data (such as email addresses and phone numbers) of more than 500 million Facebook accounts came to light over the weekend thanks to a story in Business Insider. Although Facebook said the breach was related to a vulnerability that was “found and fixed” in August 2019, the Irish Data Protection Commission — Facebook’s lead data regulator in the European Union — suggested that it’s seeking the “full facts” in the matter.

“The newly published dataset seems to comprise the original 2018 (pre-GDPR) dataset combined with additional records, which may be from a later period,” said deputy commissioner Graham Doyle in a statement. “A significant number of the users are EU users. Much of the data appears to have been scraped some time ago from Facebook public profiles.”

In addition, it looks like EU regulators may also look into Facebook’s acquisition of customer service company Kustomer.

The tech giants

Apple launches an app for testing devices that work with ‘Find My’ — Find My Certification Asst. is designed for use by Made for iPhone Licensees who need to test their accessories’ interoperability with Apple’s Find My network.

Google Cloud joins the FinOps Foundation — The FinOps Foundation is a relatively new open-source foundation that aims to bring together companies in the “cloud financial management” space to establish best practices and standards.

Facebook confirms ‘test’ of Venmo-like QR codes for person-to-person payments in US — The feature will allow a user to scan a friend’s code with their smartphone’s camera to send or request money.

Startups, funding and venture capital

Clubhouse launches payments so creators can make money — It’s like a virtual tip jar, or a Clubhouse-branded version of Venmo.

Robotic exoskeleton maker Sarcos announces SPAC plans — The deal could potentially value the robotic exoskeleton maker and blank check company at a combined $1.3 billion.

Hipmunk’s founders launch Flight Penguin to bring back Hipmunk-style flight search — I’ve missed Hipmunk.

Advice and analysis from Extra Crunch

Giving EV batteries a second life for sustainability and profit — Automakers and startups are eying ways to reuse batteries before they’re sent for recycling.

Will Topps’ SPAC-led debut expand the bustling NFT market? — Topps and its products are popular with the same set of folks who are very excited about creating rare digital items on particular blockchains.

LG’s exit from the smartphone market comes as no surprise — Why didn’t it happen sooner?

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

GM to build an electric Chevrolet Silverado pickup truck with more than 400 miles of range — GM is positioning the full-sized pickup for both consumer and commercial markets.

Putting Belfast on the TechCrunch map — TechCrunch’s European Cities Survey 2021 — This is the follow-up to the huge survey of investors we’ve done over the last six or more months, largely in capital cities.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Daily Crunch: The Supreme Court sides with Google in Oracle suit

By Anthony Ha

The Supreme Court announces several tech-related rulings, LG will shut down its smartphone business and we take a deep dive into the story of StockX. This is your Daily Crunch for April 5, 2021.

The big story: The Supreme Court sides with Google in Oracle suit

The U.S. Supreme Court announced a couple of tech-related rulings today. In one, it overturned Oracle’s victory in its copyright battle with Google, which would have otherwise required Google to pay Oracle $8 billion for incorporating pieces of Oracle’s Java software language into the Android mobile operating system.

“In reviewing that decision, we assume, for argument’s sake, that the material was copyrightable,” wrote Justice Stephen Breyer. “But we hold that the copying here at issue nonetheless constituted a fair use. Hence, Google’s copying did not violate the copyright law.”

In addition, the court vacated a ruling declaring that then-President Donald Trump had violated the First Amendment by blocking critics on Twitter. In his opinion on the case, Justice Clarence Thomas argued that companies like Facebook and Google are “at bottom communications networks, and they ‘carry’ information from one user to another” and can therefore be regulated in the same way as telecom carriers.

The tech giants

LG is shutting down its smartphone business worldwide — LG said it will focus its resources in “growth areas” such as electric vehicle components.

Labor relations board sides with Amazon employees over firing — Before being fired last year, Emily Cunningham and Maren Costa had been among the company’s most outspoken critics on staff.

Spotify opens a second personalized playlist to sponsors, after Discover Weekly in 2019 — On Repeat is now open to advertising sponsorships.

Startups, funding and venture capital

India’s Swiggy nears $5B valuation in new $800M fundraise — Swiggy is preparing to expand its business after cutting its workforce to navigate the pandemic.

Knotel co-founder leaves company, describes investor Newmark as ‘a stalking horse’ — The startup filed for bankruptcy earlier this year, its assets acquired by investor and commercial real estate brokerage Newmark.

Byju’s acquires Indian tutor Aakash for nearly $1B — Aakash is a 33-year-old chain of physical coaching centers.

Advice and analysis from Extra Crunch

The StockX EC-1 — Now valued at $2.8 billion, StockX has facilitated over 10 million transactions.

Chinese startups rush to bring alternative protein to people’s plates — 2020 could well have been the dawn of alternative protein in China.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

What happens to your NFTs and crypto assets after you die? — A new study finds that only one in four consumers have someone in their life who knows all of their passwords and account details.

Fueled by pandemic, contactless mobile payments to surpass half of all smartphone users in US by 2025 — According to a recent report by analyst firm eMarketer, in-store mobile payments usage grew 29% last year in the U.S.

Start your engines, TechCrunch is (virtually) headed to Detroit — Mark April 15 on your calendars!

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Daily Crunch: Facebook makes it easier to view a non-algorithmic News Feed

By Anthony Ha

Facebook has some thoughts and updates about its News Feed, Siri gets some new voices and Tonal becomes a unicorn. This is your Daily Crunch for March 31, 2021.

The big story: Facebook makes it easier to view a non-algorithmic News Feed

Facebook highlighted features today that should make it easier for users to see a version of the News Feed that isn’t shaped by the company’s algorithms. These include a Favorites view that displays posts from up to 30 of your favorite friends and Pages, as well as a Most Recent view, which just shows posts in chronological order. Some of these options existed previously, but they’ll now be easily accessible through a new Feed Filter Bar.

At the same time, the company’s VP of Global Affairs, Nick Clegg, pushed back against criticism of the company’s algorithmic News Feed, saying that personalization is common and useful across the web, though he added, “It would clearly be better if these [content] decisions were made according to frameworks agreed by democratically accountable lawmakers.”

Speaking of content decisions, Facebook also cautioned Donald Trump’s daughter-in-law Lara Trump today for posting an interview with the former president, who has been banned from the social network.

The tech giants

Apple adds two brand new Siri voices and will no longer default to a female or male voice in iOS — This means that every person setting up Siri will choose a voice for themselves.

Instagram officially launches Remix on Reels, a TikTok Duets-like feature — Remix offers a way to record your Reels video alongside a video from another user.

Spotify adds three new types of personalized playlists with launch of ‘Spotify Mixes’ — Your Spotify Mixes will include artist mixes, genre mixes and decade mixes.

Startups, funding and venture capital

Strength-training startup Tonal crosses unicorn status after raising $250M — To date, the at-home fitness tech startup has raised $450 million.

Apple invests $50M into music distributor UnitedMasters alongside a16z and Alphabet —  The focus of UnitedMasters is to provide artists with a direct pipeline to data around the way that fans are interacting with their content and community.

Diversity-focused Harlem Capital raises $134M — Apparently 61% of Harlem Capital’s Fund I portfolio companies are led by Black or Latinx executives, while 43% are led exclusively by women.

Advice and analysis from Extra Crunch

Five machine learning essentials nontechnical leaders need to understand — For engineering and team leaders without an ML background, the incredible pace of change can feel overwhelming and intimidating.

What to make of Deliveroo’s rough IPO debut — After a lackluster IPO pricing run, shares of Deliveroo are lower today, marking a disappointing debut for the hot delivery company.

Embedded procurement will make every company its own marketplace — Merritt Hummer of Bain Capital Ventures argues that with embedded procurement, businesses will buy things they need through vertical B2B apps.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

Report finds going remote made workplaces more hostile for already marginalized groups — The Project Include report is based on a survey of about 2,800 people and interviews with tech workers and subject matter experts in numerous countries and industries.

The Weeknd will sell an unreleased song and visual art via NFT auction — Abel Tesfaye, the Super Bowl-headlining musician known as The Weeknd, is the latest artist to embrace the excitement around NFTs.

Here’s what you don’t want to miss tomorrow at TC Early Stage 2021 — The event will include a wide range of presentations that span the startup ecosystem.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Breaking up Big Tech would be a mistake

By Annie Siebert
T. Alexander Puutio Contributor
T. Alexander Puutio is an adjunct professor at NYU Stern and he dedicates his research at University of Turku to the interplay between AI, tech, international trade and development. All views expressed are his own.

It seems safe to say that our honeymoon with Big Tech is officially over.

After years of questionable data-handling procedures, arbitrary content management policies and outright anti-competitive practices, it is only fair that we take a moment to rethink our relationship with the industry.

Sadly, most of the ideas that have gathered mainstream attention — such as the calls to break up Big Tech — have been knee-jerk responses that smack more of retributionist fantasies than sound economic thinking.

Instead of chasing sensationalist non-starters and zero-sum solutions, we should be focused on ensuring that Big Tech grows better as it grows bigger, by establishing a level playing field for startups and competitors in proprietary digital markets.

We can find inspiration on how to do just that by taking a look at how 20th-century lawmakers reined in the railroad monopolies, which similarly turned from darlings of industry to destructive forces of stagnation.

We’ve been here before

More than a century ago, a familiar story of a nation coming to terms with the unanticipated effects of technological disruption was unfolding across a rapidly industrializing United States.

While the first full-scale steam locomotive debuted in 1804, it took until 1868 for more powerful and cargo-friendly American-style locomotives to be introduced.

The more efficient and cargo-friendly locomotives caught on like wildfire, and soon steel and iron pierced through mountains and leaped over gushing rivers to connect Americans from coast to coast.

Soon, railroad mileage tripled and a whopping 77% of all intercity traffic and 98% of passenger business would be running on rails, ushering in an era of cost-efficient transcontinental travel that would recast the economic fortunes of the entire country.

As is often the case with disruptive technologies, early success would come with a heavy human cost.

From the very beginning, abuse and exploitation ran rampant in the railroad industry, with up to 3% of the labor force suffering injuries or dying during the course of an average year.

Railroad trust owners soon became key constituents of the widely maligned group of businessmen colloquially known as robber barons, whose corporations devoured everything in their path and made life difficult for competitors and new entrants in particular.

The railroad proprietors achieved this by maintaining carefully constructed walled gardens, allowing them to run competitors into the ground by means of extortion, exclusion and everything in between.

While these methods proved wildly successful for railroad owners, the rest of society languished under stifled competition and an utter lack of concern for consumers’ interests.

Everything old is new again

Learning from past experiences certainly doesn’t seem to be humankind’s strong suit.

In fact, most of our concerns with the tech industry are mirror images of the objections 20th-century Americans had against the railroad trusts.

Similar to the robber barons, Alphabet, Amazon, Apple, Facebook, Twitter, et al., have come to dominate the major thoroughfares of trade in a fashion that leaves little space for competitors and startups.

By instating double-digit platform fees, establishing strict limitations on payment processing protocols, and jealously hoarding proprietary data and APIs, Big Tech has erected artificial barriers to entry that make replicating their success all but impossible.

Over the past years, tech giants have also taken to cannibalizing third-party solutions by providing private-label versions — à la AmazonBasics — to the point where Big Tech’s clients are finding themselves undercut and outplayed by the platform-holders themselves.

Given the above, it is not surprising that the pace at which tech startups are created in the US has been declining for years.

In fact, VC veterans such as Albert Wenger have called attention to the “kill zone” around Big Tech for years, and if we are to reinvigorate the competitive fringe around our large tech conglomerates, something has to be done fast.

Why we need to stop talking about breaking up Big Tech

The 20th-century playbook for taming monopolistic railroad trusts offers several helpful lessons for dealing with Big Tech.

As a first step, Congress created the Interstate Commerce Commission (ICC) in 1887 and tasked it with administering reasonable and just rates for access to proprietary railroad networks.

Due to partisan politicking, the ICC proved relatively toothless, however. It wasn’t until Congress passed the 1906 Hepburn Act, which separated the function of transportation from the ownership of the goods being shipped, that we started seeing true progress.

By disallowing self-dealing and double-dipping in proprietary platforms, Congress succeeded in opening up access on equal terms both to existing competitors and startups alike, making a once-unnavigable thicket of exploitative practices into the metallic backbone of American prosperity that we know today.

This could never have been achieved by simply breaking the railroad trusts into smaller pieces.

In fact, when it comes to platforms and networks, bigger often is better for everyone involved thanks to network effects and several other factors that conspire against smaller platforms.

Most importantly, when access and interoperability rules are done right, bigger platforms can sustain wider and wider constellations of startups and third parties, helping us grow our economic pie instead of shrinking it.

Making digital markets work for startups

In our post-pandemic economy, our attention should be on helping tech platforms grow better as they grow bigger, instead of cutting them down to size.

Ensuring that startups and competitors can access these platforms on equitable terms and at fair prices is a necessary first step.

There are numerous other tangible actions policymakers can take today. For example, rewriting the rules on data portability, pushing for wider standardization and interoperability across platforms, and reintroducing net neutrality would go a long way in addressing what ails the industry today.

With President Joe Biden’s recent nod toward “Amazon’s Antitrust Antagonist” Lina Khan as the next commissioner of the Federal Trade Commission, these changes suddenly seem more likely than ever.

In the end, all of us would stand to benefit from a robust fringe of startups and competitors that thrive on the shoulders of giants and the platforms they have made.

Daily Crunch: Google starts testing its cookie alternative

By Anthony Ha

Google tries out new ad targeting technology, PayPal adds cryptocurrency support and Substack raises additional funding. This is your Daily Crunch for March 30, 2021.

The big story: Google starts testing its cookie alternative

Google announced today that it has begun rolling out a new technology called Federated Learning of Cohorts (FLoC) in a developer trial. FLoC is meant to serve as an alternative to personally identifiable cookies (which are being phased out by Google and other platforms), with Google analyzing your web browsing behavior and grouping you with other people who have similar interests, for ad-targeting purposes.

The trial is starting out in a number of geographies, including the United States — but not in Europe, where there are concerns about the technology’s compliance with the GDPR.

The tech giants

YouTube tests hiding dislike counts on videos — The company says it will run a “small experiment” with different designs that hide dislike counts, but not the “dislike” button itself.

Ballot counting for Amazon’s historic union vote starts today — Amazon’s warehouse in Bessemer, Alabama has become ground zero for one of the most important labor efforts in modern American history.

PayPal’s new feature allows US consumers to check out using cryptocurrency — The feature expands on PayPal’s current investments in the cryptocurrency market.

Startups, funding and venture capital

Celebrity video request site Cameo reaches unicorn status with $100M raise — Cameo has been building a good deal of steam in recent years, but it also got a major boost amidst the pandemic.

Substack confirms $65M raise, promises to ‘rapidly’ expand its financial backing of newly independent writers — Substack did not provide material new growth metrics, instead saying that it has “more than half a million people” paying for writers on its network.

NFT art marketplace SuperRare closes $9M Series A — SuperRare launched its art platform in 2018; since then, it has differentiated itself by maintaining a closed, early-access platform and closely curating the art that’s sold.

Advice and analysis from Extra Crunch

The Tonal EC-1 — Remember our deep dives into the history, businesses and growth of Patreon, Niantic, Roblox, Kobalt and Unity? We’re bringing the format back with an in-depth, multi-part look at fitness startup Tonal.

Is Substack really worth $650M? — More thoughts on Substack’s finances.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

A trove of imported console games vanish from Chinese online stores — A handful of grey market videogame console vendors on Taobao stopped selling and shipping this week.

Applications for Startup Battlefield at TC Disrupt 2021 are now open — TechCrunch is on the hunt for game-changing and ground-breaking startups from around the globe to feature in Startup Battlefield during TechCrunch Disrupt 2021 this fall.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Groups Call for Ethical Guidelines on Location-Tracking Tech

By Sidney Fussell
The Locus Charter asks companies to commit to 10 principles, including minimizing data collection and actively seeking consent from users.

Daily Crunch: Zuckerberg defends Facebook over role in Capitol attack

By Anthony Ha

Tech executives face Congress, Spotify gets a redesign and Snapchat is developing a new Remix feature. This is your Daily Crunch for March 25, 2021.

The big story: Zuckerberg defends Facebook over role in Capitol attack

Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey and Google’s Sundar Pichai appeared today before the House Energy and Commerce Committee for a hearing on misinformation, particularly the role their platforms may have played in the Capitol attack by allowing lies and extremism to spread.

In his opening statement, Zuckerberg advocated for reforms to Section 230 and said that Facebook “did our part” to protect last year’s presidential election, putting the blame for the Capitol riots squarely on former President Donald Trump.

Pressed by Rep. Mike Doyle (D-PA) on whether Facebook bears some responsibility, Zuckerberg replied, “I think the responsibility lies with the people who took the actions to break the law and do the insurrection. Secondarily, also with the people who spread that content, including the president but others as well, with repeated rhetoric over time, saying that the election was rigged and encouraging people to organize, I think that those people bear the primary responsibility as well.”

The tech giants

Spotify rolls out redesigned desktop and web apps — Overall, the update gives the Spotify app a more streamlined, less cluttered look and feel.

Snapchat is developing its own take on TikTok Duets with a new ‘Remix’ feature — This feature will allow users to create new content using their friends’ Snaps.

Startups, funding and venture capital

PPRO extends latest round to $270M, adding JPMorgan and Eldridge to grow its localized payments platform — PPRO’s core product is a set of APIs that e-commerce companies can integrate into their check-outs to accept payments in whatever local methods and currencies consumers prefer.

Notarize raises $130M, tripling valuation on the back of 600% YoY revenue growth — When the world shifted toward virtual a year ago, one service in particular saw heated demand: remote online notarization.

Everlywell acquires two healthcare companies and forms parent Everly Health — The new parent entity will now offer services including at-home lab testing kits and education, population-scale testing through a U.S.-wide clinician network, telehealth and a payer-supported/enterprise self-collected lab test.

Advice and analysis from Extra Crunch

Automakers, suppliers and startups see growing market for in-vehicle AR/VR applications — A new battle for market share is emerging inside vehicles.

How VC and private equity funds can launch portfolio-acceleration platforms — Almost every private equity and venture capital investor now advertises that they have a platform to support their portfolio companies.

Will fading YOLO sentiment impact Robinhood, Coinbase and other trading platforms? — What happens to hot fintech startups that have benefited from a rise in consumer trading activity if regular folks lose interest in financial wagers?

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

FatFace tells customers to keep its data breach ‘strictly private’ — Clothing giant FatFace had a data breach, but it doesn’t want you to tell anyone about it.

EV makers oppose delay to automotive emissions penalty increase — Electric vehicle manufacturers are pushing back against a decision to delay penalty increases for automakers who fail to meet fuel efficiency standards.

New York moves to legalize recreational marijuana — New York State officials struck a deal with Gov. Andrew Cuomo to allow recreational use of cannabis.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

US privacy, consumer, competition and civil rights groups urge ban on ‘surveillance advertising’

By Natasha Lomas

Ahead of another big tech vs Congress ‘grab your popcorn’ grilling session, scheduled for March 25 — when US lawmakers will once again question the CEOs of Facebook, Google and Twitter on the unlovely topic of misinformation — a coalition of organizations across the privacy, antitrust, consumer protection and civil rights spaces has called for a ban on “surveillance advertising”, further amplifying the argument that “big tech’s toxic business model is undermining democracy”.

The close to 40-strong coalition behind this latest call to ban ‘creepy ads’ which rely on the mass tracking and profiling of web users in order to target them with behavioral ads includes the American Economic Liberties Project, the Campaign for a Commercial Free Childhood, the Center for Digital Democracy, the Center for Humane Technology, Epic.org, Fair Vote, Media Matters for America, the Tech Transparency Project and The Real Facebook Oversight Board, to name a few.

“As leaders across a broad range of issues and industries, we are united in our concern for the safety of our communities and the health of democracy,” they write in the open letter. “Social media giants are eroding our consensus reality and threatening public safety in service of a toxic, extractive business model. That’s why we’re joining forces in an effort to ban surveillance advertising.”

The coalition is keen to point out that less toxic non-tracking alternatives (like contextual ads) exist, while arguing that greater transparency and oversight of adtech infrastructure could help clean up a range of linked problems, from junk content and rising conspiracism to ad fraud and denuded digital innovation.

“There is no silver bullet to remedy this crisis – and the members of this coalition will continue to pursue a range of different policy approaches, from comprehensive privacy legislation to reforming our antitrust laws and liability standards,” they write. “But here’s one thing we all agree on: It’s time to ban surveillance advertising.”

“Big Tech platforms amplify hate, illegal activities, and conspiracism — and feed users increasingly extreme content — because that’s what generates the most engagement and profit,” they warn.

“Their own algorithmic tools have boosted everything from white supremacist groups and Holocaust denialism to COVID-19 hoaxes, counterfeit opioids and fake cancer cures. Echo chambers, radicalization, and viral lies are features of these platforms, not bugs — central to the business model.”

The coalition also warns over surveillance advertising’s impact on the traditional news business, noting that shrinking revenues for professional journalism rain more harm down on the (genuine) information ecosystem democracies need to thrive.

The potshots are well rehearsed at this point although it’s an oversimplification to blame the demise of traditional news on tech giants so much as ‘giant tech’: aka the industrial disruption wrought by the Internet making so much information freely available. But dominance of the programmatic adtech pipeline by a couple of platform giants clearly doesn’t help. (Australia’s recent legislative answer to this problem is still too new to assess for impacts but there’s a risk its news media bargaining code will merely benefit big media and big tech while doing nothing about the harms of either industry profiting off of outrage.)

“Facebook and Google’s monopoly power and data harvesting practices have given them an unfair advantage, allowing them to dominate the digital advertising market, siphoning up revenue that once kept local newspapers afloat. So while Big Tech CEOs get richer, journalists get laid off,” the coalition warns, adding: “Big Tech will continue to stoke discrimination, division, and delusion — even if it fuels targeted violence or lays the groundwork for an insurrection — so long as it’s in their financial interest.”

Among a laundry list of harms the coalition is linking to the dominant ad-based online business models of tech giants Facebook and Google is the funding of what they describe as “insidious misinformation sites that promote medical hoaxes, conspiracy theories, extremist content, and foreign propaganda”.

“Banning surveillance advertising would restore transparency and accountability to digital ad placements, and substantially defund junk sites that serve as critical infrastructure in the disinformation pipeline,” they argue, adding: “These sites produce an endless drumbeat of made-to-go-viral conspiracy theories that are then boosted by bad-faith social media influencers and the platforms’ engagement-hungry algorithms — a toxic feedback loop fueled and financed by surveillance advertising.”

Other harms they point to are the risks posed to public health by platforms’ amplification of junk/bogus content such as COVID-19 conspiracy theories and vaccine misinformation; the risk of discrimination through unfairly selective and/or biased ad targeting, such as job ads that illegally exclude women or ethnic minorities; and the perverse economic incentives for ad platforms to amplify extremist/outrageous content in order to boost user engagement with content and ads, thereby fuelling societal division and driving partisanship as a byproduct of the fact platforms benefit financially from more content being spread.

The coalition also argues that the surveillance advertising system is “rigging the game against small businesses” because it embeds platform monopolies — which is a neat counterpoint to tech giants’ defensive claim that creepy ads somehow level the playing field for SMEs vs larger brands.

“While Facebook and Google portray themselves as lifelines for small businesses, the truth is they’re simply charging monopoly rents for access to the digital economy,” they write, arguing that the duopoly’s “surveillance-driven stranglehold over the ad market leaves the little guys with no leverage or choice” — opening them up to exploitation by big tech.

The current market structure — with Facebook and Google controlling close to 60% of the US ad market — is thus stifling innovation and competition, they further assert.

“Instead of being a boon for online publishers, surveillance advertising disproportionately benefits Big Tech platforms,” they go on, noting that Facebook made $84.2BN in 2020 ad revenue and Google made $134.8BN off advertising “while the surveillance ad industry ran rife with allegations of fraud”.

The campaign being kicked off is by no means the first call for a ban on behavioral advertising, but the number of signatories behind this one is a sign of the scale of the momentum building against a data-harvesting business model that has shaped the modern era and allowed a couple of startups to metamorphose into society- and democracy-denting giants.

That looks important, as US lawmakers are now paying close attention to big tech impacts — and have a number of big tech antitrust cases actively on the table — although it was European privacy regulators who were among the first to sound the alarm over microtargeting’s abusive impacts and risks for democratic societies.

Back in 2018, in the wake of the Facebook data misuse and voter targeting scandal involving Cambridge Analytica, the UK’s ICO called for an ethical pause on the use of online ad tools for political campaigning — penning a report entitled Democracy Disrupted? Personal information and political influence.

It’s no small irony that the self-same regulator has so far declined to take any action against the adtech industry’s unlawful use of people’s data — despite warning in 2019 that behavioral advertising is out of control.

The ICO’s ongoing inaction seems likely to have fed into the UK government’s decision that a dedicated unit is required to oversee big tech.

In recent years the UK has singled out the online ad space for antitrust concern — saying it will establish a pro-competition regulator to tackle big tech’s dominance, following a market study of the digital advertising sector carried out in 2019 by its Competition and Markets Authority which reported substantial concerns over the power of the adtech duopoly.

Last month, meanwhile, the European Union’s lead data protection supervisor urged not a pause but a ban on targeted advertising based on tracking internet users’ digital activity — calling on regional lawmakers to incorporate the lever into a major reform of digital services rules, which is intended to boost operators’ accountability, among other goals.

The European Commission’s proposal had avoided going so far. But negotiations over the Digital Services Act and Digital Markets Act are ongoing.

Last year the European Parliament also backed a tougher stance on creepy ads. Again, though, the Commission’s framework for tackling online political ads does not suggest anything so radical — with EU lawmakers pushing for greater transparency instead.

It remains to be seen what US lawmakers will do, but with US civil society organizations joining forces to amplify an anti-ad-targeting message, there is rising pressure on them to clean up the toxic adtech in their own backyard.

Commenting in a statement on the coalition’s website, Zephyr Teachout, an associate professor of law at Fordham Law School, said: “Facebook and Google possess enormous monopoly power, combined with the surveillance regimes of authoritarian states and the addiction business model of cigarettes. Congress has broad authority to regulate their business models and should use it to ban them from engaging in surveillance advertising.”

“Surveillance advertising has robbed newspapers, magazines, and independent writers of their livelihoods and commoditized their work — and all we got in return were a couple of abusive monopolists,” added David Heinemeier Hansson, creator of Ruby on Rails, in another supporting statement. “That’s not a good bargain for society. By banning this practice, we will return the unique value of writing, audio, and video to the people who make it rather than those who aggregate it.”

With US policymakers paying increasingly close attention to adtech, it’s interesting to see Google is accelerating its efforts to replace support for individual-level tracking with what it’s branded as a ‘privacy-safe’ alternative (FLoC).

Yet the tech it has proposed via its Privacy Sandbox will still enable groups (cohorts) of web users to be targeted by advertisers, with ongoing risks of discrimination, the targeting of vulnerable groups of people and societal-scale manipulation — so lawmakers will need to pay close attention to the detail of the ‘Privacy Sandbox’ rather than Google’s branding.

“This is, in a word, bad for privacy,” warned the EFF, writing about the proposal back in 2019. “A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate.”

“FLoC is the opposite of privacy-preserving technology,” it added. “Today, trackers follow you around the web, skulking in the digital shadows in order to guess at what kind of person you might be. In Google’s future, they will sit back, relax, and let your browser do the work for them.”
