
Europe sets out plan to boost data reuse and regulate ‘high risk’ AIs

By Natasha Lomas

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission President Ursula von der Leyen has described as ‘A Europe fit for the Digital Age’.

It could also be summed up as a “scramble for AI,” with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.

Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.

Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”

The top-line proposals are:

AI

  • Rules for “high risk” AI systems such as in health, policing, or transport, requiring such systems are “transparent, traceable and guarantee human oversight.”
  • A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination.”
  • Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys.
  • A “broad debate” on the circumstances in which the use of remote biometric identification could be justified.
  • A voluntary labelling scheme for lower risk AI applications.
  • The creation of an EU governance structure to ensure a framework for compliance with the rules and to avoid fragmentation across the bloc.

Data

  • A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules.” 
  • A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation.
  • Support for cloud infrastructure platforms and systems to underpin the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy, energy-efficient cloud infrastructures.
  • Sector-specific actions to build European data spaces focused on areas such as industrial manufacturing, the green deal, mobility or health.

The full data strategy proposal can be found here, while the Commission’s white paper on AI “excellence and trust” is here.

Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.

A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.

Tech for good

At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.

The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.

The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.

The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper

Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding: “The point obviously is to create trust, rather than fear.”

She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy with additional proposals still to be set out.

“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First, that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.

“Second, that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale-up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”

Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.

“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.

Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.

“More than ever a green transition and digital transition goes hand in hand.”

On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.

“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.

“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”

“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”

Trustworthy artificial intelligence

On AI, Vestager said the major point of the plan is “to build trust” — via a dual push to create what she called an “ecosystem of excellence” and an “ecosystem of trust”.

The first piece includes a push by the Commission to stimulate funding, including in R&D, and to support research, such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.

On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.

To scope this, the Commission’s approach will focus on sectors where such risks might apply — such as energy and recruitment.

If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.

The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.

Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.

If an AI product or service is not identified as high risk, Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.

In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.

“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”

“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”

She too stressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.

She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.

“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”

“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”

Towards a rights-respecting common data space

The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.

Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.

Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.

“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space”, aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.

The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.

The Commission views GDPR as a major success story by virtue of how it’s exported conversations about EU digital standards to a global audience.

But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.

The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.

Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.

Controversy has fast followed such developments, including around issues such as proportionality and the question of consent to legally process people’s data — both under the GDPR and in light of EU fundamental privacy rights, as well as those set out in the European Convention on Human Rights.

Only this month a Dutch court ordered the state to cease use of a blackbox algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.

The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.

Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)

The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.

But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.

It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.

Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI“.

Vestager, meanwhile, has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and of Europe having its own flavor of AI that must “serve humans” and have “a purpose”.

“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”

At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.

Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or for automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.

“With this white paper the Commission is launching a debate on the specific circumstances — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.

The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU.

Do you want to know more on the EU’s digital strategy?
Use #DigitalEU to share your questions and we will ask them to Margrethe Vestager this Thursday. pic.twitter.com/I90hCR6Gcz

— European Commission 🇪🇺 (@EU_Commission) February 18, 2020

Platform liability

There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.

That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.

During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree on an acceptable way forward “we will have to regulate” — saying it’s not up to European society to adapt to the platforms but for them to adapt to the EU.

“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”

Internal market commissioner, Thierry Breton


Indian police open case against hundreds in Kashmir for using VPN

By Manish Singh

Local authorities in India-controlled Kashmir have opened a case against hundreds of people who used virtual private networks (VPNs) to circumvent a social media ban in the disputed Himalayan region in a move that has been denounced by human rights and privacy activists.

Tahir Ashraf, who heads the police cyber division in Srinagar, said on Tuesday that the authority had identified and was probing hundreds of suspected users who he alleged misused social media to promote “unlawful activities and secessionist ideology.”

On Monday, the police said they had also seized “a lot of incriminating material” under the Unlawful Activities Prevention Act (UAPA), the nation’s principal counter-terrorism law. Those found guilty could be jailed for up to seven years.

“Taking a serious note of misuse of social media, there have been continuous reports of misuse of social media sites by the miscreants to propagate the secessionist ideology and to promote unlawful activities,” the region’s police said in a statement.

The move comes weeks after the Indian government restored access to several hundred websites, including some shopping websites such as Amazon India and Flipkart and select news outlets. Facebook, Twitter and other social media services remain blocked, and mobile data speeds remain capped at 2G speeds.

One analysis found that 126 of 301 websites that had been unblocked were only usable to “some degree.” To bypass the censorship on social media and access news websites, many in the disputed region, home to more than 7 million people, began using VPN services.

India banned internet access in Jammu and Kashmir in early August last year after New Delhi revoked Kashmir’s semi-autonomous status. The Indian government said the move was justified to maintain calm in the region — months later India’s apex court criticized the government for imposing a blanket internet ban for an indefinite period.

“The Government of India has almost total control over what information is coming out of the region,” said Avinash Kumar, executive director of human rights campaign group Amnesty International India.

“While the Government has a duty and responsibility to maintain law and order in the state, filing cases under counter-terrorism laws such as UAPA over vague and generic allegations and blocking social media sites – is not the solution. The Indian government needs to put humanity first and let the people of Kashmir speak,” he urged the government.

Mishi Choudhary, executive director of New Delhi-based Software Law and Freedom Centre, said that the authority did not need to chase people who are using VPNs, and should restore internet access like any other democratic society.

“Any alleged rumors can be addressed by putting out accurate and more information through the same social media platforms. Content-based restrictions on speech can only be allowed within the restrictions established by the Constitution and not in an ad hoc manner,” she said.

Vodafone Idea shares tumble 23% after India orders it to pay billions in dues

By Manish Singh

Shares of Vodafone Idea fell by more than 23% on Friday after India’s apex court ordered the country’s second-largest telecom operator and Airtel, the third-largest telecom network, to arrange and pay billions of dollars in dues in a month.

In a strongly worded judgement, the Supreme Court rejected the telecom networks’ application to defer paying the historic $13 billion in levies owed to the government. “This is pure contempt, 100% contempt,” Justice Arun Mishra told lawyers.

The order today, which may result in U.K. telecom giant Vodafone’s local joint venture’s collapse, saw Vodafone Idea’s shares plunge by 23.21%. Vodafone Idea had more than 336 million subscribers as of November last year, according to official figures (PDF).

The company did not respond to a request for comment.

The Supreme Court’s order was followed by direction from the Department of Telecoms to pay the dues by the end of Friday. The local ministry of telecommunications also ordered the telecom companies to keep their relevant offices open on Saturday to “facilitate” payments and answer queries.

In October, the Supreme Court ruled that Vodafone Idea and Bharti Airtel, as well as several other operators, including some that are no longer operational, will have to pay the government within 90 days a combined $13 billion in adjusted gross revenue as spectrum usage charges and license fees.

The Indian government and telecom operators have for a decade disputed how gross revenue should be calculated. The government has mandated the license and spectrum fee to be paid by operators as a share of their revenue. Telcos have argued that only core income accrued from use of spectrum should be considered for calculation of adjusted gross revenue.

Commenting on the ruling, Airtel said that it would pay $1.3 billion by next week and the remainder (about $5 billion) before March 17, when the Supreme Court hears the case again. Its shares rose 4.69% on Friday, as the telecom operator is in a better position to pay and stands to become one of only two major networks left to compete with Reliance Jio, the top network run by India’s richest man, Mukesh Ambani.

In recent months, executives of U.K.-headquartered Vodafone, which owns 45% of Vodafone Idea, have said that the group’s telecom business in India would “shut shop” if the government does not offer it any relief. Vodafone Idea, which is already saddled by $14 billion in net debt, owes about $4 billion in levies to the Indian government.

Vodafone Idea Chairman Kumar Mangalam Birla said in December that the firm is headed toward insolvency in the absence of relief from the government. “It doesn’t make sense to put good money after bad,” he said then.

The last few years have been difficult for telecom operators in India, many of which entered the nation to secure a slice of the world’s second most populous market. But since 2016, they have lost tens of millions of subscribers after Ambani launched Reliance Jio and offered free data and voice calls for an extended period, forcing every other company to slash its tariffs.

Sidharth Luthra, a senior advocate at Supreme Court, said in a televised interview that the court is within its rights to reach such a decision, but said that perhaps they should have considered the economic consequences of the ruling that would impact jobs, and could disrupt the everyday lives of people who rely on a network’s services.

Vodafone Idea is the top trending topic on Twitter as of early Saturday (local time), as numerous people expressed concerns about the future prospects of the telecom network and worried whether the service would remain operational for them.

Class action suit against Clearview AI cites Illinois law that cost Facebook $550M

By Devin Coldewey

Just two weeks ago Facebook settled a lawsuit alleging violations of privacy laws in Illinois (for the considerable sum of $550 million). Now controversial startup Clearview AI, which has gleefully admitted to scraping and analyzing the data of millions, is the target of a new lawsuit citing similar violations.

Clearview made waves earlier this year with a business model seemingly predicated on wholesale abuse of public-facing data on Twitter, Facebook, Instagram and so on. If your face is visible to a web scraper or public API, Clearview either has it or wants it and will be submitting it for analysis by facial recognition systems.

Just one problem: That’s illegal in Illinois, and you ignore this to your peril, as Facebook found.

The lawsuit, filed yesterday on behalf of several Illinois citizens and first reported by Buzzfeed News, alleges that Clearview “actively collected, stored and used Plaintiffs’ biometrics — and the biometrics of most of the residents of Illinois — without providing notice, obtaining informed written consent or publishing data retention policies.”

Not only that, but this biometric data has been licensed to many law enforcement agencies, including within Illinois itself.

All this is allegedly in violation of the Biometric Information Privacy Act, a 2008 law that has proven to be remarkably long-sighted and resistant to attempts by industry (including, apparently, by Facebook while it fought its own court battle) to water it down.

The lawsuit (filed in New York, where Clearview is based) is at its very earliest stages and has only been assigned a judge, with summonses sent to Clearview and CDW Government, the intermediary that sells its services to law enforcement. It’s impossible to say how it will play out at this point, but the success of the Facebook suit and the similarity of the two cases (essentially the automatic and undisclosed ingestion of photos by a facial recognition engine) suggest that this one has legs.

The scale is difficult to predict, and likely would depend largely on disclosure by Clearview as to the number and nature of its analysis of photos of those protected by BIPA.

Even if Clearview were to immediately delete all the information it has on citizens of Illinois, it would still likely be liable for its previous acts. A federal judge in Facebook’s case wrote: “the development of face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests,” and is therefore actionable. That’s a strong precedent and the similarities are undeniable — not that they won’t be denied.

You can read the text of the complaint here.

Bloomberg memes push Instagram to require sponsorship disclosure

By Josh Constine

Instagram is changing its advertising rules to require that political campaigns’ sponsored posts from influencers use its Branded Content Ads tool, which adds a “Paid Partnership With” disclosure label. The change comes after the Bloomberg presidential campaign paid meme makers to post screenshots that showed the candidate asking them to make him look cool.

Instagram provided this statement to TechCrunch:

“Branded content is different from advertising, but in either case we believe it’s important people know when they’re seeing paid content on our platforms. That’s why we have an Ad Library where anyone can see who paid for an ad and why we require creators to disclose any paid partnerships through our branded content tools. After hearing from multiple campaigns, we agree that there’s a place for branded content in political discussion on our platforms. We’re allowing US-based political candidates to work with creators to run this content, provided the political candidates are authorized and the creators disclose any paid partnerships through our branded content tools.”

Instagram explains to TechCrunch that branded content is different from advertising because Facebook doesn’t receive any payment and it can’t be targeted. If marketers or political campaigns pay to boost the reach of sponsored content, it’s then subject to Instagram’s ad policies and goes in its Ad Library for seven years.

But previously, Instagram banned political operations from running branded content because the policies that applied to it covered all monetization mediums on Instagram, including ad breaks and subscriptions that political entities are blocked from using. Facebook didn’t want to be seen as giving monetary contributions to campaigns, especially as the company tries to appear politically neutral.

Yet now Instagram is changing the rule and not just allowing but requiring political campaigns to use the Branded Content Ads tool when paying influencers to post sponsored content. That’s because Instagram and Facebook don’t get paid for these sponsorships. It’s now asking all sponsorships, including the Bloomberg memes retroactively, to be disclosed with a label using this tool. That would add a “Paid Partnership with Bloomberg 2020” warning to posts and Stories that the campaign paid meme pages and other influencers to post. This rule change is starting in the US today.

Instagram was moved to make the change after Bloomberg DM memes flooded the site. The New York Times’ Taylor Lorenz reported that the Bloomberg campaign worked with Meme 2020, an organization led by Mick Purzycki, head of Jerry Media (the company behind the “FuckJerry” account), to recruit and pay the influencers. Their posts made it look like Bloomberg himself had Direct Messaged the creators asking them to post stuff that would make him relevant to a younger audience.

Part of the campaign’s initial success came because users weren’t fully sure if the influencers’ posts were jokes or ads, even if they were disclosed with #ad or “yes this is really sponsored by @MikeBloomberg”. There’s already been a swift souring of public perception on the meme campaign, with some users calling it cringey and posting memes of Bernie Sanders, whose anti-corporate stance pits him opposite Bloomberg.

The change comes just two days after the FTC voted to review influencer marketing guidelines and decide if advertisers and platforms might be liable for penalties for failing to mandate disclosure.

At least the Democratic field of candidates is finally waking up to the power of memes to reach a demographic largely removed from cable television and rally speeches. The Trump campaign has used digital media to great effect, exploiting a lack of rules against misinformation in Facebook ads to make inaccurate claims and raise money. With all the baked-in media exposure Trump already gets from being president, the Democratic challengers need all the impressions they can get.

Surprise! Audit finds automated license plate reader programs are a privacy nightmare

By Devin Coldewey

Automated license plate readers, ALPRs, would be controversial even if they were responsibly employed by the governments that run them. Unfortunately, and to no one’s surprise, the way they actually operate is “deeply disturbing and confirm[s] our worst fears about the misuse of this data,” according to an audit of the programs instigated by a Californian legislator.

“What we’ve learned today is that many law enforcement agencies are violating state law, are retaining personal data for lengthy periods of time, and are disseminating this personal data broadly. This state of affairs is totally unacceptable,” said California State Senator Scott Wiener (D-SF), who called for the audit of these programs. The four agencies audited were the LAPD, Fresno PD, and the Marin and Sacramento County Sheriff’s Departments.

The inquiry revealed that the programs can barely justify their existence and do not seem to have, let alone follow, best practices for security and privacy:

  • Los Angeles alone stores 320 million license plate images, 99.9 percent of which were not being sought by law enforcement at the time of collection.
  • Those images were shared with “hundreds” of other agencies but there was no record of how this was justified legally or accomplished properly.
  • None of the agencies has a privacy policy in line with requirements established in 2016. Three could not adequately explain access and oversight permissions, or how and when data would or could be destroyed, “and the remaining agency has not developed a policy at all.”
  • There were almost no policies or protections regarding account creation and use, and the agencies have never audited their own systems.
  • Three of the agencies store their images and data with a cloud vendor, the contract for which had inadequate, if any, protections for that data.

In other words, “there is significant cause for alarm,” the press release stated. As the programs appear to violate state law, they may be prosecuted, and as existing law appears inadequate to the task of regulating them, new laws must be proposed, Wiener said, and he is working on it.

The full report can be read here.

Judge temporarily halts work on JEDI contract until court can hear AWS protest

By Ron Miller

A sealed order from a judge today has halted the $10 billion, decade-long JEDI project in its tracks until AWS’s protest of the contract award to Microsoft can be heard by the court.

The order, signed by Judge Patricia E. Campbell-Smith of the US Court of Federal Claims, stated:

The United States, by and through the Department of Defense, its officers, agents, and employees, is hereby PRELIMINARILY ENJOINED from proceeding with contract activities under Contract No. HQ0034-20-D-0001, which was awarded under Solicitation No. HQ0034-18-R-0077, until further order of the court.

The judge was not taking this lightly, adding that Amazon would have to put up a $42 million bond to cover costs should it prove that the motion was filed wrongfully. Given that Amazon’s value as of today is $1.08 trillion, it can probably afford to put up the money, but it must provide it by February 20th, and the court gets to hold the funds until a final determination has been made.

At the end of last month, Amazon filed a motion to stop work on the project until the court could rule on its protest. It is worth noting that in protests of this sort, it is not unusual to stop work until a final decision on the award can be made.

This is all part of an ongoing drama that has gone on for a couple of years since the DoD put this out to bid. After much wrangling, the DoD awarded the contract to Microsoft at the end of October. Amazon filed suit in November, claiming that the president had unduly influenced the process.

As we reported in December, at a press conference at AWS re:Invent, the cloud arm’s annual customer conference, AWS CEO Andy Jassy made clear the company thought the president had unfairly influenced the procurement process.

“I would say is that it’s fairly obvious that we feel pretty strongly that it was not adjudicated fairly,” he said. He added, “I think that we ended up with a situation where there was political interference. When you have a sitting president, who has shared openly his disdain for a company, and the leader of that company, it makes it really difficult for government agencies, including the DoD, to make objective decisions without fear of reprisal.”

Earlier this week, the company filed paperwork to depose the president and Secretary of Defense, Mark Esper.

The entire statement from the court today halting the JEDI project:

**SEALED**OPINION AND ORDER granting [130] Motion for Preliminary Injunction, filed by plaintiff. The United States, by and through the Department of Defense, its officers, agents, and employees, is hereby PRELIMINARILY ENJOINED from proceeding with contract activities under Contract No. HQ0034-20-D-0001, which was awarded under Solicitation No. HQ0034-18-R-0077, until further order of the court.

Pursuant to RCFC 65(c), plaintiff is directed to PROVIDE security in the amount of $42 million for the payment of such costs and damages as may be incurred or suffered in the event that future proceedings prove that this injunction was issued wrongfully.

As such, on or before 2/20/2020, plaintiff is directed to FILE a notice of filing on the docket in this matter indicating the form of security obtained, and plaintiff shall PROVIDE the original certification of security to the clerk of court. The clerk shall HOLD the security until this case is closed.

On or before 2/27/2020, the parties are directed to CONFER and FILE a notice of filing attaching a proposed redacted version of this opinion, with any competition-sensitive or otherwise protectable information blacked out. Signed by Judge Patricia E. Campbell-Smith.

Trump administration aims to protect GPS with new exec order

By Danny Crichton

GPS increasingly runs the entire planet. Supply chains, oceanic shipping, port docking and even our daily movements in cars, on bikes and walking around cities are dependent on a constellation of satellites hovering above us to make all this activity work in synchronicity.

Increasingly though, GPS is under attack. GPS spoofing, where the signals from GPS satellites are spoofed to send false data, can prevent devices from getting an accurate location, or any location at all. One of our TechCrunch contributors, Mark Harris, wrote a great piece in the MIT Technology Review about a recent spate of spoofing incidents in Shanghai, where shipping vessels would suddenly jump around the harbor as different signals got picked up.

In addition to more direct attacks on GPS, the monopoly of the U.S. GPS system is also under increasing strain. China has launched its own satellite system known as Beidou, and other countries like Russia, Japan and India, as well as the European Union, are increasingly attempting to augment America’s system with their own technology.

GPS is one technology in a field known as Positioning, Navigation, and Timing (PNT) services. GPS is perhaps best known for its ability to pinpoint a device on a map, but it is also crucial in synchronizing clocks, particularly in extremely sensitive operations where milliseconds are crucial.

The increasing economic importance of the technology, along with the increasing risk it faces from bad actors, has forced the Trump administration to act. In a new executive order signed yesterday, the administration created a framework for the Department of Commerce to take the lead in identifying threats to America’s existing PNT system, and also ensured that procurement processes across the government take those threats into account.

This process comes in the form of “PNT profiles,” which the executive order described:

The PNT profiles will enable the public and private sectors to identify systems, networks, and assets dependent on PNT services; identify appropriate PNT services; detect the disruption and manipulation of PNT services; and manage the associated risks to the systems, networks, and assets dependent on PNT services. Once made available, the PNT profiles shall be reviewed every 2 years and, as necessary, updated.

In other words, these profiles are designed to ensure that systems work in concert with each other and are authenticated, so that systems don’t have (obvious) security holes in their design.

That’s a good first step, but unlikely to move the needle in protecting this infrastructure. Booz Allen Hamilton Vice President Kevin Coggins, who runs the firm’s GPS resilience practice, explained to me last year that “In a system where you just blindly integrate these things and you don’t have an architecture that takes security into account … then you are just increasing your threat surface.” PNT profiles could cut down on that surface area for threats.

In a new statement regarding Trump’s executive order, Coggins said:

As a next step, the federal government should consider cross-industry standards that call for system diversity, spectral diversity, and zero-trust architectures.

System diversity addresses the dependence on a single system, such as GPS – some PNT alternatives have a dependence on GPS, therefore will fail should GPS become disrupted.

Spectral diversity involves using additional frequencies to carry PNT information – such as in systems using eLORAN or multi-GNSS – rather than just having a single frequency that is easy to target.

Finally, zero-trust architectures would enable PNT receivers to validate navigation and timing signals prior to using them – rather than blindly trusting what they are told.

This area of security has also gotten more venture and startup attention. Expect more action from all parties as these emerging threats to the economy are fully taken into account.

The US is charging Huawei with racketeering

By Danny Crichton

Ratcheting up its pressure campaign against Huawei and its affiliates, the Department of Justice and the FBI announced today that they have brought 16 charges against Huawei in a sprawling case with major geopolitical implications (you can read the full 56-page indictment here).

Huawei is being charged with conspiracy to violate the Racketeer Influenced and Corrupt Organizations Act (RICO) statute. The DoJ alleges that Huawei and a number of its affiliates used confidential agreements with American companies over the past two decades to access the trade secrets of those companies, only to then misappropriate that intellectual property and use it to fund Huawei’s business.

An example of this activity is provided in the indictment. Huawei is alleged to have stolen the source code for routers made by a firm described as “Company 1,” and then used it in its own products. Given the context, it is highly likely that Company 1 is Cisco, which the indictment summarizes as “a U.S. technology company headquartered in the Northern District of California.”

Huawei is also alleged to have engaged in more simple forms of industrial espionage. While at a trade show in Chicago, a Huawei-affiliated engineer “… was discovered in the middle of the night after the show had closed for the day in the booth of a technology company … removing the cover from a networking device and taking photographs of the circuitry inside. Individual-3 wore a badge listing his employer as ‘Weihua,’ HUAWEI spelled with its syllables reversed.” Huawei said that the individual in question did so in a personal capacity.

In one case, a technology company looking for a partnership with Huawei sent over a presentation deck with confidential information about its business in order to generate commercial interest with Huawei. From the indictment:

“Immediately upon receipt of the slide deck, each page of which was marked ‘Proprietary and Confidential’ by Company 6, HUAWEI distributed the slide deck to HUAWEI engineers, including engineers in the subsidiary that was working on technology that directly competed with Company 6’s products and services. These engineers discussed developments by Company 6 that would have application to HUAWEI’s own prototypes then under design.”

Together, the indictment lists multiple examples of Huawei’s alleged conspiracy to pilfer U.S. intellectual property.

According to the statement published by the Department of Justice, “As part of the scheme, Huawei allegedly launched a policy instituting a bonus program to reward employees who obtained confidential information from competitors. The policy made clear that employees who provided valuable information were to be financially rewarded.” Per the indictment:

A “competition management group” was tasked with reviewing the submissions and awarding monthly bonuses to the employees who provided the most valuable stolen information.

In addition to conspiracy, Huawei and the defendants are charged with lying to federal investigators and obstructing the investigation into the company’s activity. Per the indictment:

For example, an official HUAWEI manual labeled “Top Secret” instructed certain individuals working for HUAWEI to conceal their employment with HUAWEI during encounters with foreign law enforcement officials.

Furthermore, Huawei has been charged in connection with its activities in countries like Iran and North Korea. The DoJ’s statement alleges that Huawei used code words and carefully selected local partners to conceal its activities in these states in order to avoid international sanctions that are placed on the two countries. It also alleges that the company and its representatives lied to Congressional investigators when asked about the company’s financial activities in the two countries.

Among the defendants is Meng Wanzhou, the CFO of Huawei who has been under house arrest in Canada while facing charges of fraud.

The Trump administration has made targeting Huawei a major priority, attempting to block its access to Western markets. The administration’s efforts have mostly been fruitless thus far, with both the United Kingdom and Germany in recent weeks allowing the company’s technology products into their telecommunications networks. We have more coverage of these initiatives in an article TechCrunch published this morning.

The full list of defendants includes Huawei Technologies Co., Ltd., Huawei Device Co., Ltd., Huawei Device USA Inc., Futurewei Technologies, Inc., Skycom Tech Co., Ltd., and Wanzhou Meng.

Huawei representatives didn’t immediately respond to a request for comment.

Updated 1:45pm EST to include additional details from the indictment.

Catching up on China’s tech influence operations in America

By Danny Crichton

It’s been a dizzying few weeks following all the China news emanating from Washington DC these days. While a “phase one” trade deal with China has been signed and appears to be moving forward as of a month ago (we covered the origins of this trade war extensively on TechCrunch in 2018 and 2019), it has also become clear that the Trump administration and its various agencies are aggressively targeting China on a variety of fronts.

Here’s what’s been happening with startup funding, Huawei, university research labs, and cybersecurity breaches.

More challenges for startups fundraising Chinese dollars

As of a few weeks ago, the Trump administration completed the final rulemaking around its modernization of foreign investment rules. Those rules went into force today, and will help to define which foreign investors startups can take money from. They will now be used by CFIUS — the Committee on Foreign Investment in the United States — which has the authority to rule over major venture transactions.

Martin Chorzempa of the Peterson Institute for International Economics wrote an extensive overview of what’s changing here. The closest summary is that Silicon Valley startups that take significant money from overseas investors (significant here is generally about percentage ownership of a company rather than total dollars) will increasingly need to go through national security reviews in DC, which can vastly delay the closing of venture rounds.

While China is certainly in the crosshairs of these new rules, other investors have been hit by them as well. SoftBank’s Vision Fund, which had a very bad quarter this week, is also a target under these new rules, complicating that fund’s future investments in America.

Some firms though are preparing for the long haul. Sequoia hired a major CFIUS veteran to be its general counsel last year, and from what I hear, other venture firms are providing more advice to founders to actively avoid international investors that might trigger these sorts of national security reviews in the first place.

All this of course is in the context of a collapse in Chinese venture capital, which was already in dire straits even before the coronavirus situation of the past few weeks put a massive brake on the Chinese economy. The flow of Chinese VC dollars into the Valley hasn’t stopped, but it is a trickle compared with the sloshing free-trade days of just a few years ago.

Huawei is coming to the West, despite the wishes of the Trump administration

The Trump administration has made it a high priority to shut Huawei out of Western telecom systems. It first tried to do that by essentially shutting the company down along with China’s ZTE by banning the two companies from receiving U.S. export licenses to American technology critical to their products. That set of moves ultimately created blowback for the administration a few years ago and galvanized Xi Jinping and the Chinese government to create more indigenous devices.

The Trump administration is continuing to lose its war against Huawei though. In recent weeks, both the United Kingdom and Germany have indicated that they will accept Huawei equipment within their next-generation telecom networks, despite immense pressure from U.S. defense and intelligence officials pushing against that decision.

Part of the challenge for the Trump administration is that it isn’t even pushing forward with one voice. The Defense Department has actually supported Huawei’s position, arguing that fighting Huawei will ultimately undermine American chip market leaders like Intel, who need Huawei as a customer of their chips to continue funding their R&D efforts.

Meanwhile, Huawei late last week sued TechCrunch parent parent parent parent parent company Verizon (okay, maybe it’s only like three levels of corporate bureaucracy between us and them — I’ve honestly lost track in the reshuffles) over patent infringement. As the 5G race continually bubbles (it’s not really heating up despite attempts by telecoms to say otherwise), expect more of these patent fights.

Fighting Chinese influence in American university research labs

Most notably here, prosecutors at the Department of Justice charged Charles Lieber, chair of Harvard University’s department of chemistry, with failing to disclose payments he received from China totaling millions of dollars. Such disclosures are required since Lieber accepted federal research dollars through programs run by the National Institutes of Health and Department of Defense.

The payments described in the department’s complaint included a monthly honorarium of $50,000, hundreds of thousands of dollars for annual living expenses, and millions of dollars to build out a research lab at Wuhan University of Technology as a “Strategic Scientist.” Two other scientists were named in the complaint as well.

That’s not all though. We learned this morning that the Department of Education has launched new investigations into Harvard and Yale to look at billions of dollars of overseas funding for those universities over the past few years, attempting to triangulate exactly who gave money to those institutions and why. The Wall Street Journal reported that the prime targets of funding come from China and Saudi Arabia.

Finally, Aruna Viswanatha and Kate O’Keeffe of the Wall Street Journal compiled a number of university-level investigations, finding that dozens more scientists and other academics have failed to disclose overseas ties and funding, mostly from China.

These investigations have become a higher priority as the U.S. government increasingly feels that China has built an apparatus for stealing U.S. technology, particularly at the frontiers of science.

Justice indicts four Chinese nationals over Equifax breach

Finally, the other major story in the China influence operations beat is that the Department of Justice indicted four Chinese nationals over the 2017 Equifax breach that led to the loss of data for more than 150 million Americans.

According to the department’s complaint, four Chinese military hackers associated with China’s People’s Liberation Army broke into Equifax’s systems using an unpatched security vulnerability in Apache Struts.

The department’s indictment serves two purposes, even though the four alleged individuals in the indictment are highly unlikely to ever be prosecuted (China and the U.S. do not have an extradition treaty, nor is China likely to hand over the individuals to the U.S. justice system).

First, the indictments serve notice to China that the U.S. is watching its actions, and is able to determine with a high degree of precision who is breaking into these vulnerable technology systems and what they are taking. That’s important, as there are serious concerns in the defense community about identifying actors in cyberwar.

Second, the charges also help to connect the Equifax case to a similar breach at the government’s Office of Personnel Management, in which data on millions of government workers — including defense and intelligence personnel — was believed to be leaked to Chinese state-backed hackers.

Fighting Chinese influence has become a major project of DC officials, and therefore we can expect to see even more news on this front throughout the year, particularly with an election coming up in November.

A new Senate bill would create a US data protection agency

By Zack Whittaker

Europe’s data protection laws are some of the strictest in the world, and have long been a thorn in the side of the data-guzzling Silicon Valley tech giants since they colonized vast swathes of the internet.

Two decades later, one Democratic senator wants to bring many of those concepts to the United States.

Sen. Kirsten Gillibrand (D-NY) has published a bill which, if passed, would create a U.S. federal data protection agency designed to protect the privacy of Americans, with the authority to enforce data practices across the country. The bill, which Gillibrand calls the Data Protection Act, would address a “growing data privacy crisis” in the U.S., the senator said.

The U.S. is one of only a few countries without a data protection law (along with Venezuela, Libya, Sudan and Syria). Gillibrand said the U.S. is “vastly behind” other countries on data protection.

Gillibrand said a new data protection agency would “create and meaningfully enforce” data protection and privacy rights federally.

“The data privacy space remains a complete and total Wild West, and that is a huge problem,” the senator said.

The bill comes at a time when tech companies are facing increased attention by state and federal regulators over data and privacy practices. Last year, Facebook settled a $5 billion privacy case with the Federal Trade Commission, which critics decried for failing to bring civil charges or levy any meaningful consequences. Months later, Google settled a child privacy case that cost it $170 million — about a day’s worth of the search giant’s revenue.

In a Medium post, Gillibrand pointedly called out Google and Facebook for “making a whole lot of money” from their empires of data. Americans “deserve to be in control of [their] own data,” she wrote.

At its heart, the bill would, if signed into law, allow the newly created agency to hear and adjudicate complaints from consumers and declare certain privacy-invading tactics unfair and deceptive. Acting as the government’s “referee,” the agency would take point on federal data protection and privacy matters, such as launching investigations against companies accused of wrongdoing. Gillibrand’s bill specifically takes issue with “take-it-or-leave-it” provisions, notably websites that compel a user to “agree” to allow cookies with no way to opt out. (TechCrunch’s parent company Verizon Media enforces a “consent required” policy for European users under GDPR, though most Americans never see the prompt.)

Through its enforcement arm, the would-be federal agency would also have the power to bring civil action against companies and, subject to a court’s approval, fine them up to $1 million a day for egregious breaches of the law. The bill would also transfer some authorities from the Federal Trade Commission to the new data protection agency.

Gillibrand’s bill lands just a month after California’s consumer privacy law took effect, more than a year after it was signed into law. The law extended much of Europe’s revised privacy laws, known as GDPR, to the state. But Gillibrand’s bill would not affect state laws like California’s, her office confirmed in an email.

Privacy groups and experts have already offered positive reviews.

Caitriona Fitzgerald, policy director at the Electronic Privacy Information Center, said the bill is a “bold, ambitious proposal.” Other groups, including Color of Change and Consumer Action, praised the effort to establish a federal data protection watchdog.

Michelle Richardson, director of the Privacy and Data Project at the Center for Democracy and Technology, reviewed a summary of the bill.

“The summary seems to leave a lot of discretion to executive branch regulators,” said Richardson. “Many of these policy decisions should be made by Congress and written clearly into statute.” She warned it could take years to know if the new regime has any meaningful impact on corporate behaviors.

Gillibrand is the only sponsor on the bill. But given the appetite of some lawmakers on both sides of the aisle to crash the Silicon Valley data party, it’s likely to pick up bipartisan support in no time.

Whether it makes it to the president’s desk without a fight from the tech giants remains to be seen.

Financing for social impact and climate businesses gets a billion dollar boost with new KKR fund

By Jonathan Shieber

KKR, the multi-billion-dollar, multi-strategy investment firm, has closed on over $1.3 billion for a fund backing companies focused on social and environmental challenges.

KKR Global Impact says its fund will focus on identifying and investing in companies worldwide where performance and social impact are intrinsically aligned. Specifically, the fund will invest in lower middle market companies that contribute toward progress on the United Nations Sustainable Development Goals.

“The UN SDGS were developed to mobilize citizens, policymakers, technologists and investors to address global challenges. As investors, we have a significant role to play in building businesses that contribute to SDG solutions while also generating financial returns for our fund investors by doing so,” said Robert Antablin and Ken Mehlman, KKR Partners and Co-Heads of KKR Global Impact, in a statement. 

It’s a nice chunk of change that could potentially fund companies in the re-emerging climate and sustainability space, but it’s dwarfed by the $13.9 billion that KKR raised in 2017 for its Americas fund, or the $7 billion that the firm has to invest in infrastructure from its latest investment vehicle.

Mehlman’s role in promoting environmental and sustainable development stewardship belies his role as a senior administration official during George W. Bush’s tenure in the White House. He was appointed director of the Bush Administration’s Office of Political Affairs in 2000 and served in several administrative capacities both for the Republican Party within and outside of the White House.

Environmentalists have a pretty bleak assessment of the Bush years in office.

“[President Bush] has undone decades if not a century of progress on the environment,” Josh Dorner, a spokesman for the Sierra Club, one of America’s largest environmental groups, told the Guardian in a 2008 interview about the Bush administration’s environmental record.

“The Bush administration has introduced this pervasive rot into the federal government which has undermined the rule of law, undermined science, undermined basic competence and rendered government agencies unable to do their most basic function even if they wanted to.”

Twenty years later, Mehlman is working in the private sector to finance companies helping the world mitigate and adapt to a climate crisis that was exacerbated by the inaction of the administration he helped shepherd into office.

Other investment areas the KKR fund will focus on include responsible waste management; using technology to enhance safety, mobility and sustainability; creating more sustainable products and services; and upgrading declining industry and infrastructure.

KKR launched its global impact business two years ago, and its 12-person team has invested in Barghest Building Performance, Ramky Enviro Engineers, KnowBe4, Burning Glass and the construction of a wastewater treatment plant.

In addition to the external commitments KKR received, the firm said it will invest $130 million of capital in the fund through its own balance sheet.

“We are thrilled to see our investors’ shared enthusiasm for the tremendous opportunity we see ahead for KKR Global Impact and will build on this to help set the new standard across investing, value creation and measuring success in the space,” said Alisa Amarosa Wood, KKR Partner and Head of KKR’s Private Market Products Group. 

KKR did not respond to a request for comment about Mehlman’s previous work in the Bush Administration.

FTC votes to review influencer marketing rules & penalties

By Josh Constine

Undisclosed influencer marketing posts on social media should trigger financial penalties, according to a statement released today by the Federal Trade Commission’s Rohit Chopra. The FTC has voted 5-0 to approve a Federal Register notice calling for public comment on whether the Endorsement Guides for advertising need to be updated.

“When companies launder advertising by paying an influencer to pretend that their endorsement or review is untainted by a financial relationship, this is illegal payola,” Chopra writes. “The FTC will need to determine whether to create new requirements for social media platforms and advertisers and whether to activate civil penalty liability.”

Currently the non-binding Endorsement Guides stipulate that “when there is a connection between an endorser and a seller of an advertised product that could affect the weight or credibility of the endorsement, the connection must be clearly and conspicuously disclosed.” In the case of social media, that means creators need to note their post is part of an “ad,” “sponsored” content or “paid partnership.”

But Chopra wants the FTC to consider making those rules official by “Codifying elements of the existing endorsement guides into formal rules so that violators can be liable for civil penalties under Section 5(m)(1)(A) and liable for damages under Section 19.” He cites weak enforcement to date, noting that in the case of department store Lord & Taylor not insisting 50 paid influencers specify their posts were sponsored, “the Commission settled the matter for no customer refunds, no forfeiture of ill-gotten gains, no notice to consumers, no deletion of wrongfully obtained personal data, and no findings or admission of liability.”

Strangely, Chopra fixates on Instagram’s Branded Content Ads that let marketers pay to turn posts by influencers tagging brands into ads. However, these ads include a clear “Sponsored. Paid partnership with [brand]” and seem to meet all necessary disclosure requirements. He also mentions concerns about sponcon on YouTube and TikTok.

Additional targets of the FTC’s review will be the use of fake or incentivized reviews. It’s seeking public comment on whether free or discounted products influence reviews and should require disclosure, how to handle affiliate links and whether advertisers or review sites should post warnings about incentivized reviews. It also wants to know how influencer marketing affects and is understood by children.

Chopra wisely suggests the FTC focus on the platforms and advertisers that are earning tons of money from potentially undisclosed influencer marketing, rather than the smaller influencers themselves who might not be as well versed in the law and are just trying to hustle. “When individual influencers are able to post about their interests to earn extra money on the side, this is not a cause for major concern,” he writes, but “when we do not hold lawbreaking companies accountable, this harms every honest business looking to compete fairly.”

While many of the social media platforms have moved to self-police with rules about revealing paid partnerships, there remain gray areas around incentives like free clothes or discounted rates. Codifying what constitutes incentivized endorsement, formally requiring social media platforms to implement policies and features for disclosure, and making influencer marketing contracts state that participation must be disclosed would all be sensible updates.
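A platform-side disclosure feature of the sort Chopra envisions could start with fairly simple plumbing. The sketch below is a hypothetical illustration of such a check only; the keyword list, post format and function name are assumptions made for this example, not anything the FTC or any platform has specified.

```python
# Hypothetical sketch: flag posts that mention a tagged brand but carry no
# disclosure marker. Keyword list and post structure are illustrative assumptions;
# real posts would need smarter matching than naive substring checks.

DISCLOSURE_MARKERS = {"#ad", "#sponsored", "paid partnership", "sponsored"}

def needs_disclosure_review(text: str, tagged_brands: list[str]) -> bool:
    lowered = text.lower()
    has_brand = any(brand.lower() in lowered for brand in tagged_brands)
    has_marker = any(marker in lowered for marker in DISCLOSURE_MARKERS)
    return has_brand and not has_marker

posts = [
    ("Loving my new jacket from ExampleBrand! #ad", ["ExampleBrand"]),
    ("Loving my new jacket from ExampleBrand!", ["ExampleBrand"]),
]
for text, brands in posts:
    print(needs_disclosure_review(text, brands), "-", text)
```

Even a naive check like this would surface the second post for review, which is the kind of feature the statement suggests platforms could be required to build.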

Society has enough trouble with misinformation on the internet, from trolls to election meddlers. People should at least be able to trust that if someone says they love their new jacket, they didn’t secretly get paid to say so.

White House requests $15 billion to establish Space Force

By Devin Coldewey

The Space Force will take one giant leap toward reality if the Department of Defense’s proposed budget and operations go through: $15 billion is requested, which would fund a number of missions and help establish the more than 10,000 personnel expected to join the new military branch over the next year.

Estimates for how much it would cost to really establish the Space Force have varied widely, due in some part to the original haziness of the vision, but also because even had that vision been crystal clear, the timing and method of accomplishing it would be the subject of major debate.

In the end the Pentagon has decided in true military style to strike fast and hard at this, which for all they know may be the last opportunity to do so with this administration. Who knows what the new year may bring?

Its request, detailed as part of an overall budget proposal of $705 billion, would be for $15B in FY2021. These funds would be used to “consolidate the preponderance of space missions, units, resources, and personnel from the existing Military Services into the new U.S. Space Force,” with a goal of doing so completely by 2024.

This isn’t exactly new funding as much as shifted over from elsewhere within the Air Force, under which the new command will exist, and which currently has authority over the most of the armed forces’ space-related missions, assets and personnel. For reference, $15B is about 60 percent of the size of the proposed NASA budget (which to be clear is not coming out of the DoD’s pocket). Notably the National Reconnaissance Office, which essentially presides over space-based spying, will not transfer over.

Under the proposed budget and transfers, the Space Force will grow from a ragtag group of 122 civilians and 38 military personnel to only a few shy of 10,000 — about 35 percent civilian and the remainder military. These are mostly going to be from the Air Force’s Space Command, which is essentially being eaten by Space Force.

Three projects are given line items in the budget: $1.6B for three launches for national security purposes (unlikely to be detailed further, given their nature), $1.8B for two launches of GPS satellites and related systems, and $2.5B to continue development of the Next-Generation Overhead Persistent Infrared project for missile detection.

As with other budgets we’ve covered in the last week, this one is a proposal, not an allocation; Congress will be the ones to make the final decision on amounts, though it seems unlikely that the Space Force will be derailed at this stage. Since it is largely taking custody of existing programs and service members, with new HQs and projects still years off, it seems relatively safe from cuts.

A US House candidate says she was hacked — now she’s warning others

By Zack Whittaker

“I cannot think of a reason not to share this with the public,” Brianna Wu tweeted.

“Two of my non-campaign Google accounts were compromised by someone in Russia,” she said.

Wu isn’t just any other target. As a Democratic candidate for the U.S. House of Representatives in Massachusetts’ 8th District, she has a larger target on her back for hackers than the average constituent. And as a former software engineer, she knows all too well the cybersecurity risks that come along with running for political office.

But the breach of two of her non-campaign Google accounts was still a wake-up call.

Wu said she recently discovered that the two accounts had been breached. One of the accounts was connected to her Nest camera system at home, and the other was the Gmail account she used during the Gamergate controversy, during which Wu was a frequent target of vitriol and death threats. TechCrunch agreed to keep the details of the breach off the record so as not to give any potential attackers an advantage. Attribution in cyberattacks, however, can be notoriously difficult because hackers can mask their tracks using proxies and other anonymity tools.

“I don’t believe anyone in Russia is targeting me specifically. I think it’s more likely they target everyone running for office,” she tweeted.

Wu said that both of her accounts had “solid protection measures” in place, including “unique, randomly generated passwords for both accounts.” She said that she reported the intrusions to the FBI.

“The worry is obviously that it could hurt the campaign,” she told TechCrunch. But she remains concerned that it could be an “active measure,” a term often used to describe Russian-led political interference in U.S. politics.

Politicians and political candidates are frequently targeted by hackers both in the U.S. and overseas. During the 2016 presidential election, Democratic candidate Hillary Clinton’s campaign manager John Podesta had his personal email account hacked and thousands of emails published by WikiLeaks. The recently released report by Special Counsel Robert Mueller blamed hackers working for Russian intelligence for the intrusion as part of a wider effort to discredit then-candidate Clinton and get President Trump elected.

Yet to this day, political campaigns remain largely responsible for their own cybersecurity.

“There is only so much the feds can do here, given the sheer size of the candidate pool for federal office,” said Joseph Lorenzo Hall, an election security expert and senior vice president at the Internet Society.

Hall said much of the federal government’s effort has focused on raising awareness and on “low-hanging fruit,” like enabling two-factor authentication. Homeland Security continues to brief both parties on the major cybersecurity threats ahead of voting in November, and the FBI has online resources for political campaigns.

It’s only been in the past few months that tech companies have been allowed to step in to help.

Fearing a repeat of 2016, the Federal Election Commission last year relaxed the rules to allow political campaigns to receive discounted cybersecurity help. That has also allowed companies like Cloudflare to enter the political campaign space, offering cybersecurity services to campaigns — something that was previously considered a campaign finance violation.

It’s not a catch-all fix. A patchwork of laws and rules across the U.S. makes it difficult for campaigns to prioritize internal cybersecurity efforts. It’s illegal in Maryland, for example, to use campaign finances to secure the personal accounts of candidates and their staff — the same kind of accounts that hackers used to break into Podesta’s email in 2016. It’s an attack that remains in hackers’ arsenals. Just last year, Microsoft found Iranian-backed hackers were targeting personal email accounts “associated” with a 2020 presidential candidate — which later turned out to be President Trump’s campaign.

Both of the major U.S. political parties have made efforts to bolster cybersecurity at the campaign level. The Democrats recently updated their security checklist for campaigns and published recommendations for countering disinformation, and the Republicans have put on training sessions to better educate campaign officials.

But Wu said that the Democrats could do more to support campaign cybersecurity, and that she was speaking out to implore others who are running for Congress to do more to bolster their campaign’s cybersecurity.

“There is absolutely no culture of information security within the Democratic Party that I have seen,” said Wu. Fundraising lists are “freely swapped in unencrypted states,” she said, giving an example.

“There is generally not a culture of updating software or performing security audits,” she said. “The fact that this is not taken seriously is really underscored by Iowa and the Shadow debacle,” she said, referring to the Iowa caucus last week, in which a result-reporting app failed to work. It was later reported that the app, built by Shadow Inc., had several security flaws that made it vulnerable to hacking.

Spokespeople for the FBI and the Democratic Congressional Campaign Committee did not respond to a request for comment prior to publication.

“Infosec is expensive, and I know for many campaigns it may seem like a low priority,” Wu told TechCrunch.

“But how can we lead the country on cybersecurity issues if we don’t hold ourselves to the same standards we’re asking the American people to follow?” she said.

Trump administration slashes basic science research while boosting space, AI and quantum tech funding

By Jonathan Shieber

The new fiscal year 2021 budget proposal from the Trump administration would put funding for research and development at roughly $142 billion, an increase over the administration’s previous budget request, but it would still reduce overall spending on science and technology compared with alternative proposals coming from the U.S. House of Representatives.

Basic science funding would be hard hit under the Trump administration priorities.

A rundown of all of the programs that would be cut under the administration’s budget was published by Science Magazine and it includes:

  • National Institutes of Health: a cut of 7%, or $2.942 billion, to $36.965 billion
  • National Science Foundation (NSF): a cut of 6%, or $424 million, to $6.328 billion
  • Department of Energy’s (DOE’s) Office of Science: a cut of 17%, or $1.164 billion, to $5.760 billion
  • NASA science: a cut of 11%, or $758 million, to $6.261 billion
  • DOE’s Advanced Research Projects Agency-Energy: a cut of 173%, which would not only eliminate the $425 million agency, but also force it to return $311 million to the U.S. Department of the Treasury (the arithmetic behind a cut of more than 100% is sketched just after this list)
  • U.S. Department of Agriculture’s (USDA’s) Agricultural Research Service: a cut of 12%, or $190 million, to $1.435 billion
  • National Institute of Standards and Technology: a cut of 19%, or $154 million, to $653 million
  • National Oceanic and Atmospheric Administration: a cut of 31%, or $300 million, to $678 million
  • Environmental Protection Agency science and technology: a cut of 37%, or $174 million, to $318 million
  • Department of Homeland Security science and technology: a cut of 15%, or $65 million, to $357 million
  • U.S. Geological Survey: a cut of 30%, or $200 million, to $460 million
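For readers wondering how a cut can exceed 100%: under this reading of the figures above, the agency’s entire $425 million appropriation would be eliminated and a further $311 million it already holds would be clawed back, with the combined reduction expressed as a share of the $425 million base. A quick arithmetic check, assuming that reading:

```python
# Quick check of the ARPA-E figure above: cut = eliminated appropriation plus
# returned funds, expressed as a share of the current $425M base (assumed reading).
current_budget = 425   # $ millions, eliminated entirely
returned_funds = 311   # $ millions, sent back to the Treasury
cut_pct = (current_budget + returned_funds) / current_budget * 100
print(f"Effective cut: {cut_pct:.0f}%")  # prints roughly 173%
```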

However, certain areas where venture investors and startups spend a lot of time should see a funding boost. These include new money for research and development in industries developing new machine learning and quantum computing technologies.

Artificial intelligence allocations across the National Science Foundation, the Department of Energy’s Office of Science, the Defense Advanced Research Projects Agency and the Department of Defense’s Joint AI Center will reach a combined $1.724 billion — with portions of an additional $150 million allocation for the Department of Agriculture and the National Institutes of Health also going to AI research.

Quantum information science is another area set for a windfall of government dollars under the proposed Trump administration budget. The National Science Foundation will receive $210 million for quantum research, while the Department of Energy will receive a $237 million boost plus an additional $25 million carve-out to begin development of a nationwide quantum internet.

“Quantum computing, networking and sensing technologies are areas of incredible potential,” said Paul Dabbar, the under secretary for science at the Department of Energy.

As part of this development, Dabbar pointed to work underway at the University of Chicago, where partners including Argonne National Laboratory, Fermilab and the university have already launched a 52-mile quantum communication loop in Chicago.

There are plans underway to create six quantum internet nodes in the Midwest and another node on Long Island, near New York City, to create a northeastern quantum network hub.

“This will be the backbone of a national quantum internet extending coast to coast and border to border,” said Dabbar. “If we don’t, others will do it,” he said. “China and the EU have announced plans for investments in the area.”

Space is another area where spending will see a boost under the Trump budget.

A key part of the package is a 12 percent boost to the budget of the National Aeronautics and Space Administration, as the administration aims to get astronauts back on the surface of the moon by 2024. The new budget would add $3 billion in funding for NASA to develop things like human landers and other technologies meant to capitalize on the potential assets and strategic importance of space. In all, NASA will receive $25.2 billion, while the newly created Space Force will see an allocation of $15.4 billion in the new budget.

The budget will double research and development spending for quantum information science and non-defense artificial intelligence by the 2022 fiscal year, according to a statement from the administration.

Much of the administration’s budget seems focused on spending to catch up in areas where the U.S. may be losing its technological edge. China already spends tens of billions of dollars on research in both quantum computing and artificial intelligence.

While spending on quantum computing and artificial intelligence advances, the Trump Administration continues to slash budgets in other areas dependent on scientific study — where the discoveries of the scientific community and their implications contradict the political wishes of the President.

That includes the Environmental Protection Agency, which would see its total budget slashed by 26.5 percent over the next year. The Department of Health and Human Services would see its budget allocation shrink by 9 percent — although the administration actually plans to avoid cutting the budget for combating infectious diseases through the Centers for Disease Control and Prevention.

Few of these allocations will actually make it through the Congressional budgeting process, since the Democrats control the House of Representatives and the most draconian parts of the budget proposed by the administration couldn’t even pass a Congress controlled by Republicans.

Justice Dept. charges four Chinese military hackers over the Equifax data breach

By Zack Whittaker

U.S. prosecutors have charged four Chinese military hackers over the 2017 cyberattack at Equifax, which resulted in a data breach involving more than 147 million credit reports.

The nine-charge indictment was announced Monday against Wu Zhiyong, Wang Qian, Xu Ke, and Liu Lei. The Justice Department said the four work for the Chinese People’s Liberation Army. The hackers are said to be part of the APT10 group, a notorious Beijing-backed hacking group that was previously blamed for hacking into dozens of major U.S. companies and government systems, including HPE, IBM, and NASA’s Jet Propulsion Laboratory.

Attorney General William Barr said it was the latest in a long line of cyberattacks launched by China, which also included the targeting of health insurance giant Anthem, the Marriott Starwood hotel breach and the hack of the U.S. Office of Personnel Management.

“This is the largest theft of sensitive personal identifiable information by state-sponsored hackers ever recorded,” said FBI deputy director David Bowdich, at a presser in Washington DC.

“Today, we hold [the Chinese military] hackers accountable for their criminal actions, and we remind the Chinese government that we have the capability to remove the Internet’s cloak of anonymity and find the hackers that nation repeatedly deploys against us,” said Barr.

Four Chinese military hackers are accused of hacking into Equifax in 2017. (Image: Justice Dept./handout)

Equifax revealed the data breach in September 2017, months after it discovered hackers had broken into its systems.

An investigation showed the company failed for weeks to patch a web server it knew was vulnerable, which let hackers break into its systems and steal massive amounts of personal data. Names, addresses, Social Security numbers and more were taken, along with millions of driver’s license and credit card numbers. The data breach also affected British and Canadian nationals.

Equifax chief executive Richard Smith retired shortly after the breach, but didn’t escape criticism. Sen. Chuck Schumer called the breach and the credit giant’s handling of the aftermath “one of the most egregious examples of corporate malfeasance since Enron.”

Equifax later settled with the Federal Trade Commission to pay at least $575 million in fines.

Mark Begor, the credit giant’s current chief executive, said he was “grateful” for the FBI and Justice Department’s work to secure the indictments.

A spokesperson for the Chinese Consulate in New York did not respond to a request for comment.

UK public sector failing to be open about its use of AI, review finds

By Natasha Lomas

A report into the use of artificial intelligence by the U.K.’s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens’ lives.

Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer-funded healthcare — with health minister Matt Hancock setting out a tech-fueled vision of “preventative, predictive and personalised care” in 2018, calling for a root and branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of “healthtech” apps and services.

He has also personally championed a chatbot startup, Babylon Health, that’s using AI for healthcare triage — and which is now selling a service into the NHS.

Policing is another area where AI is being accelerated into U.K. public service delivery, with a number of police forces trialing facial recognition technology — and London’s Met Police switching over to a live deployment of the AI technology just last month.

However, the rush by cash-strapped public services to tap AI “efficiencies” risks glossing over a range of ethical concerns about the design and implementation of such automated systems. These run from fears about embedding bias and discrimination into service delivery and scaling harmful outcomes, to questions of consent around access to the data sets used to build AI models, to human agency over automated outcomes — all of which require transparency into AI systems if there is to be accountability over automated decisions.

The role of commercial companies in providing AI services to the public sector also raises additional ethical and legal questions.

Only last week, a court in the Netherlands highlighted the risks for governments of rushing to bake AI into legislation, after it ruled that an algorithmic risk-scoring system implemented by the Dutch government to assess the likelihood that social security claimants would commit benefits or tax fraud breached their human rights.

The court objected to a lack of transparency about how the system functions, as well as an associated lack of controllability — ordering an immediate halt to its use.

The U.K. parliamentary committee that reviews standards in public life has today sounded a similar warning — publishing a series of recommendations for public-sector use of AI and warning that the technology challenges three key principles of service delivery: openness, accountability and objectivity.

“Under the principle of openness, a current lack of information about government use of AI risks undermining transparency,” it writes in an executive summary.

“Under the principle of accountability, there are three risks: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI. Under the principle of objectivity, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice.”

“This review found that the government is failing on openness,” it goes on, asserting that: “Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”

In 2018, the UN’s special rapporteur on extreme poverty and human rights raised concerns about the U.K.’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale — warning then that the impact of a digital welfare state on vulnerable people would be “immense,” and calling for stronger laws and enforcement of a rights-based legal framework to ensure the use of technologies like AI for public service provision does not end up harming people.

Per the committee’s assessment, it is “too early to judge if public sector bodies are successfully upholding accountability.”

Parliamentarians also suggest that “fears over ‘black box’ AI… may be overstated” — and rather dub “explainable AI” a “realistic goal for the public sector.”

On objectivity, they write that data bias is “an issue of serious concern, and further work is needed on measuring and mitigating the impact of bias.”

The use of AI in the U.K. public sector remains limited at this stage, according to the committee’s review, with healthcare and policing currently having the most developed AI programmes — where the tech is being used to identify eye disease and predict reoffending rates, for example.

“Most examples the Committee saw of AI in the public sector were still under development or at a proof-of-concept stage,” the committee writes, further noting that the Judiciary, the Department for Transport and the Home Office are “examining how AI can increase efficiency in service delivery.”

It also heard evidence that local government is working on incorporating AI systems in areas such as education, welfare and social care. The committee noted the example of Hampshire County Council trialing the use of Amazon Echo smart speakers in the homes of adults receiving social care as a tool to bridge the gap between visits from professional carers, and pointed to a Guardian article which reported that one-third of U.K. councils use algorithmic systems to make welfare decisions.

But the committee suggests there are still “significant” obstacles to what they describe as “widespread and successful” adoption of AI systems by the U.K. public sector.

“Public policy experts frequently told this review that access to the right quantity of clean, good-quality data is limited, and that trial systems are not yet ready to be put into operation,” it writes. “It is our impression that many public bodies are still focusing on early-stage digitalisation of services, rather than more ambitious AI projects.”

The report also suggests that the lack of a clear standards framework means many organisations may not feel confident in deploying AI yet.

“While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users,” it suggests.

Among 15 recommendations set out in the report is a call for a clear legal basis to be articulated for the use of AI by the public sector. “All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery,” the committee writes.

Another recommendation is for clarity over which ethical principles and guidance applies to public sector use of AI — with the committee noting there are three sets of principles that could apply to the public sector, which is generating confusion.

“The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use,” it recommends.

It also wants the Equality and Human Rights Commission to develop guidance on data bias and anti-discrimination to ensure public sector bodies’ use of AI complies with the U.K. Equality Act 2010.

The committee is not recommending a new regulator should be created to oversee AI — but does call on existing oversight bodies to act swiftly to keep up with the pace of change being driven by automation.

It also advocates for a regulatory assurance body to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI — supporting the government’s intention for the Centre for Data Ethics and Innovation (CDEI), which was announced in 2017, to perform this role. (A recent report by the CDEI recommended tighter controls on how platform giants can use ad targeting and content personalisation.)

Another recommendation is around procurement, with the committee urging the government to use its purchasing power to set requirements that “ensure that private companies developing AI solutions for the public sector appropriately address public standards.”

“This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements,” it suggests.

Responding to the report in a statement, shadow digital minister Chi Onwurah MP accused the government of “driving blind, with no control over who is in the AI driving seat.”

“This serious report sadly confirms what we know to be the case — that the Conservative Government is failing on openness and transparency when it comes to the use of AI in the public sector,” she said. “The Government is driving blind, with no control over who is in the AI driving seat. The Government urgently needs to get a grip before the potential for unintended consequences gets out of control.

“Last year, I argued in parliament that Government should not accept further AI algorithms in decision making processes without introducing further regulation. I will continue to push the Government to go further in sharing information on how AI is currently being used at all level[s] of Government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It’s time for action.”

California’s new privacy law is off to a rocky start

By Zack Whittaker

California’s new privacy law was years in the making.

The law, the California Consumer Privacy Act — or CCPA — took effect on January 1, allowing state residents to reclaim their right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies have on them, delete the data when they no longer want a company to have it, and demand that their data isn’t sold to third parties. All of this is much to the chagrin of the tech giants, some of which had spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.

But to say things are going well is a stretch.

Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. The California tech scene had more than a year to prepare, but some have made it downright difficult and — ironically — more invasive in some cases for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.

Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”

“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.

“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.

Despite his frustration, Davis got further than others. While some companies have made it easy for users to opt out of having their data sold by adding the legally required “Do not sell my info” links to their websites, many have not. Some have made it near impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are still in a grace period — but only until July, when the CCPA’s enforcement provisions kick in. Until then, users are finding ways around the problem by collating and sharing links to data portals to help others access their data.

“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.

PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction, Cline said, extended their portals to users outside of California, even though other states are gearing up to push similar laws to the CCPA.

But not all data portals are created equal. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each request to access or delete a user’s data without inadvertently giving it away to the wrong person.

Last year, security researcher James Pavur impersonated his fiancee and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which mandated companies operating on the continent allow users access to their data.

(Image: Twitter/@jeanqasaur)

The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some that’s just an email address to send the data.

Others require sending in even more sensitive information just to prove it’s them.

Indeed, i360, a little-known advertising and data company, until recently asked California residents for their full Social Security number. This recently changed to just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes the extra step of asking for a selfie before it will turn over any of a customer’s data.

Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup, Clearview AI, which recently made headlines for creating a surveillance system made up of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.

As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help large and small companies alike handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to help expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.

Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.

The service asks users to grant it access to their inbox, scanning for email subject lines that contain company names and using that data to determine which companies a user can request their data from or ask to delete it. (The service requests access to a user’s Gmail, but the company claims it will “never read” users’ emails.) Last month, during a publicity push, Mine inadvertently copied a couple of emailed data requests to TechCrunch, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.

(Screenshot: Zack Whittaker/TechCrunch)

TechCrunch alerted Mine — and the two requesters — to the security lapse.

“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”
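The matching approach described above — scanning subject lines for known company names — is easy to illustrate. The sketch below is a hypothetical illustration of that idea only; the company list, subject lines and function name are invented for this example and this is not Mine’s implementation.

```python
# Hypothetical sketch of subject-line matching: given known company names and a
# user's email subject lines, list companies that appear to hold the user's data.
# Illustration of the approach described above, not Mine's actual code.

KNOWN_COMPANIES = ["Crunch Fitness", "ExampleAir", "ShopDemo"]

def companies_in_subjects(subjects: list[str]) -> set[str]:
    found = set()
    for subject in subjects:
        lowered = subject.lower()
        for company in KNOWN_COMPANIES:
            if company.lower() in lowered:
                found.add(company)
    return found

subjects = [
    "Your Crunch Fitness membership renewal",
    "ExampleAir booking confirmation",
    "Weekend newsletter",
]
print(companies_in_subjects(subjects))  # -> Crunch Fitness and ExampleAir
```

Naive name matching is also exactly how a request meant for one “Crunch” can end up addressed to another, which is the kind of slip described above.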

For now, many startups have caught a break.

The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store personal data on more than 50,000 users or devices will largely escape having to immediately comply with CCPA. But that doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.
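As a rough illustration of the thresholds mentioned above, CCPA applicability can be thought of as a simple predicate. The sketch below models only the two criteria named in this article; the statute itself includes further tests (such as a prong for deriving a large share of revenue from selling personal data), so treat this as an assumption-laden simplification rather than legal guidance.

```python
# Rough sketch of the CCPA applicability thresholds named in this article.
# Only the revenue and record-count criteria mentioned above are modeled;
# the statute contains additional tests, so this is illustrative only.

def likely_covered_by_ccpa(annual_revenue_usd: float, consumer_records: int) -> bool:
    return annual_revenue_usd >= 25_000_000 or consumer_records > 50_000

print(likely_covered_by_ccpa(10_000_000, 5_000))    # False: early-stage startup
print(likely_covered_by_ccpa(10_000_000, 120_000))  # True: data volume alone triggers it
```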

“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning,” he said.

CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.

But with the looming threat of hefty fines just months away, time is running out for the non-compliant.

White House reportedly aims to double AI research budget to $2B

By Devin Coldewey

The White House is pushing to dedicate an additional billion dollars to fund artificial intelligence research, effectively doubling the budget for that purpose outside of Defense Department spending, Reuters reported today, citing people briefed on the plan. Investment in quantum computing would also receive a major boost.

The 2021 budget proposal would reportedly increase AI R&D funding to nearly $2 billion, and quantum to about $860 million, over the next two years.

The U.S. is engaged in what some describe as a “race” with China in the field of AI, though unlike most races this one has no real finish line. Instead, any serious lead means opportunities in business and military applications that may grow to become the next globe-spanning monopoly, a la Google or Facebook — which themselves, as quasi-sovereign powers, invest heavily in the field for their own purposes.

Simply doubling the budget isn’t a magic bullet to take the lead, if anyone can be said to have it. But deploying AI to new fields is not without cost, and an increase in grants and other direct funding will almost certainly enable the technology to be applied more widely. Machine learning has proven useful for a huge variety of purposes, and for many researchers and labs it is a natural next step — but expertise and processing power cost money.

It’s not clear how the funds would be disbursed; it’s possible existing programs like federal Small Business Innovation Research awards could be expanded with this topic in mind, or direct funding to research centers like the National Labs could be increased.

Research into quantum computing and related fields is likewise costly. Google’s milestone last fall of achieving “quantum supremacy,” or so the claim goes, is only the beginning for the science, and neither the hardware nor the software involved has much in the way of precedent.

Furthermore, quantum computers as they exist today, and for the foreseeable future, have very few valuable applications, meaning pursuing them is an investment only in the most optimistic sense. However, government funding via SBIR awards and similar grants is intended to de-risk exactly this kind of research.

The proposed budget for NASA is also expected to receive a large increase in order to accelerate and reinforce various efforts within the Artemis Moon landing program. It was not immediately clear how these funds would be raised or from where they would be reallocated.
