Luxembourg’s National Commission for Data Protection (CNPD) has hit Amazon with a record-breaking €746 million ($887m) GDPR fine over the way it uses customer data for targeted advertising purposes.
Amazon disclosed the ruling in an SEC filing on Friday in which it slammed the decision as baseless and added that it intended to defend itself “vigorously in this matter.”
“Maintaining the security of our customers’ information and their trust are top priorities,” an Amazon spokesperson said in a statement. “There has been no data breach, and no customer data has been exposed to any third party. These facts are undisputed.
“We strongly disagree with the CNPD’s ruling, and we intend to appeal. The decision relating to how we show customers relevant advertising relies on subjective and untested interpretations of European privacy law, and the proposed fine is entirely out of proportion with even that interpretation.”
The penalty is the result of a 2018 complaint by French privacy rights group La Quadrature du Net, which claims to represent the interests of thousands of Europeans to ensure their data isn’t used by big tech companies to manipulate their behavior for political or commercial purposes. The complaint, which also targets Apple, Facebook, Google and LinkedIn and was filed on behalf of more than 10,000 customers, alleges that Amazon manipulates customers for commercial ends by choosing what advertising and information they receive.
La Quadrature du Net welcomed the fine issued by the CNPD, which “comes after three years of silence that made us fear the worst.”
“The model of economic domination based on the exploitation of our privacy and free will is profoundly illegitimate and contrary to all the values that our democratic societies claim to defend,” the group added in a blog post published on Friday.
The CNPD has also ruled that Amazon must commit to changing its business practices. However, the regulator has not publicly commented on its decision, and Amazon didn’t specify what revised business practices it is proposing.
The record penalty, which trumps the €50 million GDPR penalty levied against Google in 2019, comes amid heightened scrutiny of Amazon’s business in Europe. In November last year, the European Commission announced formal antitrust charges against the company, saying the retailer has misused its position to compete against third-party businesses using its platform. At the same time, the Commission opened a second investigation into its alleged preferential treatment of its own products on its site and those of its partners.
Imagine a world where no one’s privacy is breached, no faces are scanned into a gargantuan database, and no privacy laws are broken. This is a world that is fast approaching. Could companies simply dump the need for real-world CCTV footage, and switch to synthetic humans, acting out potential scenarios a million times over? That’s the tantalizing prospect of a new UK startup that has attracted funding from an influential set of investors.
UK-based Mindtech Global has developed what it describes as an end-to-end synthetic data creation platform. In plain English, its system can imagine visual scenarios such as someone’s behavior inside a store, or crossing the street. This data is then used to train AI-based computer vision systems for customers in sectors such as retail, warehousing, healthcare, transportation and robotics. It literally trains a ‘synthetic’ CCTV camera inside a synthetic world.
One investor is particularly significant: In-Q-Tel, which invests in startups that support US intelligence capabilities and is based in Arlington, Virginia…
Mindtech’s Chameleon platform is designed to help computers understand and predict human interactions. As we all know, current approaches to training AI vision systems require companies to source data such as CCTV footage. The process is fraught with privacy issues, costly, and time-consuming. Mindtech says Chameleon solves that problem, as its customers quickly “build unlimited scenes and scenarios using photo-realistic smart 3D models”.
An added bonus is that these synthetic humans can be used to train AI vision systems to weed out human failings around diversity and bias.
Mindtech CEO Steve Harris
Steve Harris, CEO, Mindtech said: “Machine learning teams can spend up to 80% of their time sourcing, cleaning, and organizing training data. Our Chameleon platform solves the AI training challenge, freeing the industry to focus on higher-value tasks like AI network innovation. This round will enable us to accelerate our growth, enabling a new generation of AI solutions that better understand the way humans interact with each other and the world around them.”
So what can you do with it? Consider the following: A kid slips from its parent’s hand at the mall. The synthetic CCTV running inside Mindtech’s scenario is trained thousands of times over how to spot it in real-time and alert staff. Another: a delivery robot meets kids playing in a street and works out how to avoid them. Finally: a passenger on the platform is behaving erratically too close to the rails – the CCTV is trained to automatically spot them and send help.
Nat Puffer, Managing Director (London), In-Q-Tel commented: “Mindtech impressed us with the maturity of their Chameleon platform and their commercial traction with global customers. We’re excited by the many applications this platform has across diverse markets and its ability to remove a significant roadblock in the development of smarter, more intuitive AI systems.”
Miles Kirby, CEO, Deeptech Labs said: “As a catalyst for deeptech success, our investment, and accelerator program supports ambitious teams with novel solutions and the appetite to build world-changing companies. Mindtech’s highly-experienced team are on a mission to disrupt the way AI systems are trained, and we’re delighted to support their journey.”
There is of course potential for darker applications, such as spotting petty theft inside supermarkets, or perhaps ‘optimising’ hard-pressed warehouse workers in some dystopian fashion. However, in theory, Mindtech’s customers can use this platform to rid themselves of the biases of middle-managers, and better serve customers.
Titan, a startup that is building a retail investment management platform aimed at the new generation of “everyday investors,” has closed on $58 million in a Series B round led by Andreessen Horowitz (a16z).
The financing comes just over five months after Titan raised $12.5 million in a Series A round led by General Catalyst, and brings the startup’s total raised since its 2017 inception to $75 million. It values the company at $450 million.
General Catalyst also put money in the Series B round, along with BoxGroup, Ashton Kutcher’s Sound Ventures and a group of professional athletes and celebrities including Odell Beckham Jr., Kevin Durant, Jared Leto and Will Smith.
The startup, which describes itself as “a new-guard active investment manager,” launched its first investment strategy in February of 2018 and today has 30,000 users. Titan’s platform grew by 500% in the last 12 months, largely organically, according to the company, which expects to cross its first billion in assets under management later this year. At the time of its last raise in February, Titan co-founder and co-CEO Joe Percoco said the startup was approaching $500 million in assets under management and was cash flow positive last year.
“What Fidelity and its iconic mutual funds were for baby boomers, Titan is for new generations. Titan is the first DTC, mobile-first investment platform where everyday investors, irrespective of wealth, can have their capital actively managed by investment experts in long-term strategies,” Percoco said.
He went on to describe the mutual fund or an ETF as “fundamentally just a piece of technology for an investment manager to accept money from someone in order to invest in securities.” He likened that piece of technology to a VHS tape that “does the job, but is archaic for a few reasons.” Those reasons, he said, are that the investor is an “anonymized dollar value” and the products have layers of costs with high minimums and are difficult to create.
“The factory that creates the mutual fund itself is very old. The entire investment management industry is predicated on these VHS tapes,” Percoco said. “These are the archaic technologies being used. We’re rebuilding it entirely. Fidelity is an old factory. Titan is effectively a new factory.”
Image credits: Titan
On August 3, Titan plans to launch its cryptocurrency offering, which the company claims will be the first and only actively managed portfolio of cryptocurrency assets available to U.S. investors. At launch, Titan Crypto will be available to all U.S. residents except those with home addresses in New York. Access for NY-based residents will be provided once Titan’s custodial partner receives regulatory approval for the state’s jurisdiction.
Looking ahead, Titan said it plans to allow other investment managers to launch their products from its “factory.”
“The initial strategies on Titan’s platform are predominantly in stocks,” Percoco said. “We’re already getting in-bounds from multibillion-dollar managers asking to launch products on Titan.”
The company plans to use its new capital toward continuing to build out its underlying platform and suite of investment products as well as hiring. It currently has about 30 employees, up from seven a year ago. Percoco expects that Titan will have 100 employees by this time next year.
A16z general partner Anish Acharya said that since meeting the Titan team last year, his firm has “consistently been impressed” by Titan’s product vision, execution and team.
“If we pull back and look at trends happening in consumer investing, we can see that younger generations are embracing more risk in investing, that they demand easy to navigate, mobile-first interfaces and transparency from their banks, and that they want to deeply understand how their money is being invested and participate in the learnings from that process,” said Acharya, who will be joining Titan’s board as part of the financing.
In his view, Titan sits at an “interesting intersection” between passive robo-advisors and active stock-picking, “allowing their customers to ride shotgun alongside some of the best fund managers in the world, thus achieving the returns and knowledge of stock picking without having to make the decisions themselves.”
Edtech entrepreneurs are using their moment in the sun to rethink the structures and impact of nearly every aspect of modern-day learning, from the art of testing to the reality of information retention. Yet, the most popular product up for grabs may just be a seemingly simple one: the almighty tutoring session. Numerade, an edtech startup founded in 2018, just had its take on scalable, high-quality tutoring sessions valued at $100 million.
Numerade sells subscriptions to short-form videos that explain how certain equations and experiments work, and then uses an algorithm to make those explainers better suited to a learner’s comprehension style. Per CEO and co-founder Nhon Ma, the startup’s focus on asynchronous, contextualized content will make it easier to scale high-quality tutoring at an affordable price.
“Real teaching involves sight and sound, but also the context of how something is delivered in the vernacular of how a student actually learns,” Ma said. And he wants Numerade to be a platform that goes beyond the robotic Q&A and step-by-step answer platforms such as Wolfram Alpha, and actually integrates science into how solutions are communicated to users.
Today, the company announced that it has raised $26 million at a $100 million valuation in a round including investors such as IDG Capital, General Catalyst, Mucker Capital, Kapor Capital, Interplay Ventures, and strategic investors such as Margo Georgiadis, the former CEO of Ancestry, Khaled Helioui, the former CEO of Bigpoint Games and angel investor in Uber, and Taavet Hinrikus, founder of Wise.
“There are supply and demand mechanics inherent to synchronous tutoring,” Ma said. He explained how the best tutors have limited time, may demand premiums, and overall lead to a constraint on the supply side of marketplaces. Group tutoring has been an option employed by some companies, pairing multiple students with one tutor for efficiency’s sake, but he thinks that it is “really outdated, and actually decreases the quality of tutoring.”
With Numerade avoiding both live learning and Wolfram Alpha-style explainers that just give the answer to students, the company has turned to a third option: videos. Videos are not new to edtech, but they currently reside mostly in massive open online course providers such as Coursera or Udemy, or ‘edutainment’ platforms like MasterClass and Outschool. Numerade thinks that teacher-led or educator-guided videos can be built around a specific problem within Chapter 2 of Fundamentals of Physics.
Student learning from Numerade videos. Image Credits: Numerade
The company has three main products: bootcamp videos for foundational knowledge, step-by-step videos that turn that knowledge into a skill and focus on sequence, and finally, quizzes that assess how much of the aforementioned information was retained.
The true moonshot in the startup, though, is the algorithm that decides which students see which videos. When explaining how the algorithm works, Ma used words like “deep learning” and “computer vision” and “ontology” but mostly the algorithm boils down to this: it wants to bring TikTok-level specificity to educational videos, using users’ historical actions to better push certain content that fits their learning style.
For example, the startup believes that offering step-by-step videos helps the brain understand patterns and the diversity of problems, and eventually better understand solutions. The algorithm mostly shows up in Numerade quizzes, which see how a student performs on a topic and then feed those results back into the model to presumably better tailor a new series of bootcamps and questions.
“To help a student grow and learn, our model first understands their strengths and weaknesses and then surfaces relevant conceptual, practical, and assessment content to build their subject knowledge. The algorithm can parse structured data from videos and provide different teaching styles to suit the needs of all students,” he said.
As of now, Numerade’s algorithm appears preliminary. Users need to be paid subscribers and have a sufficient usage history in order to start benefiting from more targeted content. Even so, it’s unclear how the algorithm delivers different pedagogical content to students beyond resurfacing concepts that a student erred on in a previous quiz.
Numerade’s moonshot is built on an equally ambitious premise: that students want to learn concepts, not just Google for the fastest answer so they can finish procrastinated homework. Ma explained how engagement time on Numerade videos can be somewhere from double to triple the video’s entire length, which means that students are interacting with the content beyond just skipping over to the answer.
Numerade isn’t alone in trying to take on Wolfram Alpha. Over the past year, edtech unicorns like Quizlet and Course Hero have invested heavily in AI-powered chatbots and live calculators, the latter largely through acquisitions. These platforms are rallying around the idea that tech-powered tutoring sessions should prioritize speed and simplicity, instead of relationship-building and time. In other words, maybe students won’t go to a tutor once a week for math, but they will go to a platform that can methodically explain an answer at midnight, hours before their precalculus exam.
Despite its somewhat early-stage algorithm innovation and heavyweight competition, Numerade’s fresh venture backing and ability to bring in revenue is promising. While declining to divulge specifics, Ma said that the company is “quickly tracking” to eight figures in ARR, meaning it’s making at least $10 million in annual revenue from its current subscriber base. He sees perspective as Numerade’s biggest competitive advantage.
“A common criticism of commercial STEM education is that it’s too modular — textbooks teach physics as stand-alone,” Ma said. “Our algorithm does not, instead it treats STEM as an interlocking ecosystem; concepts in math, physics, chemistry, and biology are omnidirectionally related.”
Sundae, a residential real estate marketplace that pairs sellers of dated or damaged property with potential buyers, has raised $80 million in a Series C funding round co-led by Fifth Wall and General Global Capital.
QED Investors, Wellington Management, Susa Ventures, Founders Fund, First American Financial, Prudence Holdings, Crossover VC, Intersect Capital, Gaingels and Oberndorf Ventures also participated in the financing. The round marks San Francisco-based Sundae’s third financing in a 13-month time frame, bringing its total raised since its August 2018 inception to $135 million.
The company declined to reveal at what valuation its Series C was raised. It also declined to provide hard revenue figures, saying only that it saw a 600% year-over-year increase in revenue from June 2020 to June 2021.
The startup aims to help people who need to sell dated or “damaged” properties for a variety of reasons — such as job loss, illness or divorce. In some cases, according to CEO and co-founder Josh Stech, such vulnerable sellers get taken advantage of by “predatory fix and flippers” seeking to capitalize on their misfortune.
Since sellers in these situations don’t typically have the funds to fix up their properties before selling, Sundae lists the property for them on its platform – serving as an intermediary between sellers and investors. There, it is visible to about 2,600 qualified off-market buyers.
The company essentially aims to aggregate demand from “fix and flippers,” who use the marketplace to bid against each other for distressed properties. If the seller accepts and an inspection is completed, the company offers a $10,000 cash advance before closing to help homeowners with moving costs or other expenses.
“Our goal is to displace wholesalers who exploit desperate or uninformed sellers and lock them into a contract which they turn around and assign to a property investor at a steep profit,” Stech said. “The tens of thousands of dollars in lost equity that goes to a wholesaler could mean the difference between paying off debts, or having enough money to retire.”
Sundae claims that on average, sellers receive 10 offers within three days on its marketplace.
Since its launch in January 2019, the startup has slowly been expanding its marketplace geographically. It went from operating in four markets in California at the end of last year to now operating in 14 markets across Florida, Colorado, Georgia, Texas and Utah.
Sundae makes money by charging buyers in its investor marketplace a fee when it “assigns” them a property.
In the first quarter of this year, the startup launched a dedicated online marketplace for investors, where they can view properties and submit offers. Once an investor signs up to join the marketplace, they can access the full inventory of properties, including information such as photos, floor plan, 3D walkthrough and a third-party inspection report.
Looking ahead, the company plans to use its new capital to expand to new markets, invest in its platform and “build brand awareness.” It also, of course, plans to boost its current headcount of 180 mostly remote employees.
Vik Chawla, a partner at Fifth Wall, believes Sundae is serving a segment of the residential real estate market that has historically been overlooked.
“Their marketplace model simultaneously solves a crucial pain point for sellers by disrupting the wholesale industry, while delivering a platform that property investors can count on for reliable investment opportunities,” he said.
The company last raised $36 million in a Series B funding round in December 2020.
Interestingly, a slew of angel investors — including a number of athletes and celebrities — also put money in the company’s latest round, including: actor Will Smith, DJ Kygo, three-time NFL Super Bowl champion Richard Seymour of 93 Ventures, NFL All-Pro DK Metcalf of the Seattle Seahawks, Matt Chapman of the Oakland A’s, Alex Caruso of the Los Angeles Lakers, Aaron Gordon of the Denver Nuggets, Solomon Hill of the Atlanta Hawks, Kelly Olynyk of the Houston Rockets, NBA All-Star Isaiah Thomas, three-time NBA Champion & Gold Medalist Klay Thompson of the Golden State Warriors, Hassan Whiteside of the Sacramento Kings, Andrew Wiggins of the Golden State Warriors and 2020 U.S. Soccer Player of the Year and Juventus midfielder, Weston McKennie.
The Biden administration has formally accused China of the mass-hacking of Microsoft Exchange servers earlier this year, which prompted the FBI to intervene as concerns rose that the hacks could lead to widespread destruction.
The mass-hacking campaign targeted Microsoft Exchange email servers with four previously undiscovered vulnerabilities that allowed the hackers — which Microsoft already attributed to a China-backed group of hackers called Hafnium — to steal email mailboxes and address books from tens of thousands of organizations around the United States.
Microsoft released patches to fix the vulnerabilities, but the patches did not remove any backdoor code left behind by the hackers that might be used again for easy access to a hacked server. That prompted the FBI to secure a first-of-its-kind court order to effectively hack into the remaining hundreds of U.S.-based Exchange servers to remove the backdoor code. Computer incident response teams in countries around the world responded similarly by trying to notify organizations in their countries that were also affected by the attack.
In a statement out Monday, the Biden administration said the attack, launched by hackers backed by China’s Ministry of State Security, resulted in “significant remediation costs for its mostly private sector victims.”
“We have raised our concerns about both this incident and the [People’s Republic of China’s] broader malicious cyber activity with senior PRC Government officials, making clear that the PRC’s actions threaten security, confidence, and stability in cyberspace,” the statement read.
The National Security Agency also released details of the attacks to help network defenders identify potential routes of compromise. The Chinese government has repeatedly denied claims of state-backed or sponsored hacking.
The Biden administration also blamed China’s Ministry of State Security for contracting with criminal hackers to conduct unsanctioned operations, like ransomware attacks, “for their own personal profit.” The government said it was aware that China-backed hackers have demanded millions of dollars in ransom demands against hacked companies. Last year, the Justice Department charged two Chinese spies for their role in a global hacking campaign that saw prosecutors accuse the hackers of operating for personal gain.
Although the U.S. has publicly pressed the Kremlin to stop giving ransomware gangs safe harbor to operate from within Russia’s borders, the U.S. has not previously accused Beijing of launching or being involved with ransomware attacks.
“The PRC’s unwillingness to address criminal activity by contract hackers harms governments, businesses, and critical infrastructure operators through billions of dollars in lost intellectual property, proprietary information, ransom payments, and mitigation efforts,” said Monday’s statement.
The statement also said that the China-backed hackers engaged in extortion and cryptojacking, a way of forcing a computer to run code that uses its computing resources to mine cryptocurrency, for financial gain.
The Justice Department also announced fresh charges against four China-backed hackers working for the Ministry of State Security, which U.S. prosecutors said were engaged in efforts to steal intellectual property and infectious disease research into Ebola, HIV/AIDS and MERS from victims based in the U.S., Norway, Switzerland and the United Kingdom, using a front company to hide their operations.
“The breadth and duration of China’s hacking campaigns, including these efforts targeting a dozen countries across sectors ranging from healthcare and biomedical research to aviation and defense, remind us that no country or industry is safe. Today’s international condemnation shows that the world wants fair rules, where countries invest in innovation, not theft,” said deputy attorney general Lisa Monaco.
The European consumer protection umbrella group, the Beuc, said today that together with eight of its member organizations it has filed a complaint with the European Commission and with the European network of consumer authorities.
“The complaint is first due to the persistent, recurrent and intrusive notifications pushing users to accept WhatsApp’s policy updates,” it wrote in a press release.
“The content of these notifications, their nature, timing and recurrence put an undue pressure on users and impair their freedom of choice. As such, they are a breach of the EU Directive on Unfair Commercial Practices.”
After earlier telling users that notifications about the need to accept the new policy would become persistent, interfering with their ability to use the service, WhatsApp later rowed back from its own draconian deadline.
However the app continues to bug users to accept the update — with no option not to do so (users can close the policy prompt but are unable to decline the new terms or stop the app continuing to pop-up a screen asking them to accept the update).
“In addition, the complaint highlights the opacity of the new terms and the fact that WhatsApp has failed to explain in plain and intelligible language the nature of the changes,” the Beuc went on. “It is basically impossible for consumers to get a clear understanding of what consequences WhatsApp’s changes entail for their privacy, particularly in relation to the transfer of their personal data to Facebook and other third parties. This ambiguity amounts to a breach of EU consumer law which obliges companies to use clear and transparent contract terms and commercial communications.”
The organization pointed out that WhatsApp’s policy updates remain under scrutiny by privacy regulators in Europe — which it argues is another factor that makes Facebook’s aggressive attempts to push the policy on users highly inappropriate.
And while this consumer-law focused complaint is separate to the privacy issues the Beuc also flags — which are being investigated by EU data protection authorities (DPAs) — it has called on those regulators to speed up their investigations, adding: “We urge the European network of consumer authorities and the network of data protection authorities to work in close cooperation on these issues.”
The Beuc has produced a report setting out its concerns about the WhatsApp ToS change in more detail — where it hits out at the “opacity” of the new policies, further asserting:
“WhatsApp remains very vague about the sections it has removed and the ones it has added. It is up to users to seek out this information by themselves. Ultimately, it is almost impossible for users to clearly understand what is new and what has been amended. The opacity of the new policies is in breach of Article 5 of the UCTD [Unfair Contract Terms Directive] and is also a misleading and unfair practice prohibited under Article 5 and 6 of the UCPD [Unfair Commercial Practices Directive].”
Reached for comment on the consumer complaint, a WhatsApp spokesperson told us:
“Beuc’s action is based on a misunderstanding of the purpose and effect of the update to our terms of service. Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. The update does not expand our ability to share data with Facebook, and does not impact the privacy of your messages with friends or family, wherever they are in the world. We would welcome an opportunity to explain the update to Beuc and to clarify what it means for people.”
The Commission was also contacted for comment on the Beuc’s complaint — we’ll update this report if we get a response.
The complaint is just the latest pushback in Europe over the controversial terms change by Facebook-owned WhatsApp — which triggered a privacy warning from Italy back in January, followed by an urgency procedure in Germany in May when Hamburg’s DPA banned the company from processing additional WhatsApp user data.
Earlier this year, however, Facebook’s lead data regulator in the EU, Ireland’s Data Protection Commission, appeared to accept Facebook’s reassurances that the ToS changes do not affect users in the region.
German DPAs were less happy, though. And Hamburg invoked emergency powers allowed for in the General Data Protection Regulation (GDPR) in a bid to circumvent a mechanism in the regulation that (otherwise) funnels cross-border complaints and concerns via a lead regulator — typically where a data controller has their regional base (in Facebook/WhatsApp’s case that’s Ireland).
Such emergency procedures are time-limited to three months. But the European Data Protection Board (EDPB) confirmed today that its plenary meeting will discuss the Hamburg DPA’s request for it to make an urgent binding decision — which could see the Hamburg DPA’s intervention set on a more lasting footing, depending upon what the EDPB decides.
In the meanwhile, calls for Europe’s regulators to work together to better tackle the challenges posed by platform power are growing, with a number of regional competition authorities and privacy regulators actively taking steps to dial up their joint working — in a bid to ensure that expertise across distinct areas of law doesn’t stay siloed and, thereby, risk disjointed enforcement, with conflicting and contradictory outcomes for Internet users.
There seems to be a growing understanding on both sides of the Atlantic for a joined up approach to regulating platform power and ensuring powerful platforms don’t simply get let off the hook.
Peter Boyce II has left General Catalyst to start his own firm, a little over a year after the venture capital firm promoted him to partner. His new firm is called Stellation Capital, and filings indicate that he is looking to raise up to $40 million for the debut investment vehicle. Sources say that most, if not all, of that total has been closed since the initial SEC filing in April.
Boyce declined to comment for this story. It’s been a quiet transition for the investor; his LinkedIn and Twitter have not been updated to indicate his new job title, but his personal website indicates the new gig. That an investor could leave a prominent venture capital firm after an eight-year tenure to raise tens of millions of dollars of his own — and somehow do so quietly and with minimal coverage — might be a result of the funding frenzy and the consequent numbness to yet another filing.
Boyce joined GC in 2013 and led investments in Ro, Macro, towerIQ and Atom. He’s also supported portfolio companies such as Giphy, Jet.com and Circle. Beyond GC, Boyce co-founded and ran Rough Draft Ventures, a program that incubates startups founded by students and recent graduates and promotes entrepreneurship on campuses.
Stellation Capital will leverage his work and name into early-stage investments. The name of the firm, per its website, is derived from the Latin root of stella, which means star. The name also describes “the process of extending a polygon in new dimensions to form a new shape…just like we’re extending the potential of a founder into new possibilities.”
It’s unclear what the firm’s check size and cadence will be, but its website says it wants to back companies at “their earliest stages.”
A group of 37 attorneys general filed a second major multi-state antitrust lawsuit against Google Wednesday, accusing the company of abusing its market power to stifle competitors and forcing consumers into in-app payments that grant the company a hefty cut.
New York Attorney General Letitia James is co-leading the suit alongside the Tennessee, North Carolina and Utah attorneys general. The bipartisan coalition represents 36 U.S. states, including California, Florida, Massachusetts, New Jersey, New Hampshire, Colorado and Washington, as well as the District of Columbia.
“Through its illegal conduct, the company has ensured that hundreds of millions of Android users turn to Google, and only Google, for the millions of applications they may choose to download to their phones and tablets,” James said in a press release. “Worse yet, Google is squeezing the lifeblood out of millions of small businesses that are only seeking to compete.”
In December, 35 states filed a separate antitrust suit against Google, alleging that the company engaged in illegal behavior to maintain a monopoly on the search business. The Justice Department filed its own antitrust case focused on search last October.
In the new lawsuit, embedded below, the bipartisan coalition of states alleges that Google uses “misleading” security warnings to keep consumers and developers within its walled app garden, the Google Play store. But the fees that Google collects from Android app developers are likely the meat of the case.
“Not only has Google acted unlawfully to block potential rivals from competing with its Google Play Store, it has profited by improperly locking app developers and consumers into its own payment processing system and then charging high fees,” District of Columbia Attorney General Karl Racine said.
Like Apple, Google herds all app payment processing into its own service, Google Play Billing, and reaps the rewards: a 30 percent cut of all payments. Much of the criticism here forms a case that could — and likely will — be made against Apple, which exerts even more control over its own app ecosystem. Google doesn’t have an exclusive app equivalent to iMessage that keeps users locked in in quite the same way.
While the lawsuit discusses Google’s “monopoly power” in the app marketplace, the elephant in the room is Apple — Google’s thriving direct competitor in the mobile software space. The lawsuit argues that consumers face pressure to stay locked into the Android ecosystem, but on the Android side at least, much of that is ultimately familiarity and sunk costs. The argument on the Apple side of the equation here is likely much stronger.
The din over tech giants squeezing app developers with high mobile payment fees is just getting louder. The new multi-state lawsuit is the latest beat, but the topic has been white hot since Epic took Apple to court over its desire to bypass Apple’s fees by accepting mobile payments outside the App Store. When Epic set up a workaround, Apple kicked it out of the App Store and Epic Games v. Apple was born.
The Justice Department is reportedly already interested in Apple’s own app store practices, along with many state AGs who could launch a separate suit against the company at any time.
A code repository used by the New York state government’s IT department was left exposed on the internet, allowing anyone to access the projects inside, some of which contained secret keys and passwords associated with state government systems.
Organizations use GitLab to collaboratively develop and store their source code — as well as the secret keys, tokens and passwords needed for the projects to work — on servers that they control. But the exposed server was accessible from the internet and configured so that anyone from outside the organization could create a user account and log in unimpeded, SpiderSilk’s chief security officer Mossab Hussin told TechCrunch.
When TechCrunch visited the GitLab server, the login page showed it was accepting new user accounts. It’s not known exactly how long the GitLab server was accessible in this way, but historic records from Shodan, a search engine for exposed devices and databases, show the GitLab server was first detected on the internet on March 18.
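For context, open self-registration is an instance-level GitLab setting rather than an inherent property of a self-managed server. On an Omnibus-packaged installation, an administrator can close it off in `/etc/gitlab/gitlab.rb` — a minimal, illustrative sketch of the kind of hardening involved, not a description of the state’s actual configuration:

```ruby
# /etc/gitlab/gitlab.rb (Omnibus GitLab) -- illustrative hardening only.
# Disable open self-service registration so outsiders cannot create accounts
# on an internet-reachable instance.
gitlab_rails['gitlab_signup_enabled'] = false
```

After editing the file, running `sudo gitlab-ctl reconfigure` applies the change; the same switch is also exposed in the admin area’s sign-up restrictions settings.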
SpiderSilk shared several screenshots showing that the GitLab server contained secret keys and passwords associated with servers and databases belonging to New York State’s Office of Information Technology Services. Fearing the exposed server could be maliciously accessed or tampered with, the startup asked for help in disclosing the security lapse to the state.
TechCrunch alerted the New York governor’s office to the exposure a short time after the server was found. Several emails to the governor’s office with details of the exposed GitLab server were opened but were not responded to. The server went offline on Monday afternoon.
Scot Reif, a spokesperson for New York State’s Office of Information Technology Services, said the server was “a test box set up by a vendor, there is no data whatsoever, and it has already been decommissioned by ITS.” (Reif declared his response “on background” and attributable to a state official, terms that would require both parties to agree in advance; we are printing the reply as we were not given the opportunity to reject those terms.)
When asked, Reif would not say who the vendor was or if the passwords on the server were changed. Several projects on the server were marked “prod,” common shorthand for “production,” a term for servers that are actively in use. Reif also would not say if the incident was reported to the state’s Attorney General’s office. When reached, a spokesperson for the Attorney General did not comment by press time.
TechCrunch understands the vendor is Indotronix-Avani, a New York-based company with offices in India that is owned by venture capital firm Nigama Ventures. Several screenshots show some of the GitLab projects were modified by a project manager at Indotronix-Avani. The vendor touts New York State on its website, along with other government customers, including the U.S. State Department and the U.S. Department of Defense.
Indotronix-Avani spokesperson Mark Edmonds did not respond to requests for comment.
Illumio, a self-styled zero trust unicorn, has closed a $225 million Series F funding round at a $2.75 billion valuation.
The round was led by Thoma Bravo, which recently bought cybersecurity vendor Proofpoint for $12.3 billion, and supported by Franklin Templeton, Hamilton Lane, and Blue Owl Capital.
The round lands more than two years after Illumio’s Series E funding round, in which it raised $65 million and fueled speculation of an impending IPO. The company’s founder, Andrew Rubin, still isn’t ready to say whether the company plans to go public, though he told TechCrunch: “If we do our job right, and if we make our customers successful, I’d like to think that would be part of our journey.”
Illumio’s latest funding round is well-timed. Not only does it come amid a huge rise in successful cyberattacks which show that some of the more traditional cybersecurity measures are no longer working, from the SolarWinds hack in early 2020 to the more recent attack on Colonial Pipeline, but it also comes just weeks after President Joe Biden issued an executive order pushing federal agencies to implement significant cybersecurity initiatives, including a zero trust architecture.
“And just a couple of weeks ago, Anne Neuberger [deputy national security adviser for cybersecurity] put out a memo on White House stationery to all of corporate America saying we’re living through a ransomware pandemic, and here’s six things that we’re imploring you to do,” Rubin says. “One of them was to segment your network.”
Illumio focuses on protecting data centers and cloud networks through something it calls micro-segmentation, which it claims makes it easier to manage and guard against potential breaches, as well as to contain a breach if one occurs. This zero trust approach to security — a concept centered on the belief that businesses should not automatically trust anything inside or outside their perimeters — has never been more important for organizations, according to Illumio.
“Cyber events are no longer constrained to cyber space,” says Rubin. “That’s why people are finally saying that, after 30 years of relying solely on detection to keep us safe, we cannot rely on it 100% of the time. Zero trust is now becoming the mantra.”
Illumio tells TechCrunch it will use the newly raised funds to make a “huge” investment in its field operations and channel partner network, and to invest in innovation, engineering and its product.
The late-stage startup, which was founded in 2013 and is based in California, says more than 10% of Fortune 100 companies — including Morgan Stanley, BNP Paribas SA and Salesforce — now use its technology to protect their data centers, networks and other applications. It saw 100% international growth during the pandemic, and says it’s also broadening its customer base across more industries.
The company has now raised more than $550 million from investors including Andreessen Horowitz, General Catalyst and Formation 8.
While every food delivery company is trying to get an edge on its rivals with discount codes, faster service and a turn into the realm of spooky with ghost kitchens and dark stores, a startup built on a lighter, social concept — letting people see what their friends are chomping on, order food and drinks for each other and place group orders, with buyers picking it all up themselves — has just raised a substantial Series B and says it is already profitable in a number of markets.
Snackpass, which describes itself as “food meets friends” — essentially a social commerce platform for ordering from restaurants, with “snack,” the CEO tells me, having a double meaning: eating (of course) and a flirtatious reference to a cutie pie — has picked up $70 million, a super-sized Series B that it will use to continue expanding to more markets in the U.S.
Conceived four years ago while Kevin Tan, the CEO who co-founded the company with Jamie Marshall, was still a student at Yale studying physics, Snackpass has grown by remaining true to its higher-ed roots. The startup now has 500,000 users across 13 college towns, and has seen its growth explode 7x in the last three months alone. This round values the startup at over $400 million.
This latest tranche of funding is coming from an interesting group of investors. Led by Craft Ventures, it also includes Andreessen Horowitz (which led its $21 million Series A), General Catalyst, Y Combinator, and a long list of individual backers that speaks to the attention Snackpass is getting and the place it’s carving out for itself as a go-to food platform for millennials and younger users.
That list includes AirAngels, the Airbnb alumni investor syndicate; Bastian Lehmann of the Uber-acquired delivery giant Postmates (et tu, Bastian?); David Grutman, a hospitality entrepreneur; Draymond Green of the Golden State Warriors; Gaingels; HartBeat Ventures, Kevin Hart’s venture fund; musician celebs the Jonas Brothers; Shrug Capital (the VC that says it’s interested in consumer startups that are actually interesting to “non-tech” audiences); Pags Group, the family office of the Boston Celtics co-owner Stephen Pagliuca; hip DJ Steve Aoki; Turner Novak of Banana Capital; William Barnes of Moving Capital; and the Uber alumni investor syndicate.
The vast majority of food-ordering platforms these days are focused on delivery and, in many cases, on ways of getting an edge over other platforms in executing on it — a push that often comes at the expense of margins thinner than a Roman pizza. Snackpass’s big breakthrough, if you could call it that, was to simply step away from that one-upmanship altogether, aiming to disrupt something much more mundane: the queue.
When Snackpass asked its users what they would do if they weren’t using the app, Tan told me in an interview, they said: “Oh, I just stand in line to order.”
“The market share right now is owned by people standing in line at the register, and placing their order. Our vision is that in five years that will no longer exist, like, there will be no more registers. We don’t think it makes any sense.”
He notes that those who really want delivery can opt for that, too — Snackpass integrates with delivery services like UberEats to fulfill it — but 90% of orders on Snackpass are pickup, meaning the company doesn’t have to deal with its own fleets of delivery people, the infrastructure around that, or the operating costs of providing it.
It turns out a lot of young people are happy to pop out to get something nice to eat. They get to socialize, and take a selfie with their food or drink (boba tea figures strongly) at the venue where it’s being bought. It becomes an experience.
It’s also where the market is in another sense. “What people don’t realize is delivery is only 8% of the restaurant industry,” Tan told me. “And while it’s very much competed for by like big companies, and it’s a huge market, the restaurant industry, is like, much bigger, it’s $800 billion. And 90% of that purchasing is still offline,” he continued, referring to the many people who just queue up, order, buy, and leave. “It’s anonymous, and it’s on the verge of disruption. And we’re focused on that much bigger blue ocean.”
Its formula seems to be working with its target users. Tan said that the service has 80% penetration with students in the markets where it has launched. The average customer orders four and a half times a month, with some customers ordering every day. “You can actually see that it’s like, five to ten times more engagement than the delivery platforms, like UberEats.”
The company’s commissions vary, starting at 7%, and its current suite includes online ordering, self-service kiosks, digital menus, marketing services and a customer referral program. It’s already profitable (in certain markets), but as it continues to grow (and perhaps extends to other demographics) you can imagine it adding to and expanding on all of these.
There is something about Snackpass that reminds me a lot of Snapchat: not just that the names have a similar ring to them, and not just that both have resonated with (and squarely target) college-aged users. It’s the whimsy of the app, and how it takes a light touch to something that might otherwise feel cumbersome, mundane or, basically, what older people do.
Right now, there isn’t much of a social “user graph” per se on Snackpass, nor does it integrate particularly deeply with any specific social apps, but you could imagine a partnership there down the line, especially considering that Snap is getting a whole lot more involved with commerce now.
“In building a social experience around food through shared rewards, gifting, and a social activity feed, Snackpass has created a dynamic and attractive restaurant ordering system,” says Bryan Rosenblatt, partner, Craft Ventures, in a statement. “The growth of its marketplace and virality of the product coupled with Snackpass’ outstanding team and vision, make it the ultimate solution for consumers and businesses alike. We are thrilled to help take Snackpass to the next level with this latest round of funding.”
Updated to clarify that Snackpass is profitable in some but not all markets; to correct the spelling and names of some of the investors; and to note that Snackpass currently does not work with DoorDash.
An international coalition of consumer protection, digital and civil rights organizations and data protection experts has added its voice to growing calls for a ban on what’s been billed as “surveillance-based advertising”.
The objection is to a form of digital advertising that relies upon a massive apparatus of background data processing which sucks in information about individuals, as they browse and use services, to create profiles which are used to determine which ads to serve (via multi-participant processes like the high speed auctions known as real-time bidding).
The EU’s lead data protection supervisor previously called for a ban on targeted advertising which relies upon pervasive tracking — warning over a multitude of associated rights risks.
Last fall the EU parliament also urged tighter rules on behavioral ads.
Back in March, a US coalition of privacy, consumer, competition and civil rights groups also took collective aim at microtargeting. So pressure is growing on lawmakers on both sides of the Atlantic to tackle exploitative adtech as consensus builds over the damage associated with mass surveillance-based manipulation.
At the same time, momentum is clearly building for pro-privacy consumer tech and services — showing the rising store that users and innovators place on business models that respect people’s data.
The growing uptake of such services underlines how alternative, rights-respecting digital business models are not only possible (and accessible, with many freemium offerings) but increasingly popular.
In an open letter addressing EU and US policymakers, the international coalition — which comprises 55 organizations and more than 20 experts, including groups like Privacy International, the Open Rights Group, the Center for Digital Democracy, the New Economics Foundation, Beuc, Edri and Fairplay — urges legislative action, calling for a ban on ads that rely on “systematic commercial surveillance” of Internet users in order to serve what Facebook founder Mark Zuckerberg likes, euphemistically, to refer to as ‘relevant ads’.
The problem with Zuckerberg’s (self-serving) framing is that, as the coalition points out, the vast majority of consumers don’t actually want to be spied upon to be served with these creepy ads.
Any claimed ‘relevance’ is irrelevant to consumers who experience ad-stalking as creepy and unpleasant. (And just imagine how the average Internet user would feel if they could peek behind the adtech curtain — and see the vast databases where people are profiled at scale so their attention can be sliced and diced for commercial interests and sold to the highest bidder).
The coalition points to a report examining consumer attitudes to surveillance-based advertising, prepared by one of the letter’s signatories (the Norwegian Consumer Council; NCC), which found that only one in ten people are positive about commercial actors collecting information about them online — and only one in five think ads based on personal information are okay.
1/4 80-90% of people online don't want to be spied on for 'more relevant ads,' finds @Forbrukerradet's report.
— EDRi (@edri) June 23, 2021
A full third of respondents to the survey were “very negative” about microtargeted ads — while almost half think advertisers should not be able to target ads based on any form of personal information.
The report also highlights a sense of impotence among consumers when they go online, with six out of ten respondents feeling that they have no choice but to give up information about themselves.
That finding should be particularly concerning for EU policymakers as the bloc’s data protection framework is supposed to provide citizens with a suite of rights related to their personal data that should protect them against being strong-armed to hand over info — including stipulating that if a data controller intends to rely on user consent to process data then consent must be informed, specific and freely given; it can’t be stolen, strong-armed or sneaked through using dark patterns. (Although that remains all too often the case.)
Forced consent is not legal under EU law — yet, per the NCC’s European survey, a majority of respondents feel they have no choice but to be creeped on when they use the Internet.
That in turn points to an ongoing EU enforcement failure over major adtech-related complaints, scores of which have been filed in recent years under the General Data Protection Regulation (GDPR) — some of which are now over three years old (yet still haven’t resulted in any action against rule-breakers).
Over the past couple of years EU lawmakers have acknowledged problems with patchy GDPR enforcement — and it’s interesting to note that the Commission suggested some alternative enforcement structures in its recent digital regulation proposals, such as for oversight of very large online platforms in the Digital Services Act (DSA).
In the letter, the coalition suggests the DSA as the ideal legislative vehicle to contain a ban on surveillance-based ads.
Negotiations to shape a final proposal which EU institutions will need to vote on remain ongoing — but it’s possible the EU parliament could pick up the baton to push for a ban on surveillance ads. It has the power to amend the Commission’s legislative proposals and its approval is needed for draft laws to be adopted. So there’s plenty still to play for.
“In the US, we urge legislators to enact comprehensive privacy legislation,” the coalition adds.
The coalition is backing up its call for a ban on surveillance-based advertising with another report (also by the NCC) which lays out the case against microtargeting — summarizing the raft of concerns that have come to be attached to manipulative ads as awareness of the adtech industry’s vast, background people-profiling and data trading has grown.
Listed concerns not only focus on how privacy-stripping practices are horrible for individual consumers (enabling the manipulation, discrimination and exploitation of individuals and vulnerable groups) but also flag the damage to digital competition as a result of adtech platforms and data brokers intermediating and cannibalizing publishers’ revenues — eroding, for example, the ability of professional journalism to sustain itself and creating the conditions where ad fraud has been able to flourish.
Another contention is that the overall health of democratic societies is put at risk by surveillance-based advertising — as the apparatus and incentives fuel the amplification of misinformation and create security risks, and even national security risks. (Strong and independent journalism is also, of course, a core plank of a healthy democracy.)
“This harms consumers and businesses, and can undermine the cornerstones of democracy,” the coalition warns.
“Although we recognize that advertising is an important source of revenue for content creators and publishers online, this does not justify the massive commercial surveillance systems set up in attempts to ‘show the right ad to the right people’,” the letter goes on. “Other forms of advertising technologies exist, which do not depend on spying on consumers, and cases have shown that such alternative models can be implemented without significantly affecting revenue.
“There is no fair trade-off in the current surveillance-based advertising system. We encourage you to take a stand and consider a ban of surveillance-based advertising as part of the Digital Services Act in the EU, and for the U.S. to enact a long overdue federal privacy law.”
The letter is just the latest salvo against ‘toxic adtech’. And advertising giants like Facebook and Google have — for several years now — seen the pro-privacy writing on the wall.
Hence Facebook’s claimed ‘pivot to privacy’; its plan to lock in its first party data advantage (by merging the infrastructure of different messaging products); and its keen interest in crypto.
It’s also why Google has been working on a stack of alternative adtech that it wants to replace third party tracking cookies. Although its proposed replacement — the so-called ‘Privacy Sandbox’ — would still enable groups of Internet users to be opaquely clustered by its algorithms into ‘interest’ buckets for ad targeting purposes, which doesn’t look great for Internet users’ rights either. (And concerns have been raised on the competition front, too.)
Where its ‘Sandbox’ proposal is concerned, Google may well be factoring in the possibility of legislation that outlaws — or, at least, more tightly controls — microtargeting. And it’s therefore trying to race ahead with developing alternative adtech that would have much the same targeting potency (maintaining its market power) but, by swapping out individuals for cohorts of web users, could potentially sidestep a ban on ‘microtargeting’ on a technicality.
Legislators addressing this issue will therefore need to be smart in how they draft any laws intended to tackle the damage caused by surveillance-based advertising.
Certainly they will if they want to prevent the same old small- and large-scale manipulation abuses from being perpetuated.
The NCC’s report points to what it dubs “good alternatives”: digital advertising models that don’t depend on the systematic surveillance of consumers to function, and which — it also argues — provide advertisers and publishers with “more oversight and control over where ads are displayed and which ads are being shown”.
The problem of ad fraud is certainly massively underreported. But, well, it’s instructive to recall how often Facebook has had to ‘fess up to problems with self-reported ad metrics…
“It is possible to sell advertising space without basing it on intimate details about consumers. Solutions already exist to show ads in relevant contexts, or where consumers self-report what ads they want to see,” the NCC’s director of digital policy, Finn Myrstad, noted in a statement.
“A ban on surveillance-based advertising would also pave the way for a more transparent advertising marketplace, diminishing the need to share large parts of ad revenue with third parties such as data brokers. A level playing field would contribute to giving advertisers and content providers more control, and keep a larger share of the revenue.”
GM has launched a series of new subsidiaries in the past year tackling electrification, connectivity and even insurance — all part of the automaker’s aim to find value (and profits) beyond its traditional business of making, selling and financing vehicles. These startups, including numerous ones that will never make the cut, get their start under Vice President of Innovation Pam Fletcher’s watch.
Fletcher, who joined TechCrunch on June 9 at the virtual TC Sessions: Mobility 2021 event, runs a group of 170 people developing and launching startups with a total addressable market of about $1.3 trillion.
Today, about 19 companies are making their way through the incubator in hopes of joining recent GM startups like OnStar Guardian, OnStar Insurance, GM Defense and BrightDrop, the commercial electric vehicle delivery business that launched in January. Not everything will make it, Fletcher told the audience, noting “we add new things all the time.”
Launching any startup presents challenges. But launching multiple startups within a 113-year-old automaker that employs 155,000 people globally is another, more complex matter. The bar, which determines whether these startups are ever publicly launched, is specific and high. A GM startup has to be a new idea that can attract new customers and grow the total addressable market for the automaker, using existing assets and IP.
The 2010 Chevrolet Volt is a noteworthy moment on the GM timeline. The vehicle marked the company’s first commercial push into electrification since the 1990s EV1 program. Fletcher, who was the chief engineer of the Chevy Volt propulsion system from 2008 to 2011, noted that the Volt was the beginning of a change within the automaker that eventually led to other commercial products including the all-electric Chevy Bolt, the hands-free driver assistance system Super Cruise and its current work on autonomous vehicle development with its subsidiary Cruise.
I don’t know that the Volt was a root exactly of what we’re seeing today. But I think it was definitely the start of a groundswell of really looking at, how do we inject technology that customers are excited about and care about quickly? How do we engage them deeply in the process? … Which we’ve always done … just, I think there was a climate there where the appetite was so strong with a certain group of customers for the technology that it allowed us to get really a front row seat with them, which was game changing for those of us on the frontlines. And obviously, there have been many programs that have had that in their own ways, but you really see that accelerating now with the advent of everything we’re doing in electrification and autonomous and a portfolio that is just emerging even to the notion of applying some of these great technologies to our new full size, truck and SUV programs. So it’s really broad, based across the company, which is exciting. (Timestamp: 4:56)
Fletcher explained how working to commercialize new technology changed how the company interacted with customers.
With new technologies, one, you get to a new customer base sometimes. So, really understanding what that customer is looking like and putting them at the center of everything. Also, different technologies have different development processes and timelines and pipelines for activity. So, it really allowed us to start to think about how to approach each step of our product development and customer engagement differently. And the Volt was an interesting time too, because that was the advent of new social media was really starting to become much more popular. And so we were very connected with those customers and a great customer base that gave us tremendous feedback very directly, you know, through at the time, what was a new channel. (Timestamp: 3:50)
The European Data Protection Board (EDPB) published its final recommendations yesterday, setting out guidance on making transfers of personal data to third countries that comply with EU data protection rules in light of last summer’s landmark CJEU ruling (aka Schrems II).
The long and short of these recommendations — which are fairly long, running to 48 pages — is that some data transfers to third countries will simply not be possible to (legally) carry out, despite the continued existence of legal mechanisms that can, in theory, be used to make such transfers (like Standard Contractual Clauses, a transfer tool that was recently updated by the Commission).
However it’s up to the data controller to assess the viability of each transfer, on a case by case basis, to determine whether data can legally flow in that particular case. (Which may mean, for example, a business making complex assessments about foreign government surveillance regimes and how they impinge upon its specific operations.)
Companies that routinely take EU users’ data outside the bloc for processing in third countries (like the US), which do not have data adequacy arrangements with the EU, face substantial cost and challenge in attaining compliance — in a best case scenario.
Those that can’t apply viable ‘special measures’ to ensure transferred data is safe are duty bound to suspend data flows — with the risk, should they fail to do that, of being ordered to by a data protection authority (which could also apply additional sanctions).
One alternative option could be for such a firm to store and process EU users’ data locally — within the EU. But clearly that won’t be viable for every company.
Law firms are likely to be very happy with this outcome since there will be increased demand for legal advice as companies grapple with how to structure their data flows and adapt to a post-Schrems II world.
In some EU jurisdictions (such as Germany) data protection agencies are now actively carrying out compliance checks — so orders to suspend transfers are bound to follow.
The European Data Protection Supervisor, meanwhile, is busy scrutinizing EU institutions’ own use of US cloud services to see whether high-level arrangements with providers like AWS and Microsoft pass muster or not.
Last summer the CJEU struck down the EU-US Privacy Shield — only a few years after the flagship adequacy arrangement was inked. The same core legal issues did for its predecessor, ‘Safe Harbor‘, though that had stood for some fifteen years. And since the demise of Privacy Shield the Commission has repeatedly warned there will be no quick fix replacement this time; nothing short of major reform of US surveillance law is likely to be required.
US and EU lawmakers remain in negotiations over a replacement EU-US data flows deal, but a viable outcome — one that can stand up to legal challenge as the prior two agreements could not — may well require years of work, not months.
And that means EU-US data flows are facing legal uncertainty for the foreseeable future.
The UK, meanwhile, has just squeezed a data adequacy agreement out of the Commission — despite some loudly enunciated post-Brexit plans for regulatory divergence in the area of data protection.
If the UK follows through in ripping up key tenets of its inherited EU legal framework there’s a high chance it will also lose adequacy status in the coming years — meaning it too could face crippling barriers to EU data flows. (But for now it seems to have dodged that bullet.)
Data flows to other third countries that also lack an EU adequacy agreement — such as China and India — face the same ongoing legal uncertainty.
The backstory to the EU international data flows issues originates with a complaint — in the wake of NSA whistleblower Edward Snowden’s revelations about government mass surveillance programs, so more than seven years ago — made by the eponymous Max Schrems over what he argued were unsafe EU-US data flows.
His complaint specifically targeted Facebook’s business, calling on the Irish Data Protection Commission (DPC) to use its enforcement powers to suspend Facebook’s EU-US data flows.
A regulatory dance of indecision followed which finally saw legal questions referred to Europe’s top court and — ultimately — the demise of the EU-US Privacy Shield. The CJEU ruling also put it beyond legal doubt that Member States’ DPAs must step in and act when they suspect data is flowing to a location where the information is at risk.
Following the Schrems II ruling, the DPC (finally) sent Facebook a preliminary order to suspend its EU-US data flows last fall. Facebook immediately challenged the order in the Irish courts — seeking to block the move. But that challenge failed. And Facebook’s EU-US data flows are now very much operating on borrowed time.
As one of the platforms subject to Section 702 of the US’ FISA law, Facebook’s options for applying ‘special measures’ to supplement its EU data transfers look, well, limited to say the least.
It can’t — for example — encrypt the data in a way that ensures it has no access to it (zero access encryption) since that’s not how Facebook’s advertising empire functions. And Schrems has previously suggested Facebook will have to federate its service — and store EU users’ information inside the EU — to fix its data transfer problem.
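The ‘zero access’ idea can be sketched in a toy example: the client encrypts with a key the server never receives, so whatever jurisdiction the ciphertext ends up stored in, the processor (and any authority compelling it) learns nothing about the content. This is purely an illustration — it uses a one-time pad for simplicity, where a real deployment would use an audited scheme such as AES-GCM with client-held keys:

```python
import secrets

# Toy illustration of "zero-access" encryption: the key is generated and
# kept client-side; the server only ever sees ciphertext.
# (One-time pad for simplicity; not a production scheme.)

def client_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))  # key never leaves the client
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def server_store(ciphertext: bytes) -> bytes:
    # The server (wherever it is located) can hold this blob, but without
    # the client-side key it cannot recover the underlying data.
    return ciphertext

def client_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, blob = client_encrypt(b"EU user data")
stored = server_store(blob)
assert client_decrypt(key, stored) == b"EU user data"
```

The catch, as the article notes, is that an ad-funded platform like Facebook needs to read the data to target advertising — which is exactly why this route is closed to it.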
Safe to say, the costs and complexity of compliance for certain businesses like Facebook look massive.
But there will be compliance costs and complexity for thousands of businesses in the wake of the CJEU ruling.
Commenting on the EDPB’s adoption of final recommendations, chair Andrea Jelinek said: “The impact of Schrems II cannot be underestimated: Already international data flows are subject to much closer scrutiny from the supervisory authorities who are conducting investigations at their respective levels. The goal of the EDPB Recommendations is to guide exporters in lawfully transferring personal data to third countries while guaranteeing that the data transferred is afforded a level of protection essentially equivalent to that guaranteed within the European Economic Area.
“By clarifying some doubts expressed by stakeholders, and in particular the importance of examining the practices of public authorities in third countries, we want to make it easier for data exporters to know how to assess their transfers to third countries and to identify and implement effective supplementary measures where they are needed. The EDPB will continue considering the effects of the Schrems II ruling and the comments received from stakeholders in its future guidance.”
The EDPB put out earlier guidance on Schrems II compliance last year.
It said the main modifications between that earlier advice and its final recommendations include: “The emphasis on the importance of examining the practices of third country public authorities in the exporters’ legal assessment to determine whether the legislation and/or practices of the third country impinge — in practice — on the effectiveness of the Art. 46 GDPR transfer tool; the possibility that the exporter considers in its assessment the practical experience of the importer, among other elements and with certain caveats; and the clarification that the legislation of the third country of destination allowing its authorities to access the data transferred, even without the importer’s intervention, may also impinge on the effectiveness of the transfer tool”.
Commenting on the EDPB’s recommendations in a statement, law firm Linklaters dubbed the guidance “strict” — warning over the looming impact on businesses.
“There is little evidence of a pragmatic approach to these transfers and the EDPB seems entirely content if the conclusion is that the data must remain in the EU,” said Peter Church, a Counsel at the global law firm. “For example, before transferring personal data to third country (without adequate data protection laws) businesses must consider not only its law but how its law enforcement and national security agencies operate in practice. Given these activities are typically secretive and opaque, this type of analysis is likely to cost tens of thousands of euros and take time. It appears this analysis is needed even for relatively innocuous transfers.”
“It is not clear how SMEs can be expected to comply with these requirements,” he added. “Given we now operate in a globalised society the EDPB, like King Canute, should consider the practical limitations on its power. The guidance will not turn back the tides of data washing back and forth across the world, but many businesses will really struggle to comply with these new requirements.”
The funding round, said to be the largest Series A investment in cybersecurity history and one of the highest valuations for a bootstrapped company, was led by Insight Partners and General Atlantic, with additional investment from Cyberstarts, Geodesic, SYN Ventures, Vintage, and Artisanal Ventures.
Transmit Security said it has a pre-money valuation of $2.2 billion, and will use the new funds to expand its reach and invest in key global areas to grow the organization.
Ultimately, however, the funding round will help the company to accelerate its mission to help the world go passwordless. Organizations lose millions of dollars every year due to “inherently unsafe” password-based authentication, according to the startup; not only do weak passwords account for more than 80% of all data breaches, but the average help desk labor cost to reset a single password stands at more than $70.
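The cost claim is easy to sanity-check with back-of-envelope arithmetic. The ~$70 per-reset labor cost is the figure cited by the startup; the workforce size and reset rate below are illustrative assumptions, not numbers from Transmit Security:

```python
# Back-of-envelope estimate of annual password-reset labor cost.
COST_PER_RESET = 70          # avg help desk cost per reset, per the startup
employees = 10_000           # hypothetical organization size
resets_per_employee = 2      # hypothetical resets per person per year

annual_cost = employees * resets_per_employee * COST_PER_RESET
print(f"Estimated annual reset cost: ${annual_cost:,}")  # $1,400,000
```

Even with conservative assumptions, the figure lands in the millions for a large enterprise — which is the scale of loss the company is pointing to.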
Transmit says its biometric-based authenticator is the first natively passwordless identity and risk management solution, and it has already been adopted by a number of big-name brands including Lowe’s, Santander, and UBS. The solution, which currently handles more than 9,000 authentication requests per second, can reduce account resets by 96%, the company says, and cuts customer authentication time from one minute to two seconds.
“By eliminating passwords, businesses can immediately reduce churn and cart abandonment and provide superior security for personal data,” said Transmit Security CEO Mickey Boodaei, who co-founded the company in 2014. “Our customers, whether they are in the retail, banking, financial, telecommunications, or automotive sectors, understand that providing an optimized identity experience is a multimillion-dollar challenge. With this latest round of funding from premier partners, we can significantly expand our reach to help rid the world of passwords.”
Transmit Security isn’t the only company that’s on a mission to kill off the password. Microsoft has announced plans to make Windows 10 password-free, and Apple recently previewed Passkeys in iCloud Keychain, a method of passwordless authentication powered by WebAuthn, and Face ID and Touch ID.
The need for markets-focused competition watchdogs and consumer-centric privacy regulators to think outside their respective ‘legal silos’ and find creative ways to work together to tackle the challenge of big tech market power was the impetus for a couple of fascinating panel discussions organized by the Centre for Economic Policy Research (CEPR), which were livestreamed yesterday but are available to view on-demand here.
The conversations brought together key regulatory leaders from Europe and the US — giving a glimpse of what the future shape of digital markets oversight might look like at a time when fresh blood has just been injected to chair the FTC so regulatory change is very much in the air (at least around tech antitrust).
CEPR’s discussion premise is that integration, not merely intersection, of competition and privacy/data protection law is needed to get a proper handle on platform giants that have, in many cases, leveraged their market power to force consumers to accept an abusive ‘fee’ of ongoing surveillance.
That fee both strips consumers of their privacy and helps tech giants perpetuate market dominance by locking out interesting new competition (which can’t get the same access to people’s data so operates at a baked in disadvantage).
A running theme in Europe for a number of years now, since a 2018 flagship update to the bloc’s data protection framework (GDPR), has been the ongoing under-enforcement around the EU’s ‘on-paper’ privacy rights — which, in certain markets, means regional competition authorities are now actively grappling with exactly how and where the issue of ‘data abuse’ fits into their antitrust legal frameworks.
The regulators assembled for CEPR’s discussion included, from the UK, the Competition and Markets Authority’s CEO Andrea Coscelli and the information commissioner, Elizabeth Denham; from Germany, the FCO’s Andreas Mundt; from France, Henri Piffaut, VP of the French competition authority; and from the EU, the European Data Protection Supervisor himself, Wojciech Wiewiórowski, who advises the EU’s executive body on data protection legislation (and is the watchdog for EU institutions’ own data use).
The UK’s CMA now sits outside the EU, of course — giving the national authority a higher-profile role in global mergers & acquisitions decisions (vs pre-Brexit), and the chance to help shape key standards in the digital sphere via the investigations and procedures it chooses to pursue (and it has been moving very quickly on that front).
The CMA has a number of major antitrust probes open into tech giants — including looking into complaints against Apple’s App Store and others targeting Google’s plan to deprecate support for third-party tracking cookies (aka the so-called ‘Privacy Sandbox’) — the latter being an investigation where the CMA has actively engaged the UK’s privacy watchdog (the ICO) to work with it.
Only last week the competition watchdog said it was minded to accept a set of legally binding commitments that Google has offered which could see a quasi ‘co-design’ process taking place, between the CMA, the ICO and Google, over the shape of the key technology infrastructure that ultimately replaces tracking cookies. So a pretty major development.
Germany’s FCO has also been very active against big tech this year — making full use of an update to the national competition law which gives it the power to take proactive interventions around large digital platforms with major competitive significance — with open procedures now against Amazon, Facebook and Google.
The Bundeskartellamt was already a pioneer in pushing to loop EU data protection rules into competition enforcement in digital markets in a strategic case against Facebook, as we’ve reported before. That closely watched (and long running) case — which targets Facebook’s ‘superprofiling’ of users, based on its ability to combine user data from multiple sources to flesh out a single high dimension per-user profile — is now headed to Europe’s top court (so likely has more years to run).
But during yesterday’s discussion Mundt confirmed that the FCO’s experience litigating that case helped shape key amendments to the national law that’s given him beefier powers to tackle big tech. (And he suggested it’ll be a lot easier to regulate tech giants going forward, using these new national powers.)
“Once we have designated a company to be of ‘paramount significance’ we can prohibit certain conduct much more easily than we could in the past,” he said. “We can prohibit, for example, that a company impedes other undertakings by data processing that is relevant for competition. We can prohibit that a use of service depends on the agreement to data collection with no choice — this is the Facebook case, indeed… When this law was negotiated in parliament, parliament very much referred to the Facebook case and in a certain sense this entwinement of competition law and data protection law is written in a theory of harm in the German competition law.
“This makes a lot of sense. If we talk about dominance and if we assess that this dominance has come into place because of data collection and data possession and data processing you need a parameter in how far a company is allowed to gather the data to process it.”
“The past is also the future because this Facebook case… has always been a big case. And now it is up to the European Court of Justice to say something on that,” he added. “If everything works well we might get a very clear ruling saying… as far as the ECN [European Competition Network] is concerned how far we can integrate GDPR in assessing competition matters.
“So Facebook has always been a big case — it might get even bigger in a certain sense.”
France’s competition authority and its national privacy regulator (the CNIL), meanwhile, have also been joint working in recent years.
Including over a competition complaint against Apple’s pro-user privacy App Tracking Transparency feature (which last month the antitrust watchdog declined to block). So there’s evidence there too of respective oversight bodies seeking to bridge legal silos in order to crack the code of how to effectively regulate tech giants. Their market power, panellists agreed, is predicated on earlier failures of competition law enforcement that allowed tech platforms to buy up rivals and sew up access to user data, entrenching advantage at the expense of user privacy and locking out the possibility of future competitive challenge.
The contention is that monopoly power predicated upon data access also locks consumers into an abusive relationship with platform giants which can then, in the case of ad giants like Google and Facebook, extract huge costs (paid not in monetary fees but in user privacy) for continued access to services that have also become digital staples — amping up the ‘winner takes all’ characteristic seen in digital markets (which is obviously bad for competition too).
Yet, traditionally at least, Europe’s competition authorities and data protection regulators have been focused on separate workstreams.
The consensus from the CEPR panels was very much that that is both changing and must change if civil society is to get a grip on digital markets — and wrest control back from tech giants to ensure that consumers and competitors aren’t both left trampled into the dust by data-mining giants.
Denham said her motivation to dial up collaboration with other digital regulators was the UK government entertaining the idea of creating a one-stop-shop ‘Internet’ super regulator. “What scared the hell out of me was the policymakers, the legislators, floating the idea of one regulator for the Internet. I mean what does that mean?” she said. “So I think what the regulators did is we got to work, we got busy, we became creative, got out of our silos to try to tackle these companies — the likes of which we have never seen before.
“And I really think what we have done in the UK — and I’m excited if others think it will work in their jurisdictions — but I think that what really pushed us is that we needed to show policymakers and the public that we had our act together. I think consumers and citizens don’t really care if the solution they’re looking for comes from the CMA, the ICO, Ofcom… they just want somebody to have their back when it comes to protection of privacy and protection of markets.
“We’re trying to use our regulatory levers in the most creative way possible to make the digital markets work and protect fundamental rights.”
During the earlier panel, the CMA’s Simeon Thornton, a director at the authority, made some interesting remarks vis-a-vis its (ongoing) Google ‘Privacy Sandbox’ investigation — and the joint working it’s doing with the ICO on that case — asserting that “data protection and respecting users’ rights to privacy are very much at the heart of the commitments upon which we are currently consulting”.
“If we accept the commitments Google will be required to develop the proposals according to a number of criteria including impacts on privacy outcomes and compliance with data protection principles, and impacts on user experience and user control over the use of their personal data — alongside the overriding objective of the commitments which is to address our competition concerns,” he went on, adding: “We have worked closely with the ICO in seeking to understand the proposals and if we do accept the commitments then we will continue to work closely with the ICO in influencing the future development of those proposals.”
“If we accept the commitments that’s not the end of the CMA’s work — on the contrary that’s when, in many respects, the real work begins. Under the commitments the CMA will be closely involved in the development, implementation and monitoring of the proposals, including through the design of trials for example. It’s a substantial investment from the CMA and we will be dedicating the right people — including data scientists, for example, to the job,” he added. “The commitments ensure that Google addresses any concerns that the CMA has. And if outstanding concerns cannot be resolved with Google they explicitly provide for the CMA to reopen the case and — if necessary — impose any interim measures necessary to avoid harm to competition.
“So there’s no doubt this is a big undertaking. And it’s going to be challenging for the CMA, I’m sure of that. But personally I think this is the sort of approach that is required if we are really to tackle the sort of concerns we’re seeing in digital markets today.”
Thornton also said: “I think as regulators we do need to step up. We need to get involved before the harm materializes — rather than waiting after the event to stop it from materializing, rather than waiting until that harm is irrevocable… I think it’s a big move and it’s a challenging one but personally I think it’s a sign of the future direction of travel in a number of these sorts of cases.”
Also speaking during the regulatory panel session was FTC commissioner Rebecca Slaughter — a dissenter on the $5BN fine it hit Facebook with back in 2019 for violating an earlier consent order (as she argued the settlement provided no deterrent to address underlying privacy abuse, leaving Facebook free to continue exploiting users’ data) — as well as Chris D’Angelo, the chief deputy AG of the New York Attorney General, which is leading a major states antitrust case against Facebook.
Slaughter pointed out that the FTC already combines a consumer focus with attention on competition but said that historically there has been separation of divisions and investigations — and she agreed on the need for more joined-up working.
She also advocated for US regulators to get out of a pattern of ineffective enforcement in digital markets on issues like privacy and competition where companies have, historically, been given — at best — what amounts to wrist slaps that don’t address root causes of market abuse, perpetuating both consumer abuse and market failure. And be prepared to litigate more.
As regulators toughen up their stipulations they will need to be prepared for tech giants to push back — and therefore be prepared to sue instead of accepting a weak settlement.
“That is what is most galling to me that even where we take action, in our best faith good public servants working hard to take action, we keep coming back to the same questions, again and again,” she said. “Which means that the actions we are taking isn’t working. We need different action to keep us from having the same conversation again and again.”
Slaughter also argued that it’s important for regulators not to pile all the burden of avoiding data abuses on consumers themselves.
“I want to sound a note of caution around approaches that are centered around user control,” she said. “I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions… I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions.
“So I really worry about a framework that is built at all around the idea of control as the central tenet or the way we solve the problem. I’ll keep coming back to the notion of what instead we need to be focusing on is where is the burden on the firms to limit their collection in the first instance, prohibit their sharing, prohibit abusive use of data and I think that that’s where we need to be focused from a policy perspective.
“I think there will be ongoing debates about privacy legislation in the US and while I’m actually a very strong advocate for a better federal framework with more tools that facilitate aggressive enforcement but I think if we had done it ten years ago we probably would have ended up with a notice and consent privacy law and I think that that would have not been a great outcome for consumers at the end of the day. So I think the debate and discussion has evolved in an important way. I also think we don’t have to wait for Congress to act.”
As regards more radical solutions to the problem of market-denting tech giants — such as breaking up sprawling and (self-servingly) interlocking services empires — the message from Europe’s most ‘digitally switched on’ regulators seemed to be don’t look to us for that; we are going to have to stay in our lanes.
So tl;dr — if antitrust and privacy regulators’ joint working just sums to more intelligent fiddling round the edges of digital market failure, and it’s break-ups of US tech giants that are really needed to reboot digital markets, then it’s going to be up to US agencies to wield the hammers. (Or, as Coscelli elegantly phrased it: “It’s probably more realistic for the US agencies to be in the lead in terms of structural separation if and when it’s appropriate — rather than an agency like ours [working from inside a mid-sized economy such as the UK’s].”)
The lack of any representative from the European Commission on the panel was an interesting omission in that regard — perhaps hinting at ongoing ‘structural separation’ between DG Comp and DG Justice where digital policymaking streams are concerned.
The current competition chief, Margrethe Vestager — who also heads up digital strategy for the bloc, as an EVP — has repeatedly expressed reluctance to impose radical ‘break up’ remedies on tech giants. She also recently preferred to wave through another Google digital merger (its acquisition of fitness wearable Fitbit) — agreeing to accept a number of ‘concessions’ and ignoring major mobilization by civil society (and indeed EU data protection agencies) urging her to block it.
Yet in an earlier CEPR discussion session, another panellist — Yale University’s Dina Srinivasan — pointed to the challenges of trying to regulate the behavior of companies when there are clear conflicts of interest, unless and until you impose structural separation as she said has been necessary in other markets (like financial services).
“In advertising we have an electronically traded market with exchanges and we have brokers on both sides. In a competitive market — when competition was working — you saw that those brokers were acting in the best interest of buyers and sellers. And as part of carrying out that function they were sort of protecting the data that belonged to buyers and sellers in that market, and not playing with the data in other ways — not trading on it, not doing conduct similar to insider trading or even front running,” she said, giving an example of how that changed as Google gained market power.
“So Google acquired DoubleClick, made promises to continue operating in that manner, the promises were not binding and on the record — the enforcement agencies or the agencies that cleared the merger didn’t make Google promise that they would abide by that moving forward and so as Google gained market power in that market there’s no regulatory requirement to continue to act in the best interests of your clients, so now it becomes a market power issue, and after they gain enough market power they can flip data ownership and say ‘okay, you know what before you owned this data and we weren’t allowed to do anything with it but now we’re going to use that data to for example sell our own advertising on exchanges’.
“But what we know from other markets — and from financial markets — is when you flip data ownership and you engage in conduct like that that allows the firm to now build market power in yet another market.”
The CMA’s Coscelli picked up on Srinivasan’s point — saying it was a “powerful” one, and that the challenges of policing “very complicated” situations involving conflicts of interests is something that regulators with merger control powers should be bearing in mind as they consider whether or not to green light tech acquisitions.
(Just one example of a merger in the digital space that the CMA is still scrutinizing is Facebook’s acquisition of animated GIF platform Giphy. And it’s interesting to speculate whether, had Brexit happened a little faster, the CMA might have stepped in to block Google’s Fitbit merger where the EU wouldn’t.)
Coscelli also flagged the issue of regulatory under-enforcement in digital markets as a key one, saying: “One of the reasons we are today where we are is partially historic under-enforcement by competition authorities on merger control — and that’s a theme that is extremely interesting and relevant to us because after the exit from the EU we now have a bigger role in merger control on global mergers. So it’s very important to us that we take the right decisions going forward.”
“Quite often we intervene in areas where there is under-enforcement by regulators in specific areas… If you think about it, when you design systems where you have vertical regulators in specific sectors and horizontal regulators like us or the ICO, we are more successful if the vertical regulators do their job and I’m sure they are more successful if we do our job properly.
“I think we systematically underestimate… the ability of companies to work through whatever behavior or commitments or arrangement are offered to us, so I think these are very important points,” he added, signalling that a higher degree of attention is likely to be applied to tech mergers in Europe as a result of the CMA stepping out from the EU’s competition regulation umbrella.
Also speaking during the same panel, the EDPS warned that across Europe more broadly — i.e. beyond the small but engaged gathering of regulators brought together by CEPR — data protection and competition regulators are far from where they need to be on joint working, implying that the challenge of effectively regulating big tech across the EU is still a pretty Sisyphean one.
It’s true that the Commission is not sitting on its hands in the face of tech giant market power.
At the end of last year it proposed a regime of ex ante regulations for so-called ‘gatekeeper’ platforms, under the Digital Markets Act. But the problem of how to effectively enforce pan-EU laws — when the various agencies involved in oversight are typically decentralized across Member States — is one key complication for the bloc. (The Commission’s answer with the DMA was to suggest putting itself in charge of overseeing gatekeepers but it remains to be seen what enforcement structure EU institutions will agree on.)
Clearly, the need for careful and coordinated joint working across multiple agencies with different legal competencies — if, indeed, that’s really what’s needed to properly address captured digital markets vs structural separation of Google’s search and adtech, for example, and Facebook’s various social products — steps up the EU’s regulatory challenge in digital markets.
“We can say that no effective competition nor protection of the rights in the digital economy can be ensured when the different regulators do not talk to each other and understand each other,” Wiewiórowski warned. “While we are still thinking about the cooperation it looks a little bit like everybody is afraid they will have to trade a little bit of its own possibility to assess.”
“If you think about the classical regulators isn’t it true that at some point we are reaching this border where we know how to work, we know how to behave, we need a little bit of help and a little bit of understanding of the other regulator’s work… What is interesting for me is there is — at the same time — the discussion about splitting of the task of the American regulators joining the ones on the European side. But even the statements of some of the commissioners in the European Union saying about the bigger role the Commission will play in the data protection and solving the enforcement problems of the GDPR show there is no clear understanding what are the differences between these fields.”
One thing is clear: Big tech’s dominance of digital markets won’t be unpicked overnight. But, on both sides of the Atlantic, there are now a bunch of theories on how to do it — and growing appetite to wade in.
The UK’s chief data protection regulator has warned over reckless and inappropriate use of live facial recognition (LFR) in public places.
Publishing an opinion today on the use of this biometric surveillance in public — to set out what she dubs the “rules of engagement” — the information commissioner, Elizabeth Denham, also noted that a number of investigations already undertaken by her office into planned applications of the tech have found problems in all cases.
“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.
“Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.
“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”
“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop,” Denham added.
“In future, there’s the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘Big Data’ systems — LFR is supercharged CCTV.”
The use of biometric technologies to identify individuals remotely sparks major human rights concerns, including around privacy and the risk of discrimination.
Across Europe there are campaigns — such as Reclaim your Face — calling for a ban on biometric mass surveillance.
In another targeted action, back in May, Privacy International and others filed legal challenges against the controversial US facial recognition company Clearview AI, seeking to stop it from operating in Europe altogether. (Some regional police forces have been tapping in — including in Sweden, where the force was fined by the national DPA earlier this year for unlawful use of the tech.)
But while there’s major public opposition to biometric surveillance in Europe, the region’s lawmakers have so far — at best — been fiddling around the edges of the controversial issue.
A pan-EU regulation the European Commission presented in April, which proposes a risk-based framework for applications of artificial intelligence, included only a partial prohibition on law enforcement’s use of biometric surveillance in public places — with wide-ranging exemptions that have drawn plenty of criticism.
There have also been calls for a total ban on the use of technologies like live facial recognition in public from MEPs across the political spectrum. The EU’s chief data protection supervisor has also urged lawmakers to at least temporarily ban the use of biometric surveillance in public.
The EU’s planned AI Regulation won’t apply in the UK, in any case, as the country is now outside the bloc. And it remains to be seen whether the UK government will seek to weaken the national data protection regime.
A recent report it commissioned to examine how the UK could revise its regulatory regime, post-Brexit, has — for example — suggested replacing the UK GDPR with a new “UK framework” — proposing changes to “free up data for innovation and in the public interest”, as it puts it, and advocating for revisions for AI and “growth sectors”. So whether the UK’s data protection regime will be put to the torch in a post-Brexit bonfire of ‘red tape’ is a key concern for rights watchers.
(The Taskforce on Innovation, Growth and Regulatory Reform report advocates, for example, for the complete removal of Article 22 of the GDPR — which gives people rights not to be subject to decisions based solely on automated processing — suggesting it be replaced with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that envisaged as coming from the Information Commissioner’s Office (ICO). But it should also be noted that the government is in the process of hiring Denham’s successor; and the digital minister has said he wants her replacement to take “a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”. So, er, bye-bye fairness, accountability and transparency then?)
For now, those seeking to implement LFR in the UK must comply with provisions in the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (aka, its implementation of the EU GDPR, which was transposed into national law before Brexit), per the ICO opinion. That includes the data protection principles set out in UK GDPR Article 5: lawfulness, fairness, transparency, purpose limitation, data minimisation, storage limitation, security and accountability.
Controllers must also enable individuals to exercise their rights, the opinion said.
“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.
“Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”
The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the direction of UK travel on data protection and privacy.
If, for example, the government intends to recruit a new, ‘more pliant’ information commissioner — who will happily rip up the rulebook on data protection and AI, including in areas like biometric surveillance — it will at least be rather awkward for them to do so with an opinion from the prior commissioner on the public record that details the dangers of reckless and inappropriate use of LFR.
Certainly, the next information commissioner won’t be able to say they weren’t given clear warning that biometric data is particularly sensitive — and can be used to estimate or infer other characteristics, such as a person’s age, sex, gender or ethnicity.
Or that ‘Great British’ courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can cause this super sensitive data to be harvested without the person in question even being aware it’s happening.
Denham’s opinion also hammers hard on the point about the need for public trust and confidence for any technology to succeed, warning that: “The public must have confidence that its use is lawful, fair, transparent and meets the other standards set out in data protection legislation.”
The ICO has previously published an opinion on the use of LFR by police forces — which Denham said also sets “a high threshold for its use”. (And a few UK police forces — including the Met in London — have been among the early adopters of facial recognition technology, which has in turn led some into legal hot water on issues like bias.)
Disappointingly, though, for human rights advocates, the ICO opinion shies away from recommending a total ban on the use of biometric surveillance in public by private companies or public organisations — with the commissioner arguing that while there are risks with use of the technology, there could also be instances where it has high utility (such as in the search for a missing child).
“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, saying instead that in her view “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.
Denham added that (current) UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather”.
“With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised,” she reiterated, noting how a lack of trust in the US has led to some cities banning the use of LFR in certain contexts and led to some companies pausing services until rules are clearer.
“Without trust, the benefits the technology may offer are lost,” she also warned.
There is one red line that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of specious ‘innovation’. Because if it tries to, er, ‘liberate’ national data protection rules from core EU principles (of lawfulness, fairness, proportionality, transparency, accountability and so on) — it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up an EU-UK data adequacy arrangement (on which the ink is still drying).
The UK’s data adequacy agreement with the EU is dependent on the UK maintaining essentially equivalent protections for people’s data. Without this coveted data adequacy status, UK companies would immediately face far greater legal hurdles to processing the data of EU citizens (as the US now does, in the wake of the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be suspended altogether…
Obviously such a scenario would be terrible for UK business and ‘innovation’ — even before you consider the wider issue of public trust in technologies and whether the Great British public itself wants to have its privacy rights torched.
Given all this, you really have to wonder whether anyone inside the UK government has thought this ‘regulatory reform’ stuff through. For now, the ICO is at least still capable of thinking for them.
The AI-powered defense company founded by tech iconoclast Palmer Luckey has landed a $450 million round of investment that values the startup at $4.6 billion just four years in.
In April, reports suggested that the company was on the hunt for fresh investment and headed for a valuation between $4 billion and $5 billion, up from $1.9 billion in July 2020.
The new Series D round was led by angel investor and serial entrepreneur Elad Gil, a former Twitter VP and Googler with a track record of investments in companies with exponential growth. Andreessen Horowitz, Founders Fund, 8VC, General Catalyst, Lux Capital, Valor Equity Partners and D1 Capital Partners also participated in the round.
“Just as old incumbent institutions with little to no organizational renewal impacted our ability to respond to COVID, the defense industry has undergone significant consolidation over the last 30 years,” Gil wrote in a blog post on the investment. “There has not been a new defense technology company of any scale to directly challenge these incumbents in many decades…”
Anduril launched quietly in 2017 but grew quickly, picking up contracts with Customs and Border Protection and the Marine Corps during the Trump administration. Luckey, the young high-flying founder who sold Oculus to Facebook before being booted from the company, emerged as one of President Trump’s most prominent boosters in the generally Trump-averse tech industry.
The company makes defense hardware, including long-flying drones and surveillance towers that connect to a shared software platform it calls Lattice. The technology can be used to secure military bases, monitor borders and even knock enemy drones out of the sky, in the case of Anduril’s counter-UAS tech known as “Anvil.”
Anduril co-founder and CEO Brian Schimpf describes the company’s mission as one of “transformation,” pairing relatively affordable hardware with sensor fusion and machine learning technologies through a contract partner more nimble than established giants in the defense sector.
“This new round of funding reflects our confidence that the Department of Defense sees the same problems we do, and is serious about deploying emerging technologies at scale across land, sea, air and space domains,” Schimpf said.
The company set its sights on work with the Department of Defense from its earliest days and last year was one of 50 vendors tapped by the DoD to test tech for the Air Force’s own piece of the Joint All-Domain Command & Control (JADC2) project, which seeks to build a smart warfare platform to connect all service members, devices and vehicles that power the U.S. military.
The company’s work with U.S. Customs and Border Protection also matured from a pilot into a program of record last year. Anduril supplies the agency with connected surveillance towers capable of autonomously monitoring stretches of the U.S. border.
In April, Anduril acquired Area-I, a company known for small drones that can be launched from a larger aircraft. Area-I counted the U.S. Army, Air Force, Navy and NASA among its customers, relationships that likely sweetened the deal.
Claroty, an industrial cybersecurity company that helps customers protect and manage their Internet of Things (IoT) and operational technology (OT) assets, has raised $140 million in its latest, and potentially last, round of funding.
With the new round of Series D funding, co-led by Bessemer and 40 North, the company has now amassed a total of $235 million. Additional strategic investors include LG and I Squared Capital’s ISQ Global InfraTech Fund, with all previous investors — Team8, Rockwell Automation, Siemens and Schneider Electric — also participating.
Founded in 2015, the late-stage startup focuses on the industrial side of cybersecurity. Its customers include General Motors, Coca-Cola EuroPacific Partners and Pfizer, with Claroty helping the pharmaceutical firm to secure its COVID-19 vaccine supply chain. Claroty tells TechCrunch it has seen “significant” customer growth over the past 18 months, largely fueled by the pandemic, with 110% year-over-year net new logo growth and 100% customer retention.
It will use the newly raised funds to meet this rapidly accelerating global demand for The Claroty Platform, an end-to-end solution that provides visibility into industrial networks and combines secure remote access with continuous monitoring for threats and vulnerabilities.
“Our mission is to drive visibility, continuity and resiliency in the industrial economy by delivering the most comprehensive solutions that secure all connected devices within the four walls of an industrial site, including all operational technology (OT), Internet of Things (IoT) and industrial IoT (IIoT) assets,” said Claroty CEO Yaniv Vardi.
To meet this growing demand, the startup is planning to expand into new regions and verticals, including transportation and government-owned industries, as well as increase its global headcount. The company, which is based in New York, currently has around 240 employees.
Claroty hasn’t yet made any acquisitions, though CEO Yaniv Vardi tells TechCrunch that this could be part of the startup’s roadmap going forward.
“We’re waiting for the right opportunity at the right time, but it’s definitely part of the plan as part of the financial runway we just secured,” he said, adding that this latest funding round will likely be the company’s last before it explores a potential IPO.
“We are thinking that this is a pre-IPO funding round,” he said. “The end goal here is to be the market leader for industrial cybersecurity. One of the [paths] can be going public with an IPO, but there are different options too, such as a SPAC.”
The funding round comes amid a sharp increase in cyberattacks targeting organizations that underpin the world’s critical infrastructure and supply chains. According to a recent survey carried out by Claroty, the majority (53%) of U.S. industrial enterprises have seen an increase in cybersecurity threats since the start of 2020. The survey of 1,110 IT and OT security professionals also found that over half believed their organization is now more of a target for cybercriminals, with 67% having seen cybercriminals use new tactics amid the pandemic.
“The number of attacks, and impact of these attacks, is increasing significantly, especially in verticals like food, automotive, and critical infrastructure,” Vardi said. “That creates a lot of risk assessments public companies had to do, and these risks needed to be addressed with a security solution on the industrial side.”