Applied XL raises $1.5M to build ‘editorial algorithms’ that track real-time data

By Anthony Ha

AppliedXL, a startup creating machine learning tools with what it describes as a journalistic lens, is announcing that it has raised $1.5 million in seed funding.

Emerging from the Newlab Venture Studio last year, the company is led by CEO Francesco Marconi (previously R&D chief at The Wall Street Journal) and CTO Erin Riglin (former WSJ automation editor). Marconi told me that AppliedXL started out by working on a number of different data and machine learning projects as it looked for product-market fit — but it’s now ready to focus on its first major industry, life sciences, with a product launching broadly this summer.

He said that AppliedXL’s technology consists of “essentially a swarm of editorial algorithms developed by computational journalists.” These algorithms benefit from “the point of view and expertise of journalists, as well as taking into account things like transparency and bias and other issues that derive from straightforward machine learning development.”

Marconi compared the startup to Bloomberg and Dow Jones, suggesting that just as those companies were able to collect and standardize financial data, AppliedXL will do the same in a variety of other industries.

He suggested that it makes sense to start with life sciences because there’s both a clear need and high demand. Customers might include competitive intelligence teams at pharmaceutical companies and life sciences funds, which might normally try to track this data by searching large databases and receiving “data vomit” in response.

“Our solution for scaling [the ability to spot] newsworthy events is to design the algorithms with the same principles that a journalist would approach a story or an investigation,” Marconi said. “It might be related to the size of the study and the number of patients, it might be related to a drug that is receiving a lot of attention in terms of R&D investment. All of these criteria that a science journalist would bring to clinical trials, we’re encoding that into algorithms.”
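
To make that concrete, here is a toy sketch of what an “editorial algorithm” along those lines might look like — a scoring function that weighs the kinds of criteria Marconi names (study size, trial phase, R&D attention). Every field name, weight and threshold below is invented for illustration; AppliedXL’s actual system is not public.

# Toy sketch of an "editorial algorithm": every field, weight and threshold
# here is hypothetical, invented to illustrate the idea described above.
from dataclasses import dataclass

@dataclass
class TrialEvent:
    n_patients: int       # enrollment size
    phase: int            # clinical trial phase, 1-4
    rd_attention: float   # proxy for R&D investment/attention, 0..1

def newsworthiness(event: TrialEvent) -> float:
    """Combine weighted journalistic criteria into a single 0..1 score."""
    size_signal = min(event.n_patients / 1000, 1.0)   # bigger studies matter more
    phase_signal = event.phase / 4                    # late-phase trials matter more
    return 0.4 * size_signal + 0.3 * phase_signal + 0.3 * event.rd_attention

event = TrialEvent(n_patients=2400, phase=3, rd_attention=0.8)
if newsworthiness(event) > 0.6:                       # editorial threshold
    print("flag for subscribers:", round(newsworthiness(event), 2))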

Eventually, Marconi said, the startup could expand into other categories, building industry-specific “micro models.” Broadly speaking, he suggested that the company’s mission is “measuring the health of people, places and the planet.”

The seed funding was led by Tuesday Capital, with participation from Frog Ventures, Team Europe and Correlation Ventures.

“With industry leading real-time data pipelining, Applied XL is building the tools and platform for the next generation of data-based decision making that business leaders will rely on for decades,” said Tuesday Capital Partner Prashant Fonseka in a statement. “Data is the new oil and the team at Applied XL have figured out how to identify, extract and leverage one of the most valuable commodities in the world.”

 

Medchart raises $17M to help businesses more easily access patient-authorized health data

By Darrell Etherington

Electronic health records (EHR) have long held promise as a means of unlocking new superpowers for caregivers and patients in the medical industry, but while the records themselves have been around for a long time, actually accessing and using them hasn’t been as quick to become a reality. That’s where Medchart comes in, providing businesses with access to health information, complete with informed patient consent, for using said data at scale. The startup just raised $17 million across Series A and seed rounds, led by Crosslink Capital and Golden Ventures, and including funding from Stanford Law School, rapper Nas and others.

Medchart originally started out as more of a DTC play for healthcare data, providing access and portability to digital health information directly to patients. It sprang from the personal experience of co-founders James Bateman and Derrick Chow, who both faced personal challenges accessing and transferring health record information for relatives and loved ones during crucial healthcare crisis moments. Bateman, Medchart’s CEO, explained that their experience early on revealed that what was actually needed for the model to scale and work effectively was more of a B2B approach, with informed patient consent as the crucial component.

“We’re really focused on that patient consent and authorization component of letting you allow your data to be used and shared for various purposes,” Bateman said in an interview. “And then building that platform that lets you take that data and then put it to use for those businesses and services, that we’re classifying as ‘beyond care.’ Whether those are our core areas, which would be with your, your lawyer, or with an insurance provider, or clinical researcher — or beyond that, looking at a future vision of this really being a platform to power innovation, and all sorts of different apps and services that you could imagine that are typically outside that realm of direct care and treatment.”

Bateman explained that one of the main challenges in making patient health data actually work for these businesses that surround, but aren’t necessarily a core part of a care paradigm, is delivering data in a way that it’s actually useful to the receiving party. Traditionally, this has required a lot of painstaking manual work, like paralegals poring over paper documents to find information that isn’t necessarily consistently formatted or located.

“One of the things that we’ve been really focused on is understanding those business processes,” Bateman said. “That way, when we work with these businesses that are using this data — all permissioned by the patient — that we’re delivering what we call ‘the information,’ and not just the data. So what are the business decision points that you’re trying to make with this data?”

To accomplish this, Medchart makes use of AI and machine learning to create a deeper understanding of the data set, so it can intelligently answer the specific questions that data requesters have of the information. Therein lies its long-term value: once that understanding is established, Medchart can query the data much more easily to answer different questions depending on different business needs, without needing to re-parse the data every single time.
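
As a rough illustration of that parse-once, query-many idea, consider structured records extracted from charts a single time, with each business question expressed as a cheap query over them. The schema and questions below are hypothetical, not Medchart’s actual data model.

# Hedged sketch of "parse once, query many": the schema and the questions
# are invented for illustration; Medchart's real pipeline is not public.
from typing import Callable, Dict, List

# Imagine unstructured charts already parsed (once) into structured records.
records = [
    {"patient": "A", "condition": "asthma", "medication": "albuterol", "year": 2019},
    {"patient": "A", "condition": "fracture", "medication": "ibuprofen", "year": 2020},
]

def ask(records: List[Dict], question: Callable[[Dict], bool]) -> List[Dict]:
    """Answer a business question expressed as a predicate over the records."""
    return [r for r in records if question(r)]

# A law firm and an insurer ask different questions of the same parsed data:
injuries_2020 = ask(records, lambda r: r["condition"] == "fracture" and r["year"] == 2020)
preexisting = ask(records, lambda r: r["year"] < 2020)
print(len(injuries_2020), len(preexisting))

The point of the sketch: the expensive step (turning unstructured charts into structured records) happens once, while each new business need only adds another query.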

“Where we’re building these systems of intelligence on top of aggregate data, they are fully transferable to making decisions around policies for, for example, life insurance underwriting, or with pharmaceutical companies on real world evidence for their phase three, phase four clinical trials, and helping those teams to understand, you know, the overall indicators and the preexisting conditions and what the outcomes are of the drugs under development or whatever they’re measuring in their study,” Bateman said.

According to Ameet Shah, partner at Golden Ventures, co-lead investor for the Series A, this is the key ingredient that makes Medchart’s offering so attractive in terms of long-term potential.

“What you want is both depth and breadth, and you need predictability — you need to know that you’re actually getting the full data set back,” Shah said in an interview. “There’s all these point solutions, depending on the type of clinic you’re looking at, and the type of record you’re accessing, and that’s not helpful to the requester. Right now, you’re putting the burden on them, and when we looked at it, we were just like, ‘Oh, this is just a whole bunch of undifferentiated heavy lifting that the entire health tech ecosystem is trying to solve for.’ So if [Medchart] can just commoditize that and drive the cost down as low as possible, you can unlock all these other new use cases that never could have been done before.”

One recent development that positions Medchart to facilitate even more novel uses of patient data is the 21st Century Cures Act, which went into effect on April 5 and provides patients with immediate access, without charge, to all the health information in their electronic medical records. That sets up a huge potential opportunity in terms of portability, with informed consent, of patient data, and Bateman suggests it will greatly speed up innovation built upon the type of information access Medchart enables.

“I think there’s just going to be an absolute explosion in this space over the next two to three years,” Bateman said. “And at Medchart, we’ve already built all the infrastructure with connections to these large information systems. We’re already plugged in and providing the data and the value to the end users and the customers, and I think now you’re going to see this acceleration and adoption and growth in this area that we’re super well-positioned to be able to deliver on.”

Micromobility’s next big business is software, not vehicles

By Rebecca Bellan

The days of the shared, dockless micromobility model are numbered. That’s essentially the conclusion reached by Puneeth Meruva, an associate at Trucks Venture Capital who recently authored a detailed research brief on micromobility. Meruva is of the opinion that the standard for permit-capped, dockless scooter-sharing is not sustainable — the overhead is too costly, the returns too low — and that the industry could splinter.

“Because shared services have started a cultural transition, people are more open to buying their own e-bike or e-scooter,” Meruva told TechCrunch. “Fundamentally because of how much city regulation is involved in each of these trips, it could reasonably become a transportation utility that is very useful for the end consumer, but it just hasn’t proven itself to be a profitable line of business.”

As dockless e-scooters, e-bikes and e-mopeds expand their footprint while consolidating under a few umbrella corporations, companies might develop or acquire the technology to streamline and reduce operational costs enough to make the unit economics work. One overlooked but massive factor in the micromobility space is the software that powers the vehicles — who owns it, whether it’s made in-house and how well it integrates with the rest of the tech stack.

It’s the software that can determine if a company breaks out of the rideshare model into the sales or subscription model, or becomes subsidized by or absorbed into public transit, Meruva predicts.

Vehicle operating systems haven’t been top of mind for most companies in the short history of micromobility. The initial goal was making sure the hardware didn’t break down or burst into flames. When e-scooters came on the scene, they caused a ruckus. Riders without helmets zipped through city streets and many vehicles ended up in ditches or blocking sidewalk accessibility.

City officials were angry, to say the least, and branded dockless modes of transport a public nuisance. However, micromobility companies had to answer to their overeager investors — the ones who missed out on the Uber and Lyft craze and threw millions at electric mobility, hoping for swift returns. What was a Bird or a Lime to do? The only thing to do: Get back on that electric two-wheeler and start schmoozing cities.

How the fight for cities indirectly improved vehicle software

Shared, dockless operators are currently in a war of attrition, fighting for the last remaining city permits. But as the industry pursues a business-to-government (B2G) model that morphs into what companies think cities want, some are inadvertently producing vehicles that will evolve beyond functional toys into more viable transportation alternatives.

The second wave of micromobility was marked by newer companies like Superpedestrian and Voi Technology. They learned from past industry mistakes and developed business strategies that include building onboard operating systems in-house. The goal? More control over rider behavior and better compliance with city regulations.

Most companies playing to win have begun to vertically integrate their tech stacks by developing or acquiring new technology. Lime, Bird, Superpedestrian, Spin and Voi all design their own vehicles and write their own fleet management software or other operational tools. Lime writes its own firmware, which sits directly on top of the vehicle hardware primitives and helps control things like motor controllers, batteries and connected lights and locks.
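
As a loose sketch of that layering — fleet-management software calling down into firmware-level vehicle primitives — consider the toy classes below. The names and methods are hypothetical, not any operator’s real stack.

# Illustrative layering only: fleet software calling firmware-level vehicle
# primitives. Class and method names are hypothetical, not a real operator API.
class VehicleFirmware:
    """Thin wrapper over hardware primitives: motor controller, lock, lights."""
    def __init__(self):
        self.locked = True
        self.max_speed_kph = 25

    def unlock(self):
        self.locked = False

    def set_speed_limit(self, kph):
        # e.g. enforce a city slow zone by capping the motor controller
        self.max_speed_kph = kph

class FleetManager:
    """Operational layer: applies city rules across the whole fleet."""
    def __init__(self, vehicles):
        self.vehicles = vehicles

    def enter_slow_zone(self, kph=10):
        for v in self.vehicles:
            v.set_speed_limit(kph)

fleet = FleetManager([VehicleFirmware() for _ in range(3)])
fleet.enter_slow_zone()                 # regulation pushed down via software
print(fleet.vehicles[0].max_speed_kph)  # -> 10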

AI-driven audio cloning startup gives voice to Einstein chatbot

By Natasha Lomas

You’ll need to prick up your ears for this slice of deepfakery emerging from the wacky world of synthesized media: A digital version of Albert Einstein — with a synthesized voice that’s been (re)created using AI voice cloning technology drawing on audio recordings of the famous scientist’s actual voice.

The startup behind the ‘uncanny valley’ audio deepfake of Einstein is Aflorithmic (whose seed round we covered back in February).

The video engine powering the 3D character rendering components of this ‘digital human’ version of Einstein is the work of another synthesized media company — UneeQ — which is hosting the interactive chatbot version on its website.

Aflorithmic says the ‘digital Einstein’ is intended as a showcase for what will soon be possible with conversational social commerce. Which is a fancy way of saying deepfakes that mimic historical figures will probably be trying to sell you pizza soon enough, as industry watchers have presciently warned.

The startup also says it sees educational potential in bringing famous, long-deceased figures to interactive ‘life’.

Or, well, an artificial approximation of it — the ‘life’ being purely virtual and Digital Einstein’s voice not being a pure tech-powered clone either; Aflorithmic says it also worked with an actor to do voice modelling for the chatbot (because how else was it going to get Digital Einstein to say words the real deal would never even have dreamt of saying — like, er, ‘blockchain’?). So there’s a bit more than AI artifice going on here too.

“This is the next milestone in showcasing the technology to make conversational social commerce possible,” Aflorithmic’s COO Matt Lehmann told us. “There are still more than one flaws to iron out as well as tech challenges to overcome but overall we think this is a good way to show where this is moving to.”

In a blog post discussing how it recreated Einstein’s voice, the startup writes about progress it made on one challenging element of the chatbot version — saying it was able to shrink the response time between receiving input text from the computational knowledge engine and its API rendering a voiced response, down from an initial 12 seconds to less than three (which it dubs “near-real-time”). But it’s still enough of a lag to ensure the bot can’t escape from being a bit tedious.
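
The shape of that pipeline — and why the lag is the sum of its hops — can be sketched in a few lines. The functions below are hypothetical stand-ins for the knowledge engine and the voice-rendering API, with sleeps in place of real work; neither company’s actual API is shown.

# Hypothetical stand-ins for the two hops described above; sleeps substitute
# for real computation and synthesis.
import time

def knowledge_engine_answer(question):
    time.sleep(0.4)                  # stand-in for computing the answer text
    return "Imagination is more important than knowledge."

def render_voice(text):
    time.sleep(1.8)                  # stand-in for synthesizing the audio
    return b"...audio bytes..."

start = time.time()
audio = render_voice(knowledge_engine_answer("What matters more than knowledge?"))
print(f"end-to-end lag: {time.time() - start:.1f}s")   # the lag cut from ~12s to <3s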

Laws that protect people’s data and/or image, meanwhile, mean that creating such ‘digital clones’ of living humans presents a legal and ethical minefield — at least without asking (and most likely paying) first.

Of course historical figures aren’t around to ask awkward questions about the ethics of their likeness being appropriated for selling stuff (or about the cloning technology itself, at this nascent stage). Though licensing rights may still apply — and do in fact in the case of Einstein.

“His rights lie with the Hebrew University of Jerusalem who is a partner in this project,” says Lehmann, before ‘fessing up to the artistic licence element of the Einstein ‘voice cloning’ performance. “In fact, we actually didn’t clone Einstein’s voice as such but found inspiration in original recordings as well as in movies. The voice actor who helped us modelling his voice is a huge admirer himself and his performance captivated the character Einstein very well, we thought.”

Turns out the truth about high-tech ‘lies’ is itself a bit of a layer cake. But with deepfakes it’s not the sophistication of the technology that matters so much as the impact the content has — and that’s always going to depend upon context. And however well (or badly) the faking is done, how people respond to what they see and hear can shift the whole narrative — from a positive story (creative/educational synthesized media) to something deeply negative (alarming, misleading deepfakes).

Concern about the potential for deepfakes to become a tool for disinformation is rising, too, as the tech gets more sophisticated — helping to drive moves toward regulating AI in Europe, where the two main entities responsible for ‘Digital Einstein’ are based.

Earlier this week a leaked draft of an incoming legislative proposal on pan-EU rules for ‘high risk’ applications of artificial intelligence included some sections specifically targeted at deepfakes.

Under the plan, lawmakers look set to propose “harmonised transparency rules” for AI systems that are designed to interact with humans and those used to generate or manipulate image, audio or video content. So a future Digital Einstein chatbot (or sales pitch) is likely to need to unequivocally declare itself artificial before it starts faking it — to avoid the need for Internet users to have to apply a virtual Voight-Kampff test.

For now, though, the erudite-sounding interactive Digital Einstein chatbot still has enough of a lag to give the game away. Its makers are also clearly labelling their creation in the hopes of selling their vision of AI-driven social commerce to other businesses.

Oxbotica raises $13.8M from Ocado to build autonomous vehicle tech for the online grocer’s logistics network

By Ingrid Lunden

Ocado, the UK online grocer that has been making strides reselling its technology to other grocery companies to help them build and run their own online ordering-and-delivery operations, is making an investment today into what it believes will be the next chapter of how that business will grow: it is taking a £10 million ($13.8 million) stake in Oxbotica, a UK startup that develops autonomous driving systems.

Ocado is treating this as a strategic investment to develop AI-powered, self-driving systems that will work across its operations, from vehicles within and around its packing warehouses through to the last-mile vehicles that deliver grocery orders to people’s homes. It says it expects the first products to come out of this deal — most likely in closed environments like warehouses rather than the less structured prospect of open streets — to be online in two years.

“We are not constraining ourselves to work in any one use case,” said Alex Harvey, chief of advanced technology at Ocado, in an interview. But to roll out autonomous systems everywhere, he added, “we realize there are areas where we will need regulatory compliance,” among other factors. The deal is non-exclusive, and both can work with other partners if they choose, the companies confirmed.

The investment is coming as an extension to Oxbotica’s Series B that it announced in January, bringing the total size of the round — which was led by bp ventures, the investing arm of oil and gas giant bp, and also included BGF, safety equipment maker Halma, pension fund HostPlus, IP Group, Tencent, Venture Science and funds advised by Doxa Partners — to over $60 million. Oxbotica has not disclosed valuation but Paul Newman, co-founder and CTO of Oxbotica, confirmed in an interview that the valuation went up with this latest investment.

The timing of the news is very interesting. It comes just one day (less than 24 hours, in fact) after Walmart in the US took a stake in Cruise, another autonomous tech company, as part of a recent $2.75 billion monster round.

Walmart, until February, owned one of Ocado’s big competitors in the UK, ASDA; and Ocado has made its first forays into the US, by way of its deal to power Kroger’s online grocery business, which went live this week, too. So it seems that competition between these two is heating up on the food front.

More generally, there has been a huge surge in the world of online grocery order and delivery services in the last year. Earlier movers like online-only Ocado, Tesco in the UK (which owns both physical stores and online networks), and Instacart in the US have seen record demand, but they have also been joined by a lot of competition from well-capitalized newer entrants also keen to seize that opportunity, and bringing different approaches (next-hour delivery, smaller baskets, specific products) to do so.

In Ocado’s home patch of Europe, other big names looking to extend outside of their home turfs include Oda (formerly Kolonial); Rohlik out of the Czech Republic (which in March bagged $230 million in funding); Everli out of Italy (formerly called Supermercato24, it raised $100 million); and Picnic out of the Netherlands (which has yet to announce any recent funding but it feels like it’s only a matter of time given it too has publicly laid out international ambitions). Even Ocado has raised huge amounts of money to pursue its own international ambitions. And that’s before you consider the dozens of next-hour, smaller-basket grocery delivery plays.

A lot of these companies had a big year last year, not least because of the pandemic, which drove many people to stay at home and away from places where they might catch and spread the Covid-19 virus.

But now, the big question will be how that market will look in the future as people go back to “normal” life.

As we pointed out earlier this week, Ocado has already laid out how demand is lower, although still higher than pre-pandemic times. And indeed, the new-new normal (if we can call it that) may well see the competitive landscape tighten some more.

That could also be one reason why companies like Ocado are putting more money into working on what might be the next generation of services: one more efficient and run purely (or at least mostly) on technology.

The rationale for forking out big for autonomous tech — still largely untested and very, very expensive technology — is a long-term play on saving money. Logistics today accounts for some 10% of the total cost of a grocery delivery operation. But that figure goes up when there is peak demand or anything that disrupts regularly scheduled services.

My guess is also that with all of the subsidized services that are flying about right now, where you see free deliveries or discounts on groceries to encourage new business — a result of the market getting so competitive — those logistics have bled into being an even bigger cost.

So it’s no surprise to see the biggest players in this space looking at ways they might leverage advances in technology to cut those costs and speed up how those operations work, even if the payoff is a promise of savings in years, not weeks. Of course investors might see it otherwise if that doesn’t go to plan.

In addition to this collaboration with Oxbotica, Ocado said it will be looking to make more investments and/or partnerships as it grows and develops its autonomous vehicle capabilities. While this is the company’s first investment into Oxbotica, it has made a number of investments into other startups and collaborated to work on the next stage of technology. This has included research to build a robotic arm — robotic pickers being something it will be introducing soon — as well as the recent acquisition of two robotics companies, Kindred and Haddington, for $262 million; and investments in robotics startups Karakuri and Myrmex, and more.

Notably, Oxbotica and Ocado are not strangers. They started to work together on a delivery pilot back in 2017.

“This is an excellent opportunity for Oxbotica and Ocado to strengthen our partnership, sharing our vision for the future of autonomy,” said Newman, in a statement. “By combining both companies’ cutting-edge knowledge and resources, we hope to bring our Universal Autonomy vision to life and continue to solve some of the world’s most complex autonomy challenges.”

But as with all self-driving technology — incredibly complex and full of regulatory and safety hurdles — we are still fairly far from full commercial systems that actually remove people from the equation completely.

“For both regulatory and complexity reasons, Ocado expects that the development of vehicles that operate in low-speed urban areas or in restricted access areas, such as inside its CFC buildings or within its CFC yards, may become a reality sooner than fully-autonomous deliveries to consumers’ homes,” Ocado notes in its statement on the deal. “However, all aspects of autonomous vehicle development will be within the scope of this collaboration. Ocado expects to see the first prototypes of some early use cases for autonomous vehicles within two years.”

Newman noted that while on-street self-driving might still be some years away, it’s less of a moonshot concept today than it used to be, and that Oxbotica is on the road to it already. “You can get to the moon in stages,” he said.

Updated with interviews with both companies, and to correct that Walmart closed its deal to sell ASDA in February.

MEPs call for European AI rules to ban biometric surveillance in public

By Natasha Lomas

A cross-party group of 40 MEPs in the European parliament has called on the Commission to strengthen an incoming legislative proposal on artificial intelligence to include an outright ban on the use of facial recognition and other forms of biometric surveillance in public places.

They have also urged EU lawmakers to outlaw automated recognition of people’s sensitive characteristics (such as gender, sexuality, race/ethnicity, health status and disability) — warning that such AI-fuelled practices pose too great a rights risk and can fuel discrimination.

The Commission is expected to present its proposal for a framework to regulate ‘high risk’ applications of AI next week — but a copy of the draft leaked this week (via Politico). And, as we reported earlier, this leaked draft does not include a ban on the use of facial recognition or similar biometric remote identification technologies in public places, despite acknowledging the strength of public concern over the issue.

“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs write now in a letter to the Commission which they’ve also made public.

They go on to warn over the risks of discrimination through automated inference of people’s sensitive characteristics — such as in applications like predictive policing or the indiscriminate monitoring and tracking of populations via their biometric characteristics.

“This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and have a chilling effect on everyone’s autonomy, dignity and self-expression – which in particular can seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups,” the MEPs write, calling on the Commission to amend the AI proposal to outlaw the practice in order to protect EU citizens’ rights and the rights of communities who face a heightened risk of discrimination (and therefore heightened risk from discriminatory tools supercharged with AI).

“The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics,” they add.

The leaked draft of the Commission’s proposal does tackle indiscriminate mass surveillance — proposing to prohibit this practice, as well as outlawing general purpose social credit scoring systems.

However the MEPs want lawmakers to go further — warning over weaknesses in the wording of the leaked draft and suggesting changes to ensure that the proposed ban covers “all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system”.

They also express alarm at the proposal having an exemption on the prohibition on mass surveillance for public authorities (or commercial entities working for them) — warning that this risks deviating from existing EU legislation and from interpretations by the bloc’s top court in this area.

“We strongly protest the proposed second paragraph of this Article 4 which would exempt public authorities and even private actors acting on their behalf ‘in order to safeguard public security’,” they write. “Public security is precisely what mass surveillance is being justified with, it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.”

“This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance,” they continue. “The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.”

The Commission has been contacted for comment on the MEPs’ calls but is unlikely to respond ahead of the official reveal of the draft AI regulation — which is expected around the middle of next week.

It remains to be seen whether the AI proposal will undergo any significant amendments between now and then. But MEPs have fired a swift warning shot that fundamental rights must and will be a key feature of the co-legislative debate — and that lawmakers’ claims of a framework to ensure ‘trustworthy’ AI won’t look credible if the rules don’t tackle unethical technologies head on.

Tecton teams with founder of Feast open source machine learning feature store

By Ron Miller

Tecton, the company that pioneered the notion of the machine learning feature store, has teamed up with the founder of the open source feature store project called Feast. Today the company announced the release of version 0.10 of the open source tool.

The feature store is a concept that the Tecton founders came up with when they were engineers at Uber. Shortly thereafter an engineer named Willem Pienaar read the founders’ Uber blog posts on building a feature store and went to work building Feast as an open source version of the concept.

“The idea of Tecton [involved bringing] feature stores to the industry, so we build basically the best in class, enterprise feature store. […] Feast is something that Willem created, which I think was inspired by some of the early designs that we published at Uber. And he built Feast and it evolved as kind of like the standard for open source feature stores, and it’s now part of the Linux Foundation,” Tecton co-founder and CEO Mike Del Balso explained.

Tecton later hired Pienaar, who is today an engineer at the company where he leads their open source team. While the company did not originally start off with a plan to build an open source product, the two products are closely aligned, and it made sense to bring Pienaar on board.

“The products are very similar in a lot of ways. So I think there’s a similarity there that makes this somewhat symbiotic, and there is no explicit convergence necessary. The Tecton product is a superset of what Feast has. So it’s an enterprise version with a lot more advanced functionality, but at Feast we have a battle-tested feature store that’s open source,” Pienaar said.

As we wrote in a December 2020 story on the company’s $35 million Series B, it describes a feature store as “an end-to-end machine learning management system that includes the pipelines to transform the data into what are called feature values, then it stores and manages all of that feature data and finally it serves a consistent set of data.”
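
Those three jobs — transform, store, serve — can be shown in a minimal conceptual sketch. This illustrates the concept only; it is not the Feast or Tecton API, and the schema is invented.

# Conceptual sketch of a feature store's three jobs; not the Feast/Tecton API.
raw_events = [
    {"driver_id": 1, "trip_minutes": 12},
    {"driver_id": 1, "trip_minutes": 30},
    {"driver_id": 2, "trip_minutes": 7},
]

# 1. Transform: a pipeline turns raw events into feature values.
def compute_features(events):
    totals = {}
    for e in events:
        d = totals.setdefault(e["driver_id"], {"trips": 0, "minutes": 0})
        d["trips"] += 1
        d["minutes"] += e["trip_minutes"]
    return {k: {"avg_trip_minutes": v["minutes"] / v["trips"]} for k, v in totals.items()}

# 2. Store: keep the computed values keyed by entity.
feature_store = compute_features(raw_events)

# 3. Serve: training and production read the same, consistent values.
def get_online_features(driver_id):
    return feature_store[driver_id]

print(get_online_features(1))   # {'avg_trip_minutes': 21.0}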

Del Balso says that from a business perspective, contributing to the open source feature store exposes his company to a different group of users, and the commercial and open source products can feed off one another as they build the two products.

“What we really like, and what we feel is very powerful here, is that we’re deeply in the Feast community and get to learn from all of the interesting use cases […] to improve the Tecton product. And similarly, we can use the feedback that we’re hearing from our enterprise customers to improve the open source project. That’s the kind of cross learning, and ideally that feedback loop involved there,” he said.

The plan is for Tecton to continue being a primary contributor with a team inside Tecton dedicated to working on Feast. Today, the company is releasing version 0.10 of the project.

Sales scheduling platform Chili Piper raises $33M Series B funding led by Tiger Global

By Mike Butcher

Chili Piper, which has a sophisticated SaaS appointment scheduling platform for sales teams, has raised a $33 million B round led by Tiger Global. Existing investors Base10 Partners and Gradient Ventures (Google’s AI-focused VC) also participated. This brings the company’s total financing to $54 million. The company will use the capital raised to accelerate product development. The previous $18 million A round, led by Base10 and Gradient Ventures, closed nine months ago.

Its main competitor is Calendly, started two and a half years earlier, which recently achieved a $3 billion valuation.

Launched in 2016, Chili Piper’s software for B2B revenue teams is designed to convert leads into attended meetings. Sales teams can also use it to book demos, increase inbound conversion rates, eliminate manual lead routing, and streamline critical processes around meetings. It’s used by Intuit, Twilio, Forrester, Spotify, and Gong.

Chili Piper has a number of different tools for businesses to schedule and calendar appointments, but its key USP is its use by inbound Sales Development Representatives (SDRs), who are responsible for qualifying inbound sales leads. It’s particularly useful for scheduling calls when customers hit websites and ask for a salesperson to call them back.

Nicolas Vandenberghe, CEO, and co-founder of Chili Piper said: “When we started we sold the house and decided to grow the company ourselves. So all the way until 2019 we bootstrapped. Tiger gave us a valuation that we expected to get at the end of this year, which will help us accelerate things much faster, so we couldn’t refuse it.”

Alina Vandenberghe, CPO, and Co-founder said: “We’re proud to have so many customers scheduling meetings and optimizing their calendars with Chili Piper’s Instant Booker.”

Since the pandemic hit, the husband-and-wife-founded company has gone fully remote, with 93 employees in 81 cities and 21 countries.

John Curtius, Partner at Tiger Global said: “When we met Nicolas and Alina, we were fired up by their product vision and focus on customer happiness.”

TJ Nahigian, Managing Partner at Base10 Partners, added: “We originally invested in Chili Piper because we knew customers needed ways to add fire to how they connected with inbound leads. We’ve been absolutely blown away with the progress over the past year, 2020 has been a step-change for this company as business went remote.”

C2i, a genomics SaaS product to detect traces of cancer, raises $100M Series B

By Marcella McCarthy

If you or a loved one has ever undergone a tumor removal as part of cancer treatment, you’re likely familiar with the period of uncertainty and fear that follows. Will the cancer return, and if so, will the doctors catch it at an early enough stage? C2i Genomics has developed software that’s 100x more sensitive in detecting residual disease, and investors are pouncing on the potential. Today, C2i announced a $100 million Series B led by Casdin Capital. 

“The biggest question in cancer treatment is, ‘Is it working?’ Some patients are getting treatment they don’t benefit from and they are suffering the side effects while other patients are not getting the treatment they need,” said Asaf Zviran, co-founder and CEO of C2i Genomics in an interview.

Historically, the main approach to cancer detection post-surgery has been through the use of MRI or X-ray, but neither of those methods gets super accurate until the cancer progresses to a certain point. As a result, a patient’s cancer may return, but it may be a while before doctors are able to catch it.

Using C2i’s technology, doctors can order a liquid biopsy — essentially a blood draw that looks for DNA. From there they can sequence the entire genome and upload it to the C2i platform. The software then looks at the sequence and identifies faint patterns that indicate the presence of cancer, and whether it’s growing or shrinking.
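
Why can genome-wide analysis pick up what no single site reveals? A rough statistical illustration: pool thousands of weak per-site signals and compare the total against the expected sequencing-error background. All numbers below are invented, and this is only a caricature of the idea — C2i’s actual model is not shown here.

# Invented numbers; illustrates genome-wide aggregation, not C2i's model.
import math

error_rate = 1e-4              # assumed per-base sequencing error rate
sites = 10_000                 # tumor-specific sites tracked across the genome
depth = 100                    # sequencing reads per site
observed_mutant_reads = 160    # reads supporting tumor variants, summed genome-wide

expected = error_rate * sites * depth        # background expectation: 100 reads
std = math.sqrt(expected)                    # Poisson-style noise model
z = (observed_mutant_reads - expected) / std
print(f"z = {z:.1f}")   # ~6 sigma in aggregate, while any single site looks like noise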

“C2i is basically providing the software that allows the detection and monitoring of cancer to a global scale. Every lab with a sequencing machine can process samples, upload to the C2i platform and provide detection and monitoring to the patient,” Zviran told TechCrunch.

C2i Genomics’ solution is based on research performed at the New York Genome Center (NYGC) and Weill Cornell Medicine (WCM) by Dr. Zviran, along with Dr. Dan Landau, faculty member at the NYGC and assistant professor of medicine at WCM, who serves as scientific co-founder and member of C2i’s scientific advisory board. The research and findings have been published in the medical journal Nature Medicine.

While the product is not FDA-approved yet, it’s already being used in clinical research and drug development research at NYU Langone Health, the National Cancer Center of Singapore, Aarhus University Hospital and Lausanne University Hospital.

If and when approved, New York-based C2i has the potential to drastically change cancer treatment, including in the areas of organ preservation. For example, some people have functional organs, such as the bladder or rectum, removed to prevent cancer from returning, leaving them disabled. But what if the unnecessary surgeries could be avoided? That’s one goal that Zviran and his team have their minds set on achieving.

For Zviran, this story is personal. 

“I started my career very far from cancer and biology, and at the age of 28 I was diagnosed with cancer and I went for surgery and radiation. My father and then both of my in-laws were also diagnosed, and they didn’t survive,” he said.

Zviran, who today has a PhD in molecular biology, was previously an engineer with the Israeli Defense Force and some private companies. “As an engineer, looking into this experience, it was very alarming to me about the uncertainty on both the patients’ and physicians’ side,” he said.

This round of funding will be used to accelerate clinical development and commercialization of the company’s C2-Intelligence Platform. Other investors that participated in the round include NFX, Duquesne Family Office, Section 32 (Singapore), iGlobe Partners and Driehaus Capital.

Bigeye (formerly Toro) scores $17M Series A to automate data quality monitoring

By Ron Miller

As companies create machine learning models, the operations team needs to ensure the data used for the model is of sufficient quality, a process that can be time consuming. Bigeye (formerly Toro), an early-stage startup, is helping by automating data quality monitoring.

Today the company announced a $17 million Series A led by Sequoia Capital with participation from existing investor Costanoa Ventures. That brings the total raised to $21 million, including the $4 million seed round the startup raised last May.

When we spoke to Bigeye CEO and co-founder Kyle Kirwan last May, he said the seed round was going to be focused on hiring a team — they are 11 now — and building more automation into the product, and he says they have achieved that goal.

“The product can now automatically tell users what data quality metrics they should collect from their data, so they can point us at a table in Snowflake or Amazon Redshift or whatever and we can analyze that table and recommend the metrics that they should collect from it to monitor the data quality — and we also automated the alerting,” Kirwan explained.

He says that the company is focusing on data operations issues on the input side of the model, such as a table that isn’t updating when it’s supposed to, is missing rows or contains duplicate entries. Bigeye can automate alerts for those kinds of issues and speed up the process of getting model data ready for training and production.
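
As a hedged sketch of what such automated checks might look like — freshness, row volume and duplicate detection over a table, with alerts when a metric drifts — consider the following. The schema and thresholds are hypothetical, not Bigeye’s product code.

# Hypothetical schema and thresholds; not Bigeye's product code.
from datetime import datetime

rows = [
    {"id": 1, "updated_at": datetime(2021, 4, 21)},
    {"id": 2, "updated_at": datetime(2021, 4, 22)},
    {"id": 2, "updated_at": datetime(2021, 4, 22)},   # duplicate id
]

def collect_metrics(rows, now=datetime(2021, 4, 22, 12)):
    ids = [r["id"] for r in rows]
    freshest = max(r["updated_at"] for r in rows)
    return {
        "row_count": len(rows),
        "duplicate_ids": len(ids) - len(set(ids)),
        "hours_since_update": (now - freshest).total_seconds() / 3600,
    }

def alerts(m, expected_rows=3, max_staleness_hours=24):
    if m["duplicate_ids"]:
        yield f"{m['duplicate_ids']} duplicate id(s) found"
    if m["row_count"] < expected_rows:
        yield "table is missing rows"
    if m["hours_since_update"] > max_staleness_hours:
        yield "table is not updating on schedule"

for a in alerts(collect_metrics(rows)):
    print("ALERT:", a)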

Bogomil Balkansky, the partner at Sequoia who is leading today’s investment sees the company attacking an important part of the machine learning pipeline. “Having spearheaded the data quality team at Uber, Kyle and Egor have a clear vision to provide always-on insight into the quality of data to all businesses,” Balkansky said in a statement.

As the founding team begins building the company, Kirwan says that building a diverse team is a key goal for them and something they are keenly aware of.

“It’s easy to hire a lot of other people that fit a certain mold, and we want to be really careful that we’re doing the extra work to [understand that just because] it’s easy to source people within our network, we need to push and make sure that we’re hiring a team that has different backgrounds and different viewpoints and different types of people on it because that’s how we’re going to build the strongest team,” he said.

Bigeye offers on-prem and SaaS solutions, and while it’s working with paying customers like Instacart, Crux Informatics and Lambda School, the product won’t be generally available until later in the year.

How to pivot your startup, save cash and maintain trust with investors and customers

By Connie Loizos

A few years ago, founder Sean Lane thought he’d achieved product-market fit.

Speaking to attendees at TechCrunch’s Early Stage virtual event, Lane said Queue, a secure digital check-in tablet for hospital waiting rooms that reduced wait times by uniting and correcting electronic medical records, was “selling like hotcakes.” But once Lane realized it would only ever address one piece of a much bigger market opportunity, he sold off the product, laid off two-thirds of the people affiliated with it and redirected the employees who were left.

Lane explained that what he really wanted to build is what his company — since renamed Olive — has now become, a robotic process automation (RPA) company that takes on hospital workers’ most tedious tasks so nurses and physicians can spend more time with patients.

Customers seem to like it. According to Lane, more than 600 hospitals use the service to assist employees with tasks like prior authorizations and patient verifications.

Investors clearly approve of what Olive is selling, too: Last year, the company raised three rounds of funding totaling roughly $380 million and valuing the company at $1.5 billion. According to Crunchbase, it’s raised a total of $456 million altogether.

In fact, VCs think so much of Lane that in February, they invested $50 million in another company that Lane runs simultaneously called Circulo, a startup that describes itself as building the “Medicaid insurance company of the future.”

Still, the path from point A to B was painful, and it might not have happened if Lane didn’t have a few things going for him, including a deeply personal reason to build something that could have greater impact on the U.S. healthcare system.

Deepfake video app Avatarify, which processes on-phone, plans digital watermark for videos

By Mike Butcher

Making deepfake videos used to be hard. Now all you need is a smartphone. Avatarify, a startup that allows people to make deepfake videos directly on their phone rather than in the cloud, is soaring up the app charts after being used by celebrities such as Victoria Beckham.

However, the problem with many deepfake videos is that there is no digital watermark to show that the video has been manipulated. So Avatarify says it will soon launch a digital watermark so that its videos can be identified as synthetic.

Run out of Moscow but with a U.S. HQ, Avatarify launched in July 2020 and since then has been downloaded millions of times. The founders say that 140 million deepfake videos were created with Avatarify this year alone. There are now 125 million views of videos with the hashtag #avatarify on TikTok. While its competitors include the well-funded Reface, Snapchat, Wombo.ai, Mug Life and Xpression, Avatarify has yet to raise any money beyond an angel round.

Despite taking only $120,000 in angel funding, the company has yet to accept any venture capital; it says it has bootstrapped its way from zero to almost 10 million downloads and claims a $10 million annual run rate with a team of fewer than 10 people.

It’s not hard to see why. Avatarify has a freemium subscription model. They offer a 7-day free trial and a 12-month subscription for $34.99 or a weekly plan for $2.49. Without a subscription, they offer the core features of the app for free, but videos then carry a visible watermark.

The founders also say the app protects privacy, because the videos are processed directly on the phone, rather than in the cloud where they could be hacked.

Avatarify processes users’ photos and turns them into short videos by animating faces, using machine learning algorithms and adding sounds. The user chooses a picture they want to animate, chooses the effects and music, and then taps to animate the picture. This short video can then be posted on Instagram or TikTok.

The Avatarify videos are taking off on TikTok because teens no longer need to learn a dance or be much more creative than finding a photo of a celebrity to animate.

Avatarify says you can’t use its app to impersonate someone, but there is of course no way to police this.

Co-founders Ali Aliev and Karim Iskakov wrote the app during the COVID-19 lockdown in April 2020. Ali spent two hours writing a program in Python to transfer his facial expressions to another person’s face in real time, usable as a filter in Zoom. He joined a call with Elon Musk’s face and everyone on the call was shocked. The team posted the video, which then went viral.

They posted the code on Github and immediately saw the number of downloads grow. The repository was published on 6 April 2020, and as of 19 March 2021 had been downloaded 50,000 times.

Ali left his job at Samsung AI Centre and devoted himself to the app. After Avatarify’s iOS app was released on 28 June 2020, viral videos on TikTok created with the app led it to the App Store’s top charts without paid acquisition. In February 2021, Avatarify was ranked first among Top Free Apps worldwide. Between February and March 2021, the app generated more than $1 million in revenue (source: AppMagic).

However, despite Avatarify’s success, the ongoing problems with deepfake videos remain, such as the use of these apps to make nonconsensual porn using the faces of innocent people.

Uber hit with default ‘robo-firing’ ruling after another EU labor rights GDPR challenge

By Natasha Lomas

Labor activists challenging Uber over what they allege are ‘robo-firings’ of drivers in Europe have trumpeted winning a default judgement in the Netherlands — where the Court of Amsterdam ordered the ride-hailing giant to reinstate six drivers who the litigants claim were unfairly terminated “by algorithmic means”.

The court also ordered Uber to pay the fired drivers compensation.

The challenge references Article 22 of the European Union’s General Data Protection Regulation (GDPR) — which provides protections for individuals against purely automated decisions with a legal or significant impact.

The activists say this is the first time a court has ordered the overturning of an automated decision to dismiss workers from employment.

However the judgement, issued on February 24, was handed down by default — and Uber says it was not aware of the case until last week, claiming that was why it did not contest it (nor, indeed, comply with the order).

It had until March 29 to do so, per the litigants, who are being supported by the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE).

Uber argues the default judgement was not correctly served and says it is now making an application to set the default ruling aside and have its case heard “on the basis that the correct procedure was not followed”.

It envisages the hearing taking place within four weeks of its Dutch entity, Uber BV, being made aware of the judgement — which it says occurred on April 8.

“Uber only became aware of this default judgement last week, due to representatives for the ADCU not following proper legal procedure,” an Uber spokesperson told TechCrunch.

A spokesperson for WIE denied that correct procedure was not followed but welcomed the opportunity for Uber to respond to questions over how its driver ID systems operate in court, adding: “They [Uber] are out of time. But we’d be happy to see them in court. They will need to show meaningful human intervention and provide transparency.”

Uber pointed to a separate judgement by the Amsterdam Court last month — which rejected another ADCU- and WIE-backed challenge to Uber’s anti-fraud systems, with the court accepting its explanation that algorithmic tools are mere aids to human ‘anti-fraud’ teams who it said take all decisions on terminations.

“With no knowledge of the case, the Court handed down a default judgement in our absence, which was automatic and not considered. Only weeks later, the very same Court found comprehensively in Uber’s favour on similar issues in a separate case. We will now contest this judgement,” Uber’s spokesperson added.

However WIE said this default judgement ‘robo-firing’ challenge specifically targets Uber’s Hybrid Real Time ID System — a system that incorporates facial recognition checks and which labor activists recently found mis-identifying drivers in a number of instances.

It also pointed to a separate development this week in the UK, where it said the City of London Magistrates Court ordered the city’s transport regulator, TfL, to reinstate the licence of one of the drivers, which it had revoked after Uber routinely notified it of a dismissal (also triggered by Uber’s real-time ID system, per WIE).

Reached for comment on that, a TfL spokesperson said: “The safety of the travelling public is our top priority and where we are notified of cases of driver identity fraud, we take immediate licensing action so that passenger safety is not compromised. We always require the evidence behind an operator’s decision to dismiss a driver and review it along with any other relevant information as part of any decision to revoke a licence. All drivers have the right to appeal a decision to remove a licence through the Magistrates’ Court.”

The regulator has been applying pressure to Uber since 2017 when it took the (shocking to Uber) decision to revoke the company’s licence to operate — citing safety and corporate governance concerns.

Since then Uber has been able to continue to operate in the UK capital but the company remains under pressure to comply with a laundry list of requirements set by TfL as it tries to regain a full operator licence.

Commenting on the default Dutch judgement on the Uber driver terminations in a statement, James Farrar, director of WIE, accused gig platforms of “hiding management control in algorithms”.

“For the Uber drivers robbed of their jobs and livelihoods this has been a dystopian nightmare come true,” he said. “They were publicly accused of ‘fraudulent activity’ on the back of poorly governed use of bad technology. This case is a wake-up call for lawmakers about the abuse of surveillance technology now proliferating in the gig economy. In the aftermath of the recent UK Supreme Court ruling on worker rights gig economy platforms are hiding management control in algorithms. This is misclassification 2.0.”

In another supporting statement, Yaseen Aslam, president of the ADCU, added: “I am deeply concerned about the complicit role Transport for London has played in this catastrophe. They have encouraged Uber to introduce surveillance technology as a price for keeping their operator’s license and the result has been devastating for a TfL licensed workforce that is 94% BAME. The Mayor of London must step in and guarantee the rights and freedoms of Uber drivers licensed under his administration.”  

When pressed on the driver termination challenge being specifically targeted at its Hybrid Real-Time ID system, Uber declined to comment in greater detail — claiming the case is “now a live court case again”.

But its spokesman suggested it will seek to apply the same defence against the earlier ‘robo-firing’ charge — when it argued its anti-fraud systems do not equate to automated decision making under EU law because “meaningful human involvement [is] involved in decisions of this nature”.

 

EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

By Natasha Lomas

European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.

The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.

At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.

Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). Although it’s not abundantly clear from this draft exactly how ‘high risk’ will be defined.

The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.

Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating [in] the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.

What’s high risk AI?

Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.

“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.

“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.

Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”
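
Read literally, the recital’s two-step test could be sketched as a function: first check whether the intended purpose implicates any listed harm, then weigh severity against probability of occurrence. The harm labels below paraphrase the draft’s list; the scoring and threshold are invented for illustration.

# Harm labels paraphrase the draft's list; scoring and threshold are invented.
from typing import Dict, Tuple

LISTED_HARMS = {
    "injury_or_death", "property_damage", "systemic_societal_impact",
    "essential_service_disruption", "economic_opportunity_impact",
    "public_service_access_impact", "fundamental_rights_impact",
}

def is_high_risk(potential_harms: Dict[str, Tuple[float, float]]) -> bool:
    """potential_harms maps a harm to (severity, probability), each 0..1."""
    # Step 1: does the intended purpose implicate any listed harm?
    relevant = {h: sp for h, sp in potential_harms.items() if h in LISTED_HARMS}
    if not relevant:
        return False
    # Step 2: weigh severity against probability of occurrence.
    return any(sev * prob >= 0.25 for sev, prob in relevant.values())

# e.g. a recruitment-screening system: severe, plausible harm to opportunities
print(is_high_risk({"economic_opportunity_impact": (0.9, 0.4)}))   # True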

Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.

So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met, such systems would not be barred from the EU market under the legislative plan.

Other requirements include security measures and that the AI achieves consistent accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.
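For illustration, that 15-day clock is simple enough to express in a few lines — assuming (the draft text here doesn’t specify) that calendar days are meant and that the clock starts on the day the provider becomes aware:

```python
# Hypothetical sketch of the 15-day incident-reporting window;
# calendar days and the clock's start date are assumptions.
from datetime import date, timedelta

def reporting_deadline(became_aware: date) -> date:
    """Latest date to notify the oversight authority."""
    return became_aware + timedelta(days=15)

print(reporting_deadline(date(2021, 4, 22)))  # 2021-05-07
```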

“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.

“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.

“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”

Prohibited practices and biometrics

Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.

AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.

A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However, that assumes adtech giants will accept that their tools have a detrimental impact on users.

On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like a recipe for (yet) more long-drawn-out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.

The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”

It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — something it had apparently been considering, per a draft that leaked in early 2020, before the February 2020 White Paper steered away from a ban.

In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).

“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”

AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.

Conformity assessment for high risk AIs is envisaged as an ongoing obligation, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”

“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.
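As a hypothetical sketch (the field names below are invented, not the draft’s), that reassessment trigger might boil down to logic like this:

```python
# Hypothetical sketch of the draft's reassessment trigger -- the
# field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class SystemChange:
    affects_compliance: bool  # may the change affect compliance with the Regulation?
    purpose_changed: bool     # has the intended purpose changed?
    predetermined: bool       # for 'learning' systems: was this adaptation
                              # assessed at the original conformity assessment?

def needs_new_assessment(change: SystemChange) -> bool:
    # Compliance-relevant changes or a changed intended purpose
    # trigger a fresh conformity assessment...
    if change.affects_compliance or change.purpose_changed:
        return True
    # ...as does any post-market adaptation that was not
    # pre-determined when the system was originally assessed.
    return not change.predetermined
```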

The carrot for compliant businesses is the right to display a ‘CE’ mark, helping them win the trust of users and gain friction-free access across the bloc’s single market.

“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”

Transparency for bots and deepfakes

As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market, and to conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.

It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).

“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.

“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”
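Stripped to its mechanics — and with the exemption list and field names paraphrased here for illustration, not lifted from the draft — the labelling obligation amounts to attaching a disclosure unless a carve-out applies:

```python
# Hypothetical sketch of the deepfake labelling obligation; the
# exemption list paraphrases the draft's carve-outs.
EXEMPTIONS = {"public_security", "satire", "parody", "arts_and_sciences"}

def label_if_required(metadata: dict, purpose: str) -> dict:
    # Attach the disclosure unless the stated purpose falls under
    # one of the carve-outs described above.
    if purpose not in EXEMPTIONS:
        metadata["disclosure"] = (
            "This content has been artificially created or manipulated."
        )
    return metadata
```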

What about enforcement?

While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen, and any violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).

So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.

We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate.

“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.

The Commission does add a caveat — about potentially stepping in should Member State enforcement fail to deliver. But there’s no near-term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.

“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.

The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.

 
