TechCrunch Sessions: Robotics + AI brings together a wide group of the ecosystem’s leading minds on March 3 at UC Berkeley. More than 1,000 attendees are expected from all facets of the robotics and artificial intelligence space – investors, students, engineers, C-levels, technologists, and researchers. We’ve compiled a short list highlighting the companies and job titles of attendees at this year’s event below.
STUDENTS & RESEARCHERS FROM:
Did you know that TechCrunch provides a white-glove networking app at all our events called CrunchMatch? You can connect and match with people who meet your specific requirements, message them, and connect right at the conference. How cool is that!?
Want to get in on networking with this caliber of people? Book your $345 General Admission ticket today and save $50 before prices go up at the door. But no one likes going to events alone. Why not bring the whole team? Groups of four or more save 15% on tickets when you book here.
We’ve been dropping into the Australian startup scene more and more over the years as the ecosystem has been building at an ever-faster pace, most notably at our own TechCrunch Battlefield Australia in 2017. Further evidence that the scene is growing has come recently in the shape of the Pause Fest conference in Melbourne. This event has gone from strength to strength in recent years and is fast becoming a must-attend for Aussie startups aiming for both national and international attention.
I was able to drop in ‘virtually’ to interview a number of those showcased in the Startup Pitch Competition, so here’s a run-down of some of the stand-out companies.
Medinet Australia is a health tech startup aiming to make healthcare more convenient and accessible to Australians by allowing doctors to do consultations with patients via an app. Somewhat similar to apps like Babylon Health, Medinet’s telehealth app allows patients to obtain clinical advice from a GP remotely; access prescriptions and have medications delivered; access pathology results; directly email their medical certificate to their employer; and access specialist referrals along with upfront information about specialists such as their fees, waitlist, and patient experience. They’ve raised $3M in Angel financing and are looking for institutional funding in due course. Given Australia’s vast distances, Medinet is well-placed to capitalize on the shift of the population towards much more convenient telehealth apps. (1st Place Winner)
Everty allows companies to easily manage, monitor and monetize Electric Vehicle charging stations. But this isn’t about infrastructure. Instead, they link up workplaces and accounting systems to the EV charging network, thus making it more like a “Salesforce for EV charging”. It’s available for both commercial and home charging tracking. It’s also raised an Angel round and is poised to raise further funding. (2nd Place Winner)
AI On Spectrum
It’s a sad fact that people with Autism statistically tend to die younger, and unfortunately, the suicide rate is much higher for Autistic people. “AI On Spectrum” takes an accessible approach in helping autistic kids and their families find supportive environments and feel empowered. The game encourages Autism sufferers to explore their emotional side and arms them with coping strategies when times get tough, applying AI and machine learning in the process to assist the user. (3rd Place Winner)
Professional bee-keepers need a fast, reliable, easy-to-use record keeper for their bees and this startup does just that. But it’s also developing a software+sensor technology to give beekeepers more accurate analytics, allowing them to get an early-warning about issues and problems. Their technology could even, in the future, be used to alert for coming bushfires by sensing the changed behavior of the bees. (Hacker Exchange Additional Winner)
Rechargeable batteries for things like cars can be re-used, but the key to employing them is being able to extend their lives. Relectrify says its battery control software can unlock the full performance from every cell, increasing battery cycle life. It will also reduce storage costs by providing AC output without needing a battery inverter, for both new and second-life batteries. Its advanced battery management system combines power electronics and monitoring to rapidly check which cells are stronger and which are weaker, making it possible to get as much as 30% more battery life, as well as deploying “2nd life storage”. So far, the company has a project with Nissan and American Electric Power and has raised a Series A of $4.5M. (SingularityU Additional Winner)
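Relectrify hasn’t published its algorithms, but the core idea behind per-cell battery management can be sketched in a few lines. In a conventional series pack, usable capacity is capped by the weakest cell; with per-cell control, every cell can contribute its full capacity. The cell figures below are hypothetical, chosen only to illustrate how uneven degradation in second-life cells leaves capacity on the table:

```python
def usable_capacity_pack_limited(cells_mah):
    """A conventional series pack is limited by its weakest cell:
    every cell can only be discharged as far as the worst one allows."""
    return min(cells_mah) * len(cells_mah)

def usable_capacity_per_cell(cells_mah):
    """With per-cell power electronics, each cell contributes its
    full individual capacity regardless of its neighbors."""
    return sum(cells_mah)

# Hypothetical second-life cells with uneven degradation (mAh).
cells = [2000, 1900, 1400, 2100, 1850]

print(usable_capacity_pack_limited(cells))  # 7000
print(usable_capacity_per_cell(cells))      # 9250
```

In this made-up example the per-cell approach recovers roughly 32% more capacity, which is in the ballpark of the “as much as 30%” gain the company claims.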
Sadly, seniors and patients can contract bedsores if left too long. People can even die from bedsores. Furthermore, hospitals can end up in litigation over the issue. What’s needed is a technology that can prevent this, as well as predict which parts of a patient’s body might be worst affected. That’s what Gabriel has come up with: using multi-modal technology to prevent and detect both falls and bedsores. Its passive monitoring technology is for the home or use in hospitals and consists of a resistive sheet with sensors connecting to a system which can understand the pressure on a bed. It has FDA approval, is patent-pending and is already working in some Hawaiian hospitals. It’s so far raised $2M in Angel funding and is now raising further money.
Here’s a taste of Pause Fest:
Six months ago or thereabouts, a group of engineers and developers with backgrounds from the National Security Agency, Google and Amazon Web Services had an idea.
Data is valuable for helping developers and engineers to build new features and better innovate. But that data is often highly sensitive and out of reach, kept under lock and key by red tape and compliance, which can take weeks to get approval. So, the engineers started Gretel, an early-stage startup that aims to help developers safely share and collaborate with sensitive data in real time.
It’s not as niche a problem as you might think, said Alex Watson, one of the co-founders. Developers can face this problem at any company, he said. Often, developers don’t need full access to a bank of user data — they just need a portion or a sample to work with. In many cases, developers could make do with data that merely looks like real user data.
“It starts with making data safe to share,” Watson said. “There’s all these really cool use cases that people have been able to do with data.” He said companies like GitHub, a widely used source code sharing platform, helped to make source code accessible and collaboration easy. “But there’s no GitHub equivalent for data,” he said.
And that’s how Watson and his co-founders, John Myers, Ali Golshan and Laszlo Bock came up with Gretel.
“We’re building right now software that enables developers to automatically check out an anonymized version of the data set,” said Watson. This so-called “synthetic data” is essentially artificial data that looks and works just like regular sensitive user data. Gretel uses machine learning to categorize the data — like names, addresses and other customer identifiers — and classify as many labels to the data as possible. Once that data is labeled, access policies can be applied to it. Then, the platform applies differential privacy — a technique used to anonymize vast amounts of data — so that it’s no longer tied to customer information. “It’s an entirely fake data set that was generated by machine learning,” said Watson.
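Gretel hasn’t disclosed its implementation, but the differential-privacy building block it refers to can be sketched simply. The classic Laplace mechanism releases an aggregate statistic with noise calibrated to how much any single record could shift the answer, so no individual customer is identifiable in the output. The data and bounds below are invented for illustration:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_mean(values, epsilon: float, lower: float, upper: float) -> float:
    """Differentially private mean: clamp each record to [lower, upper],
    then add Laplace noise scaled to the query's sensitivity."""
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    # One record changing can move the mean by at most (upper - lower) / n.
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical customer ages; smaller epsilon = more noise, more privacy.
ages = [34, 29, 41, 52, 38, 27, 45, 33, 60, 31]
print(private_mean(ages, epsilon=1.0, lower=18, upper=90))
```

Producing fully synthetic records, as Gretel describes, layers a generative model on top of this kind of privacy accounting, but the noise-calibration principle is the same.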
It’s a pitch that’s already gathering attention. The startup has raised $3.5 million in seed funding to get the platform off the ground, led by Greylock Partners, and with participation from Moonshots Capital, Village Global and several angel investors.
“At Google, we had to build our own tools to enable our developers to safely access data, because the tools that we needed didn’t exist,” said Sridhar Ramaswamy, a former Google executive, and now a partner at Greylock.
Gretel said it will charge customers based on consumption — a similar structure to how Amazon prices access to its cloud computing services.
“Right now, it’s very heads-down and building,” said Watson. The startup plans to ramp up its engagement with the developer community in the coming weeks, with an eye on making Gretel available in the next six months, he said.
Yellow, the accelerator program launched by Snap in 2018, has selected ten companies to join its latest cohort.
The new batch of startups, coming from across the U.S. and international cities like London, Mexico City, Seoul and Vilnius, is building professional social networks for black professionals and blue collar workers, fashion labels, educational tools in augmented reality, kids entertainment, and an interactive entertainment production company.
The list of new companies includes:
The latest cohort from Snap’s Yellow accelerator
Since launching the platform in 2018, startups from the Snap accelerator have gone on to acquisition (like Stop, Breathe, and Think, which was bought by Meredith Corp.) and to raise bigger rounds of funding (like the voiceover video production toolkit, MuzeTV, and the animation studio Toonstar).
Every company in the Yellow portfolio will receive $150,000, mentorship from industry veterans in and out of Snap, creative office space in Los Angeles and commercial support and partnerships — including Snapchat distribution.
TechCrunch is returning to U.C. Berkeley on March 3 to bring together some of the most influential minds in robotics and artificial intelligence. Each year we strive to bring together a cross-section of big companies and exciting new startups, along with top researchers, VCs and thinkers.
In addition to a main stage that includes the likes of Amazon’s Tye Brady, U.C. Berkeley’s Stuart Russell, Anca Dragan of Waymo, Claire Delaunay of NVIDIA, James Kuffner of Toyota’s TRI-AD, and a surprise interview with Disney Imagineers, we’ll also be offering a more intimate Q&A stage featuring speakers from SoftBank Robotics, Samsung, Sony’s Innovation Fund, Qualcomm, NVIDIA and more.
Alongside a selection of handpicked demos, we’ll also be showcasing the winners from our first-ever pitch-off competition for early-stage robotics companies. You won’t get a better look at exciting new robotics technologies than that. Tickets for the event are still available. We’ll see you in a couple of weeks at Zellerbach Hall.
8:30 AM – 4:00 PM
Registration Open Hours
General Attendees can pick up their badges starting at 8:30 am at Lower Sproul Plaza located in front of Zellerbach Hall. We close registration at 4:00 pm.
10:00 AM – 10:05 AM
10:05 AM – 10:25 AM
The UC Berkeley professor and AI authority argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
10:25 AM – 10:45 AM
Maxar Technologies has been involved with U.S. space efforts for decades, and is about to send its sixth (!) robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian is general manager of robotics at Maxar and will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.
10:45 AM – 11:05 AM
Amazon Robotics’ chief technology officer will discuss how the company is using the latest in robotics and AI to optimize its massive logistics. He’ll also discuss the future of warehouse automation and how humans and robots share a work space.
11:05 AM – 11:15 AM
Live Demo from the Stanford Robotics Club
11:30 AM – 12:00 PM
Join one of the foremost experts in artificial intelligence as he signs copies of his acclaimed new book, Human Compatible.
11:35 AM – 12:05 PM
Can robots help us build structures faster, smarter and cheaper? Built Robotics makes a self-driving excavator. Toggle is developing a new way of fabricating rebar for reinforced concrete, Dusty builds robot-powered tools and longtime robotics pioneer Boston Dynamics has recently joined the construction space. We’ll talk with the founders and experts from these companies to learn how and when robots will become a part of the construction crew.
12:15 PM – 1:00 PM
Join this interactive Q&A session on the breakout stage with three of the top minds in corporate VC.
1:00 PM – 1:25 PM
Select early-stage companies, hand-picked by TechCrunch editors, will take the stage and have five minutes to present their wares.
1:15 PM – 2:00 PM
Your chance to ask questions of some of the most successful robotics founders on our stage.
1:25 PM – 1:50 PM
Leading investors will discuss the rising tide of venture capital funding in robotics and AI. The investors bring a combination of early-stage investing and corporate venture capital expertise, sharing a fondness for the wild world of robotics and AI investing.
1:50 PM – 2:15 PM
As robots become an ever more meaningful part of our lives, interactions with humans are increasingly inevitable. These experts will discuss the broad implications of HRI in the workplace and home.
2:15 PM – 2:40 PM
Autonomous driving is set to be one of the biggest categories for robotics and AI. But there are plenty of roadblocks standing in its way. Experts will discuss how we get there from here.
2:15 PM – 3:00 PM
Join this interactive Q&A session on the breakout stage with some of the greatest investors in robotics and AI
Imagineers from Disney will present state-of-the-art robotics built to populate its theme parks.
3:10 PM – 3:35 PM
This summer’s Tokyo Olympics will be a huge proving ground for Toyota’s TRI-AD. Executive James Kuffner and Max Bajracharya will join us to discuss the department’s plans for assistive robots and self-driving cars.
3:15 PM – 4:00 PM
Join this interactive Q&A session on the breakout stage with some of the greatest engineers in robotics and AI.
3:35 PM – 4:00 PM
In 1920, Karel Čapek coined the term “robot” in a play about mechanical workers organizing a rebellion to defeat their human overlords. One hundred years later, in the context of increasing inequality and xenophobia, the panelists will discuss cultural views of robots in the context of “Robo-Exoticism,” which exaggerates both negative and positive attributes and reinforces old fears, fantasies and stereotypes.
4:00 PM – 4:10 PM
Live Demo from Somatic
4:10 PM – 4:35 PM
Machine learning and AI models can be found in nearly every aspect of society today, but their inner workings are often as much a mystery to their creators as to those who use them. UC Berkeley’s Trevor Darrell, Krishna Gade of Fiddler Labs and Karen Myers from SRI will discuss what we’re doing about it and what still needs to be done.
4:35 PM – 5:00 PM
The benefits of robotics in agriculture are undeniable, yet at the same time only getting started. Lewis Anderson (Traptic) and Sebastien Boyer (FarmWise) will compare notes on the rigors of developing industrial-grade robots that pick crops and weed fields, respectively, and Pyka’s Michael Norcia will discuss taking flight over those fields with an autonomous crop-spraying drone.
5:00 PM – 5:25 PM
Robotics and AI are the future of many or most industries, but the barrier to entry is still difficult to surmount for many startups. Speakers will discuss the challenges of serving robotics startups and companies that require robotics labor, from bootstrapped startups to large-scale enterprises.
5:30 PM – 7:30 PM
Unofficial After Party, (Cash Bar Only)
Come hang out at the unofficial After Party at Tap Haus, 2518 Durant Ave, Ste C, Berkeley
We only have so much space in Zellerbach Hall and tickets are selling out fast. Grab your General Admission Ticket right now for $350 and save 50 bucks as prices go up at the door.
Student tickets are just $50 and can be purchased here. Student tickets are limited.
Startup Exhibitor Packages are sold out!
With the days of desert-themed releases officially behind it, Google today announced the first developer preview of Android 11, which is now available as system images for Google’s own Pixel devices, starting with the Pixel 2.
As of now, there is no way to install the updates over the air. That’s usually something the company makes available at a later stage. These first releases aren’t meant for regular users anyway. Instead, they are a way for developers to test their applications and get a head start on making use of the latest features in the operating system.
“With Android 11 we’re keeping our focus on helping users take advantage of the latest innovations, while continuing to keep privacy and security a top priority,” writes Google VP of Engineering Dave Burke. “We’ve added multiple new features to help users manage access to sensitive data and files, and we’ve hardened critical areas of the platform to keep the OS resilient and secure. For developers, Android 11 has a ton of new capabilities for your apps, like enhancements for foldables and 5G, call-screening APIs, new media and camera capabilities, machine learning, and more.”
Unlike some of Google’s previous early previews, this first version of Android 11 does actually bring quite a few new features to the table. As Burke noted, there are some obligatory 5G features like a new bandwidth estimate API, as well as a new API that checks whether a connection is unmetered so apps can, for example, play higher-resolution video.
With Android 11, Google is also expanding its Project Mainline lineup of updatable modules from 10 to 22. With this, Google is able to update critical parts of the operating system without having to rely on the device manufacturers to release a full OS update. Users simply install these updates through the Google Play infrastructure.
Users will be happy to see that Android 11 will feature native support for waterfall screens that cover a device’s edges, using a new API that helps developers manage interactions near those edges.
Also new are some features that developers can use to handle conversational experiences, including a dedicated conversation section in the notification shade, as well as a new chat bubbles API and the ability to insert images into replies you want to send from the notifications pane.
Unsurprisingly, Google is adding a number of new privacy and security features to Android 11, too. These include one-time permissions for sensitive types of data, as well as updates to how the OS handles data on external storage, which it first previewed last year.
As for security, Google is expanding its support for biometrics and adding different levels of granularity (strong, weak and device credential), in addition to the usual hardening of the platform you would expect from a new release.
There are plenty of other smaller updates as well, including some that are specifically meant to make running machine learning applications easier, but Google specifically highlights the fact that Android 11 will also bring a couple of new features to the OS that will help IT manage corporate devices with enhanced work profiles.
This first developer preview of Android 11 is launching about a month earlier than previous releases, so Google is giving itself a bit more time to get the OS ready for a wider launch. Currently, the release schedule calls for monthly developer preview releases until April, followed by three betas and a final release in Q3 2020.
As cybercrime continues to evolve and expand, a startup that is building a business focused on endpoint security has raised a big round of funding. SentinelOne — which provides a machine learning-based solution for monitoring and securing laptops, phones, containerised applications and the many other devices and services connected to a network — has picked up $200 million, a Series E round of funding that it says catapults its valuation to $1.1 billion.
The funding is notable not just for its size but for its velocity: it comes just eight months after SentinelOne announced a Series D of $120 million, which at the time valued the company around $500 million. In other words, the company has more than doubled its valuation in less than a year — a sign of the cybersecurity times.
This latest round is being led by Insight Partners, with Tiger Global Management, Qualcomm Ventures LLC, Vista Public Strategies of Vista Equity Partners, Third Point Ventures, and other undisclosed previous investors all participating.
Tomer Weingarten, CEO and co-founder of the company, said in an interview that while this round gives SentinelOne the flexibility to remain in “startup” mode (privately funded) for some time — especially since it came so quickly on the heels of the previous large round — an IPO “would be the next logical step” for the company. “But we’re not in any rush,” he added. “We have one to two years of growth left as a private company.”
While cybercrime is proving to be a very expensive business (or very lucrative, I guess, depending on which side of the equation you sit on), it has also meant that the market for cybersecurity has significantly expanded.
Endpoint security, the area where SentinelOne concentrates its efforts, last year was estimated to be around an $8 billion market, and analysts project that it could be worth as much as $18.4 billion by 2024.
Driving it is the single biggest trend that has changed the world of work in the last decade. Everyone — whether a road warrior or a desk-based administrator or strategist, a contractor or full-time employee, a front-line sales assistant or back-end engineer or executive — is now connected to the company network, often with more than one device. And that’s before you consider the various other “endpoints” that might be connected to a network, including machines, containers and more. The result is a spaghetti of a problem. One survey from LogMeIn, disconcertingly, even found that some 30% of IT managers couldn’t identify just how many endpoints they managed.
“The proliferation of devices and the expanding network are the biggest issues today,” said Weingarten. “The landscape is expanding and it is getting very hard to monitor not just what your network looks like but what your attackers are looking for.”
This is where an AI-based solution like SentinelOne’s comes into play. The company has roots in the Israeli cyberintelligence community but is based out of Mountain View, and its platform is built around the idea of working automatically not just to detect endpoints and their vulnerabilities, but to apply behavioral models, and various modes of protection, detection and response in one go — in a product that it calls its Singularity Platform that works across the entire edge of the network.
“We are seeing more automated and real-time attacks that themselves are using more machine learning,” Weingarten said. “That translates to the fact that you need defense that moves in real time with as much automation as possible.”
Nonetheless, its product has seen strong uptake to date. It currently has some 3,500 customers, including three of the biggest companies in the world, and “hundreds” from the global 2,000 enterprises, with what it says has been 113% year-on-year new bookings growth, revenue growth of 104% year-on-year, and 150% growth year-on-year in transactions over $2 million. It has 500 employees today and plans to hire up to 700 by the end of this year.
One of the key differentiators is the focus on using AI, and using it at scale to help mitigate an increasingly complex threat landscape, to take endpoint security to the next level.
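SentinelOne hasn’t disclosed its models, but the general idea behind behavioral endpoint detection is to baseline what normal activity looks like on a device and flag statistical outliers in real time, rather than matching known malware signatures. A minimal sketch of that idea, using a made-up metric (files touched per minute by a process) and a simple z-score threshold, might look like this:

```python
import math

def baseline(samples):
    """Compute the mean and standard deviation of a behavioral metric
    observed during normal operation."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean, math.sqrt(var)

def is_anomalous(value, mean, std, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    away from the learned baseline."""
    if std == 0:
        return value != mean
    return abs(value - mean) / std > threshold

# Hypothetical baseline: files touched per minute by a healthy process.
normal_activity = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13]
mean, std = baseline(normal_activity)

print(is_anomalous(14, mean, std))   # typical activity
print(is_anomalous(900, mean, std))  # ransomware-like burst of file writes
```

Production systems replace the single metric and fixed threshold with learned models over many behavioral signals, but the detect-deviation-from-baseline principle is the same.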
“Competition in the endpoint market has cleared with a select few exhibiting the necessary vision and technology to flourish in an increasingly volatile threat landscape,” said Teddie Wardi, MD of Insight Partners, in a statement. “As evidenced by our ongoing financial commitment to SentinelOne along with the resources of Insight Onsite, our business strategy and ScaleUp division, we are confident that SentinelOne has an enormous opportunity to be a market leader in the cybersecurity space.”
Weingarten said that SentinelOne “gets approached every year” to be acquired, although he didn’t name any names. Nevertheless, that also points to the bigger consolidation trend that will be interesting to watch as the company grows. SentinelOne has never made an acquisition to date, but it’s hard to ignore that, as the company expands its products and features, it might tap into the wider market to bring other kinds of technology into its stack.
“There are definitely a lot of security companies out there,” Weingarten noted. “Those that serve a very specific market are the targets for consolidation.”
European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission President Ursula von der Leyen has described as ‘A Europe fit for the Digital Age‘.
It could also be summed up as a “scramble for AI,” with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.
Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.
Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”
The top-line proposals are:
The full data strategy proposal can be found here.
While the Commission’s white paper on AI “excellence and trust” is here.
Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.
A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.
At a press conference following von der Leyen’s statement, Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.
The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.
The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.
The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper
Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding that: “The point obviously is to create trust, rather than fear.”
She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy with additional proposals still to be set out.
“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.
“Second that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”
Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.
“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.
Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.
“More than ever a green transition and digital transition goes hand in hand.”
On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.
“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.
“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”
“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”
On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.
The first piece includes a push by the Commission to stimulate funding, including in R&D, and to support research, such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium-sized businesses to be “invited in”.
On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.
To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.
If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.
The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.
Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.
If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.
In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.
“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”
“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”
She too stressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.
She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.
“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”
“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”
The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.
Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.
Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.
“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space“. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.
The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.
The Commission views GDPR as a major success story by virtue of how it has exported conversations about EU digital standards to a global audience.
But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission, where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.
The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.
Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.
There has also been controversy fast following such developments, including around issues such as proportionality and the question of consent to legally process people’s data — both under GDPR and in light of EU fundamental privacy rights, as well as those set out in the European Convention on Human Rights.
Only this month a Dutch court ordered the state to cease use of a black-box algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.
The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.
Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)
The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.
But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.
It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.
Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI“.
Vestager, meanwhile, has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and of Europe having its own flavor of AI that must “serve humans” and have “a purpose”.
“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”
At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.
Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or for automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.
“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.
The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU
There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.
That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.
During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree an acceptable way forward “we will have to regulate” — saying it’s not up for European society to adapt to the platforms but for them to adapt to the EU.
“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”
Internal market commissioner, Thierry Breton
Adobe’s Photoshop celebrates its 30th birthday today. Over that time, the app has pretty much become synonymous with photo editing and there will surely be plenty of retrospectives. But to look ahead, Adobe also today announced a number of updates to both the desktop and mobile Photoshop experiences.
The marquee feature here is probably the addition of the Object Selection tool in Photoshop on the iPad. It’s no secret that the original iPad app wasn’t exactly a hit with users as it lacked a number of features Photoshop users wanted to see on mobile. Since then, the company made a few changes to the app and explained some of its decisions in greater detail. Today, Adobe notes, 50 percent of reviews give the app five stars and the app has been downloaded more than 1 million times since November.
With the Object Selection tool, which it first announced for the desktop version three months ago, Adobe is now bringing a new selection tool to Photoshop that is specifically meant to allow creatives to select and manipulate one or multiple objects in complex scenes. Using the company’s Sensei AI technology and machine learning, it gives users a lot of control over the selection process, even if you only draw a crude outline around the area you are trying to select.
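Adobe hasn’t published how Sensei’s segmentation works under the hood, but the basic idea — turning a crude user outline into a refined pixel mask — can be illustrated with a deliberately simplified sketch. Everything here (the function name, the intensity-matching heuristic) is hypothetical; the production feature uses trained models rather than a similarity threshold:

```python
import numpy as np

def refine_selection(image, rough_box, tolerance=30):
    """Toy stand-in for learned object selection: given a crude
    rectangular outline, keep only the pixels inside it whose intensity
    is close to that of the box's center pixel (assumed to sit on the
    object). A real model segments by learned features instead."""
    y0, y1, x0, x1 = rough_box
    dominant = image[(y0 + y1) // 2, (x0 + x1) // 2]
    mask = np.zeros(image.shape, dtype=bool)
    mask[y0:y1, x0:x1] = np.abs(image[y0:y1, x0:x1] - dominant) <= tolerance
    return mask

# A bright 4x4 "object" on a dark background, outlined loosely by the user.
img = np.zeros((10, 10))
img[3:7, 3:7] = 200.0
selection = refine_selection(img, rough_box=(2, 8, 2, 8))
# The mask tightens onto the object's pixels, not the whole crude outline.
```

The point of the sketch is only the interaction pattern: the user supplies a sloppy boundary, and the system shrinks it to the object.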
Also new on the iPad are additional controls for typesetting. For now, this means tracking, leading and scaling, as well as formatting options like all caps, small caps, superscript and subscript.
On the desktop, Adobe is bringing improvements to the content-aware fill workspace, as well as a much-improved lens blur feature that mimics the bokeh effect of taking an image with a shallow depth of field. Previously, the lens blur feature ran on the CPU and looked somewhat unrealistic, with sharp edges around out-of-focus foreground objects. Now, the algorithm runs on the GPU, producing far softer results and giving foreground objects a far more realistic look.
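The lens blur described above comes down to kernel shape: a camera aperture produces a hard-edged disc, not a Gaussian falloff, and that disc is what gives bokeh its character. Here is a naive CPU sketch of a disc-kernel blur — illustrative only, since Adobe’s GPU implementation is also depth-aware, which this toy version ignores:

```python
import numpy as np

def disc_kernel(radius):
    """Circular averaging kernel approximating a camera aperture; the
    hard disc edge (rather than a Gaussian) is what produces bokeh."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def lens_blur(image, radius):
    """Naive CPU convolution of a grayscale image with a disc kernel."""
    k = disc_kernel(radius)
    pad = radius
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1] * k).sum()
    return out

# A single bright point spreads into a disc -- the classic bokeh highlight.
img = np.zeros((9, 9))
img[4, 4] = 1.0
out = lens_blur(img, 2)
```

Doubly looping over pixels on the CPU is exactly why this operation benefits from a GPU, where every output pixel can be computed in parallel.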
As for the improved content-aware fill workspace, Adobe notes that you can now make multiple selections and apply multiple fills at the same time. This isn’t exactly a revolutionary new feature, but it’s a nice workflow improvement for those who often use this tool.
Cape Town-based startup Zindi has registered 10,000 data scientists on its platform, which uses AI and machine learning to crowdsolve complex problems in Africa.
Founded in 2018, the early-stage venture allows companies, NGOs or government institutions to host online competitions around data-oriented challenges.
Zindi opens the contests to the African data scientists on its site, who can join a competition, submit solution sets, move up a leaderboard and win a cash prize payout.
The highest purse so far has been $12,000, according to Zindi co-founder Celina Lee. Competition hosts receive the results, which they can use to create new products or integrate into their existing systems and platforms.
It’s free for data scientists to create a profile on the site, but those who fund the competitions pay Zindi a fee, which is how the startup generates revenue.
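The competition mechanics described above — repeat submissions, a leaderboard ranked by each entrant’s best score — can be sketched in a few lines. Zindi’s actual scoring metrics and tie-breaking rules aren’t detailed here, so the ranking logic and the names below are purely illustrative:

```python
def leaderboard(submissions):
    """submissions: iterable of (scientist, score) pairs, higher is better.
    Only each scientist's best submission counts toward their rank."""
    best = {}
    for scientist, score in submissions:
        if scientist not in best or score > best[scientist]:
            best[scientist] = score
    # Rank entrants by their best score, descending.
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical entrants; "ama" improves her score on a second submission.
ranks = leaderboard([
    ("ama", 0.81), ("kofi", 0.77), ("ama", 0.79), ("zine", 0.85),
])
```

On this toy input, "zine" leads, "ama" places on her better entry, and "kofi" trails — the same climb-the-leaderboard loop the article describes.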
The South African National Roads Agency sponsored a challenge in 2019 to reduce traffic fatalities in South Africa. The stated objective was “to build a machine learning model that accurately predicts when and where the next road incident will occur in Cape Town… to enable South African authorities… to put measures in place that will… ensure safety.”
Attaining 10,000 registered data scientists represents a more than 100 percent increase for Zindi since August 2019, when TechCrunch last spoke to Lee.
The startup — which is in the process of raising a Series A funding round — plans to connect its larger roster to several new platform initiatives. Zindi will launch a university-wide hackathon, called UmojoHack Africa, across 10 countries in March.
“We’re also working on a section on our site that is specifically designed to run hackathons…something that organizations and universities could use to upskill their students or teams specifically,” Lee said.
Lee (who’s originally from San Francisco) co-founded Zindi with South African Megan Yates and Ghanaian Ekow Duker. They lead a team in the company’s Cape Town office.
For Lee, the startup is a merger of two facets of her experience.
“It all just came together. I have this math-y tech background, and I was working in non-profits and development, but I’d always been trying to join the two worlds,” she said.
That happened with Zindi, which is for-profit — though roughly 80% of the startup’s competitions have some social impact angle, according to Lee.
“In an African context, solving problems for for-profit companies can definitely have social impact as well,” she said.
With most of the continent’s VC focused on fintech or e-commerce startups, Zindi joins a unique group of ventures — such as Andela and Gebeya — that are building tech talent in Africa’s data science and software engineering space.
If Zindi can convene data-scientists to solve problems for companies and governments across the entire continent that could open up a vast addressable market.
It could also see the startup become an alternative to more expensive consulting firms operating in Africa’s large economies, such as South Africa, Nigeria and Kenya.
Ben Tarnoff is a columnist at The Guardian, a co-founder of tech ethics magazine Logic and arguably one of the world’s top experts on the intersection of tech and socialism.
But what I think you really need to know by way of introduction to the interview below is that reading Tarnoff and his wife Moira Weigel might be the closest you can get today to following the young Jean-Paul Sartre and Simone de Beauvoir in real time.
In September, Tarnoff published a Guardian piece, “To decarbonize we must decomputerize,” in which he argued for a modern Luddism. I’ve casually called myself a Luddite online for many years now:
*Sigh* how have I still tweeted fewer than 1300 times after 4 years on this thing? #firstworldproblemsforluddites
— Greg Epstein (@gregmepstein) May 17, 2013
But I wouldn’t previously have considered writing much about it online, because who in this orbit could possibly identify? Turns out Tarnoff, a leading tech world advocate for Bernie Sanders, does. Which made me wonder: Could Luddism ever become the next trend in Silicon Valley culture?
Of course, I then reviewed exactly who the Luddites actually were and thought, “aha.” Maybe I’ve finally found the topic and the interview that really truly will get me fired from my role as TechCrunch’s ethicist-in-residence; talking to a contemporary tech socialist about the people who famously destroyed machinery because they didn’t feel that it was ethical, humane or in service of their well-being doesn’t necessarily scream “TechCrunch,” does it?
So I began my interview by praising not only his piece on Luddism but several other related pieces he’s written and by asking (with tongue only semi-in-cheek) to please confirm that at least it’s a peaceful Luddism for which he is calling.
Ben Tarnoff (Photo by Richard McBlane/Getty Images for SXSW)
Tarnoff: Thanks for reading the pieces. I really appreciate it.
Riot Ventures, the Los Angeles-based, early-stage and deep technology investment firm, is going out to market to raise a $75 million second fund to finance the development of startups in LA and beyond, according to fundraising documents viewed by TechCrunch.
The firm has largely flown under the radar, but it has been investing in startups applying innovations in automation, artificial intelligence, computer vision, computational biology, material sciences and robotics to industrial products and processes for the past two years.
Its first fund was a modest $10 million vehicle that the firm’s co-founders, Stephen Marcus and Will Coffield, raised to test the thesis their fledgling fund was exploring. Chiefly, they thought that robotics and machine learning were going to transform everything from aerospace to industrial manufacturing and retail, and they saw Los Angeles as a unique location from which to deploy capital.
Since the initial fund launched in 2017, the companies in Riot’s portfolio — including a number of later-stage special purpose investments made in companies like the point-of-sale tablet manufacturer Toast; the metal 3D printing equipment manufacturer Desktop Metal; and Shield AI, a stealthy drone company that works in the defense industry — are now worth roughly $16 billion.
In all, Riot has invested around $60 million through its direct investments and special purpose vehicles. But it’s not the capital that sets the firm apart, according to the pitch deck viewed by TechCrunch.
Marcus has a long background in angel investing and company creation. He’s a six-time serial entrepreneur who’s sold telecom companies to acquirers like American Tower, Sprint and National Grid. Meanwhile, Coffield has spent the past several years building out a network in Los Angeles and eight years in the venture capital industry.
However, the firm places its emphasis on its newest partner, Jenna Bryant, a recruiter who has spent the past several years building out teams for some of the biggest names in the Los Angeles technology and entertainment industries, including Walt Disney Co., Oculus, Snap, Tinder and others.
“We actively recruit for our portfolio companies which enables us to meet a large swath of highly technical people,” the firm writes in its pitch deck. “We use this pool to win deals, make our companies more valuable, and find future hard tech founders. This is a core asset and function for our firm led by our Partner Jenna Bryant.”
Just as important as its recruitment practice is its position in Los Angeles, which is emerging as a hotbed for talent in robotics, rocketry, drones and defense. That’s borne out by investments in companies like Shield AI and Elementary Robotics — two companies in the Riot portfolio based in Southern California.
A report into the use of artificial intelligence by the U.K.’s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens’ lives.
Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer-funded healthcare — with health minister Matt Hancock setting out a tech-fueled vision of “preventative, predictive and personalised care” in 2018, calling for a root and branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of “healthtech” apps and services.
Policing is another area where AI is being accelerated into U.K. public service delivery, with a number of police forces trialing facial recognition technology — and London’s Met Police switching over to a live deployment of the AI technology just last month.
However, the rush by cash-strapped public services to tap AI “efficiencies” risks glossing over a range of ethical concerns about the design and implementation of such automated systems: fears about embedding bias and discrimination into service delivery and scaling harmful outcomes; questions of consent around access to the data sets being used to build AI models; and human agency over automated outcomes, to name a few of the associated concerns — all of which require transparency into AIs if there’s to be accountability over automated outcomes.
The role of commercial companies in providing AI services to the public sector also raises additional ethical and legal questions.
Only last week, a court in the Netherlands highlighted the risks for governments of rushing to bake AI into legislation after it ruled an algorithmic risk-scoring system implemented by the Dutch government to assess the likelihood that social security claimants will commit benefits or tax fraud breached their human rights.
The court objected to a lack of transparency about how the system functions, as well as the associated lack of controllability — ordering an immediate halt to its use.
The U.K. parliamentary committee that reviews standards in public life has today sounded a similar warning — publishing a series of recommendations for public-sector use of AI and warning that the technology challenges three key principles of service delivery: openness, accountability and objectivity.
“Under the principle of openness, a current lack of information about government use of AI risks undermining transparency,” it writes in an executive summary.
“Under the principle of accountability, there are three risks: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI. Under the principle of objectivity, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice.”
“This review found that the government is failing on openness,” it goes on, asserting that: “Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”
In 2018, the UN’s special rapporteur on extreme poverty and human rights raised concerns about the U.K.’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale — warning then that the impact of a digital welfare state on vulnerable people would be “immense,” and calling for stronger laws and enforcement of a rights-based legal framework to ensure the use of technologies like AI for public service provision does not end up harming people.
Per the committee’s assessment, it is “too early to judge if public sector bodies are successfully upholding accountability.”
Parliamentarians also suggest that “fears over ‘black box’ AI… may be overstated” — and rather dub “explainable AI” a “realistic goal for the public sector.”
On objectivity, they write that data bias is “an issue of serious concern, and further work is needed on measuring and mitigating the impact of bias.”
The use of AI in the U.K. public sector remains limited at this stage, according to the committee’s review, with healthcare and policing currently having the most developed AI programmes — where the tech is being used to identify eye disease and predict reoffending rates, for example.
“Most examples the Committee saw of AI in the public sector were still under development or at a proof-of-concept stage,” the committee writes, further noting that the Judiciary, the Department for Transport and the Home Office are “examining how AI can increase efficiency in service delivery.”
It also heard evidence that local government is working on incorporating AI systems in areas such as education, welfare and social care — noting the example of Hampshire County Council trialing the use of Amazon Echo smart speakers in the homes of adults receiving social care as a tool to bridge the gap between visits from professional carers, and pointing to a Guardian article which reported that one-third of U.K. councils use algorithmic systems to make welfare decisions.
But the committee suggests there are still “significant” obstacles to what they describe as “widespread and successful” adoption of AI systems by the U.K. public sector.
“Public policy experts frequently told this review that access to the right quantity of clean, good-quality data is limited, and that trial systems are not yet ready to be put into operation,” it writes. “It is our impression that many public bodies are still focusing on early-stage digitalisation of services, rather than more ambitious AI projects.”
The report also suggests that the lack of a clear standards framework means many organisations may not feel confident in deploying AI yet.
“While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users,” it suggests.
Among 15 recommendations set out in the report is a call for a clear legal basis to be articulated for the use of AI by the public sector. “All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery,” the committee writes.
Another recommendation is for clarity over which ethical principles and guidance apply to public sector use of AI — with the committee noting there are three sets of principles that could apply to the public sector, which is generating confusion.
“The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use,” it recommends.
It also wants the Equality and Human Rights Commission to develop guidance on data bias and anti-discrimination to ensure public sector bodies’ use of AI complies with the U.K. Equality Act 2010.
The committee is not recommending a new regulator should be created to oversee AI — but does call on existing oversight bodies to act swiftly to keep up with the pace of change being driven by automation.
It also advocates for a regulatory assurance body to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI — supporting the government’s intention for the Centre for Data Ethics and Innovation (CDEI), which was announced in 2017, to perform this role. (A recent report by the CDEI recommended tighter controls on how platform giants can use ad targeting and content personalisation.)
Another recommendation is around procurement, with the committee urging the government to use its purchasing power to set requirements that “ensure that private companies developing AI solutions for the public sector appropriately address public standards.”
“This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements,” it suggests.
Responding to the report in a statement, shadow digital minister Chi Onwurah MP accused the government of “driving blind, with no control over who is in the AI driving seat.”
“This serious report sadly confirms what we know to be the case — that the Conservative Government is failing on openness and transparency when it comes to the use of AI in the public sector,” she said. “The Government is driving blind, with no control over who is in the AI driving seat. The Government urgently needs to get a grip before the potential for unintended consequences gets out of control.
“Last year, I argued in parliament that Government should not accept further AI algorithms in decision making processes without introducing further regulation. I will continue to push the Government to go further in sharing information on how AI is currently being used at all levels of Government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It’s time for action.”
Deepnote, a startup that offers data scientists an IDE-like collaborative online experience for building their machine learning models, today announced that it has raised a $3.8 million seed round led by Index Ventures and Accel, with participation from YC and Credo Ventures, as well as a number of angel investors, including OpenAI’s Greg Brockman, Figma’s Dylan Field, Elad Gil, Naval Ravikant, Daniel Gross and Lachy Groom.
Built around standard Jupyter notebooks, Deepnote wants to provide data scientists with a cloud-based platform that allows them to focus on their work by abstracting away all of the infrastructure. So instead of having to spend a few hours setting up their environment, a student in a data science class, for example, can simply come to Deepnote and get started.
In its current form, Deepnote doesn’t charge for its service, despite the fact that it allows its users to work with large data sets and train their models on cloud-based machines with attached GPUs.
As Deepnote co-founder and CEO (and ex-Mozilla engineer) Jakub Jurových told me, though, he believes that the most important feature of the service is its ability to allow users to collaborate. “Over the past couple of years, I started to do a lot of data science work and helped a couple of companies scale up their data science teams,” he said. “And again and again, we run into the same issue: people have real trouble collaborating.”
Jurových argues that while it’s easy enough to keep two or three data scientists in sync, once you have a bigger team, you quickly run into issues because the current set of tools was never meant to do this kind of work. “If I’m a data scientist by training, I spend most of my time doing math and stats,” he said. “But then, expecting me to connect to an EC2 cluster and spin a bunch of GPU instances for parallel training is just not something I’m looking for.”
When it started this project in early 2019, the Deepnote team decided to put Jupyter notebooks at the core of the user experience. That is, after all, what most data scientists are already familiar with. It then built the collaborative features around that, as well as tools for pulling in data from third-party services and scheduling tools for kicking off jobs inside of the platform at regular intervals.
Deepnote is already quite popular with students. Jurových also noted that a lot of teachers already use Deepnote to publish interactive exercises for their students. Over time, the company obviously wants to bring more businesses on board, but for the time being, it is mostly focused on building its product. Given its collaborative nature, the team also believes that the service will naturally grow through word of mouth as people invite others to collaborate on products.
“Data science is overdue for the benefits of tools that are cloud and collaboration native,” said Accel partner Vas Natarajan. “This is a fast-growing, dynamic market that’s demanding a successor to incumbent tools. Jakub and his team are building powerful software to modernize data science workflow for teams.”
The new funding will mostly go into hiring and building out the product, with a focus on the overall user experience. Even within the data science community, there are a variety of use cases, after all, and an NLP engineer has different needs from a computer vision engineer.
If the two-year-old healthcare startup Verana Health has its way, it could become the Google of physician-generated healthcare data.
The company has raised $100 million from GV (one of the corporate investment arms of Alphabet, the parent company of Google), Bain Capital Ventures, Casdin Capital and Define Ventures and counts the famous life sciences investor, Brook Byers, as the chairman of the company’s board.
The company offers products like Verana Practice Insights, which provides aggregated views of practice trends across the U.S., and it also has a service called “Trial Connect,” which gives physicians the ability to find patients in their practices who may be suitable for clinical trials.
Verana has also built up the Axon Registry, which tracks the impact of treatments over time for conditions like multiple sclerosis, migraines, and epilepsy. The company points to the registry as an example of how the data collected can provide value for the entire healthcare ecosystem.
Verana has already inked data collection deals with the American Academy of Ophthalmology and the American Academy of Neurology to create large pools of de-identified patient data that can be used for drug discovery, population health analysis and medical research. But the company’s story actually begins nearly twenty years ago, when specialty medical associations started building clinical data sets to share information among medical practitioners and standardize reporting required by the federal government.
More recently, as Verana explains, medical associations realized that there was a lot of quality data locked away in those records. And since the medical communities lacked the wherewithal and technical expertise to digitize and analyze those records themselves, two years ago they decided to outsource those services to Verana, according to a blogpost from the company.
“Our society partners have entrusted us with their data to partner with them to advance the quality of patient care and to accelerate the adoption of evidence into practice,” the company states. “Through these partnerships, Verana supports the full operating costs for these registries, which enable physicians to track performance against federal quality measures and submit information for quality reporting at no expense to the physician practices and medical specialty organizations.”
It seems that Verana has made the same pitch to physicians that Google has made to consumers: give us all of your information, and we’ll organize it and manage it for you (as well as collect it to monetize in other ways that physicians have no control over).
Image via Getty Images / Ja_inter
Alongside its new financing, the San Francisco-based company also announced the acquisition of Knoxville, Tenn.-based PYA Analytics, a company which has designed data analytics software and services for Medicare and Medicaid.
“Verana Health is building the team and technology to unlock deep clinical insights that support the development of new treatments while increasing our understanding of how these treatments can benefit patients more broadly,” said Dr. Krishna Yeshwant, General Partner at GV. “Under the leadership of its strong management team, Verana continues to redefine how we approach medical research.”
While Verana is currently focused on ophthalmic and neurologic diseases, the company intends to expand into additional therapeutic categories over the next year while integrating imaging, genomic, and claims data sources into its data pools.
“Verana is assembling the most comprehensive datasets in medicine across multiple disease types with the goal of accelerating medical research for patients with ophthalmic and neurologic conditions,” said Miki Kapoor, the chief executive officer of Verana Health, in a statement. “The financing and the addition of PYAA enable us to enrich these large clinical databases, creating a longitudinal view of the complete patient journey to inform research and patient care.”
However, the company’s approach seems to disregard the role of the patient in the healthcare process. New technology companies consistently lean on de-identification as a safeguard; however, evidence tells us that these practices aren’t as secure as consumers would want when it comes to sensitive health information.
In June 2019, the University of Chicago Medical Center and Google were sued for allegedly violating HIPAA regulations by sharing patient records that weren’t de-identified properly. Google used the research for predictive data analysis based on massive population data. Google and the medical center have both filed motions to dismiss the lawsuit.
But even the U.S. Department of Health and Human Services warned that there’s a risk that de-identified data could be linked back to the corresponding patient. Indeed, new machine learning capabilities developed by companies like Google have already been used to re-identify anonymized patient data, according to a study published in the Journal of the American Medical Association.
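The re-identification risk that study describes usually takes the form of a linkage attack: joining “de-identified” records to an identified dataset on shared quasi-identifiers like ZIP code, birth year and sex. A toy Python sketch (all records fabricated) shows how few attributes a unique match requires:

```python
# Toy linkage attack: "de-identified" health records still carry
# quasi-identifiers that can be joined against a public, identified
# dataset (e.g. a voter roll). All records below are fabricated.
deidentified = [
    {"zip": "94103", "birth_year": 1984, "sex": "F", "diagnosis": "migraine"},
    {"zip": "60615", "birth_year": 1971, "sex": "M", "diagnosis": "epilepsy"},
]
public_registry = [
    {"name": "Alice Example", "zip": "94103", "birth_year": 1984, "sex": "F"},
    {"name": "Bob Example", "zip": "60615", "birth_year": 1971, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(records, registry):
    """Match each de-identified record to a named person whenever the
    quasi-identifier combination is unique in the registry."""
    matches = {}
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        hits = [p for p in registry
                if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(hits) == 1:  # a unique hit is a re-identification
            matches[hits[0]["name"]] = rec["diagnosis"]
    return matches

print(reidentify(deidentified, public_registry))
```

When every quasi-identifier combination is unique, every record is re-identified; the machine-learning techniques in the JAMA study extend the same idea to far noisier signals.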
Google today announced that Dataset Search, a service that lets you search for close to 25 million different publicly available datasets, is now out of beta. Dataset Search first launched in September 2018.
Researchers can use these datasets, which range from pretty small ones that tell you how many cats there were in the Netherlands from 2010 to 2018 to large annotated audio and image sets, to check their hypotheses or train and test their machine learning models. The tool currently indexes about 6 million tables.
With this release, Dataset Search is getting a mobile version and Google is also adding a few new features to Dataset Search. The first of these is a new filter that lets you choose which type of dataset you want to see (tables, images, text, etc.), which makes it easier to find the right data you’re looking for. In addition, the company has added more information about the datasets and the organizations that publish them.
A lot of the data in the search index comes from government agencies. In total, Google says, there are about 2 million U.S. government datasets in the index right now. But you’ll also regularly see datasets from Google’s own Kaggle, as well as from a number of other public and private organizations that make their data publicly available.
As Google notes, anybody who owns an interesting dataset can make it available to be indexed by using a standard schema.org markup to describe the data in more detail.
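As a rough illustration of that markup (the dataset, description and URL below are made up; the field names follow the schema.org Dataset vocabulary), a publisher embeds JSON-LD like the following in the dataset’s web page so the crawler can index it:

```python
import json

# Minimal schema.org "Dataset" description. The values are illustrative;
# publishers place this JSON-LD inside a <script type="application/ld+json">
# tag on the page that hosts the dataset.
dataset_markup = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Cats in the Netherlands, 2010-2018",
    "description": "Annual estimates of the domestic cat population.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/cats.csv",  # hypothetical URL
    },
}

print(json.dumps(dataset_markup, indent=2))
```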
It’s one thing to develop a working machine learning model, it’s another to put it to work in an application. Cortex Labs is an early stage startup with some open source tooling designed to help data scientists take that last step.
The company’s founders were students at Berkeley when they observed that one of the problems around creating machine learning models was finding a way to deploy them. While there was a lot of open source tooling available, data scientists are not experts in infrastructure.
CEO Omer Spillinger says that infrastructure was something the four members of the founding team — himself, CTO David Eliahu, head of engineering Vishal Bollu and head of growth Caleb Kaiser — understood well.
What the four founders did was take a set of open source tools and combine them with AWS services to provide a way to deploy models more easily. “We take open source tools like TensorFlow, Kubernetes and Docker and we combine them with AWS services like CloudWatch, EKS (Amazon’s flavor of Kubernetes) and S3 to basically give one API for developers to deploy their models,” Spillinger explained.
He says that a data scientist starts by uploading an exported model file to S3 cloud storage. “Then we pull it, containerize it and deploy it on Kubernetes behind the scenes. We automatically scale the workload and automatically switch you to GPUs if it’s compute intensive. We stream logs and expose [the model] to the web. We help you manage security around that, stuff like that,” he said.
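Cortex’s actual implementation isn’t shown here, but the “containerize it and deploy it on Kubernetes” step can be sketched as generating a Kubernetes Deployment manifest for a model-serving container, with a GPU resource limit attached when the workload is compute intensive (the names, image and port below are hypothetical):

```python
def model_deployment_manifest(name: str, image: str, replicas: int = 1,
                              use_gpu: bool = False) -> dict:
    """Build a Kubernetes Deployment manifest for a model-serving container.

    Mirrors the workflow described above: the exported model is baked into a
    container image, scheduled on the cluster, and optionally granted a GPU.
    """
    resources = {"limits": {"nvidia.com/gpu": 1}} if use_gpu else {}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,  # container wrapping the exported model
                        "ports": [{"containerPort": 8080}],
                        "resources": resources,
                    }]
                },
            },
        },
    }

# Hypothetical model served on two replicas with a GPU attached.
manifest = model_deployment_manifest("sentiment-model",
                                     "example.com/sentiment:v1",
                                     replicas=2, use_gpu=True)
```

In practice a tool like this would serialize the manifest to YAML and apply it to the cluster; the point is that none of this plumbing is something a data scientist should have to write by hand.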
While he acknowledges the approach is not unlike Amazon SageMaker, the company’s long-term goal is to support all of the major cloud platforms. SageMaker, of course, only works on the Amazon cloud, while Cortex will eventually work on any cloud. In fact, Spillinger says that the biggest feature request they’ve gotten to this point is support for Google Cloud. He says that and support for Microsoft Azure are on the road map.
The Cortex founders have been keeping their heads above water while they wait for a commercial product with the help of an $888,888 seed round from Engineering Capital in 2018. If you’re wondering about that oddly specific number, it’s partly an inside joke — Spillinger’s birthday is August 8th — and partly a number arrived at to make the valuation work, he said.
For now, the company is offering the open source tools, and building a community of developers and data scientists. Eventually, it wants to monetize by building a cloud service for companies who don’t want to manage clusters — but that is down the road, Spillinger said.
Lately, the venture community’s relationship with advertising tech has been a rocky one.
Advertising is no longer the venture oasis it was in the past, with the flow of VC dollars in the space dropping dramatically in recent years. According to data from Crunchbase, adtech deal flow has fallen at a roughly 10% compounded annual growth rate over the last five years.
While subsectors like privacy and automation still manage to pull in funding, with an estimated 90%-plus of digital ad spend growth going to incumbent behemoths like Facebook and Google, the set of high-growth opportunities in adtech seems to narrow by the week.
Despite these pains, funding for marketing technology has remained much more stable and healthy; over the last five years, deal flow in marketing tech has only dropped at a 3.5% compounded annual growth rate according to Crunchbase, with annual invested capital in the space hovering just under $2 billion.
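To make those compounded rates concrete: a 10% annual decline compounds to roughly a 41% drop over five years, while a 3.5% annual decline compounds to only about a 16% drop. This is standard CAGR arithmetic applied to the Crunchbase figures above:

```python
def total_change(cagr: float, years: int) -> float:
    """Cumulative change implied by a compounded annual growth rate.

    A negative CAGR models a decline; returns e.g. -0.41 for a 41% drop.
    """
    return (1 + cagr) ** years - 1

adtech = total_change(-0.10, 5)    # adtech deal flow, per Crunchbase
martech = total_change(-0.035, 5)  # martech deal flow, per Crunchbase
print(f"adtech: {adtech:.1%}, martech: {martech:.1%}")
```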
Given the movement in the adtech and martech sectors, we wanted to try to gauge where opportunity still exists in the verticals and which startups may have the best chance at attracting venture funding today. We asked four leading VCs who work at firms spanning early to growth stages to share what’s exciting them most and where they see opportunity in marketing and advertising:
Several of the firms we spoke to (both included and not included in this survey) stated that they are not actively investing in advertising tech at present.
In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.
Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:
Today’s discussion focuses on virtual influencers: fictional characters that build and engage followings of real people over social media. To explore the topic, I spoke with two experienced entrepreneurs:
In a series of three interviews, I’m exploring the startup opportunities in both of these spaces in greater depth. First, Michael Dempsey, a partner at VC firm Compound who has blogged extensively about digital characters, avatars and animation, offers his perspective as an investor hunting for startup opportunities within these spaces.