Aluminum and iconography are no longer enough for a product to get noticed in the marketplace. Today, great products need to be useful and deliver an almost magical experience, something that becomes an extension of life. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into that almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.
Until now, building machine learning (ML) algorithms for hardware has meant constructing complex mathematical models from sample data, known as “training data,” so that machines can make predictions or decisions without being explicitly programmed to do so. And if this sounds complex and expensive to build, it is. On top of that, ML-related tasks were traditionally offloaded to the cloud, creating latency, consuming scarce power and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the edge slower, more expensive and less predictable.
But thanks to recent advances, companies are turning to TinyML as the latest trend in building product intelligence. Arduino, the company best known for open-source hardware, is making TinyML available to millions of developers. Together with Edge Impulse, it is turning ubiquitous Arduino boards — such as the Arduino Nano 33 BLE Sense and other 32-bit boards — into powerful embedded ML platforms. With this partnership you can run learning models based on artificial neural networks (ANNs) that sample data from tiny sensors, all on low-powered microcontrollers.
Over the past year great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor and Arm’s CMSIS-NN. But building a quality dataset, extracting the right features, and training and deploying these models is still complicated. TinyML is the missing link between edge hardware and device intelligence, and it is now coming to fruition.
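The model-shrinking these projects perform can be illustrated in miniature with post-training quantization. The following is a simplified Python sketch of the core idea only (not the actual TensorFlow Lite pipeline, and the helper names are invented for illustration): float32 weights become 8-bit integers plus a scale and zero-point, cutting storage roughly 4x at a small accuracy cost.

```python
def quantize(weights, num_bits=8):
    """Affine-quantize a list of float weights to signed integers.

    This mirrors the post-training quantization idea used by tools like
    TensorFlow Lite for Microcontrollers: store 8-bit integers plus a
    scale and zero-point instead of float32 values.
    """
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against all-equal weights
    zero_point = round(qmin - lo / scale)
    # Clamp to the representable integer range after rounding.
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map the stored integers back to approximate float weights."""
    return [(qi - zero_point) * scale for qi in q]
```

Round-tripping a few sample weights through these two functions shows the reconstruction error stays below one quantization step (the scale), which is why 8-bit models remain accurate enough for many sensor tasks.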
Edtech is booming, but a short while ago, many companies in the category were struggling to break through as mainstream offerings. Now, it seems like everyone is clamoring to get into the next seed-stage startup that has the phrase “remote learning” on its About page.
And so begins the normal cycle that occurs when a sector gets overheated — boom, bust and a reckoning. While we’re still in the early days of edtech’s revitalization, it isn’t a gold mine all around the world. Today, in the spirit of balance and history, I’ll present three bearish takes I’ve heard on edtech’s future.
“I think the dividing line there will be there are companies that have been around, that are a little more entrenched, and have good financial runway and can probably survive this cycle,” he said. “They have credibility and will probably get picked [by schools].” The newer companies, he said, might get stuck with adoption because they are at a high degree of risk, and might be giving out free licenses beyond their financial runway right now.
Shortly after its use exploded in the post-office world of COVID-19, Zoom was banned by a variety of private and public actors, including SpaceX and the government of Taiwan. Critics alleged that its data strategy, particularly its privacy and security measures, was insufficiently robust, putting vulnerable populations, like children, at special risk. NYC’s Department of Education, for instance, mandated teachers switch to alternative platforms like Microsoft Teams.
This isn’t a problem specific to Zoom. Other technology giants, from Alphabet and Apple to Facebook, have struggled with these strategic data issues, despite wielding armies of lawyers and data engineers, before overcoming them.
To remedy this, data leaders cannot stop at identifying how to improve their revenue-generating functions with data, what the former Chief Data Officer of AIG (one of our co-authors) calls “offensive” data strategy. Data leaders must also protect, fight for, and empower their key partners, like users and employees, by promoting “defensive” data strategy. Data offense and defense are both core to trustworthy data-driven products.
While these data issues apply to most organizations, highly regulated innovators in industries with large social impact (the “third wave”) must pay special attention. As Steve Case and the World Economic Forum articulate, the next phase of innovation will center on industries that merge the digital and the physical worlds, affecting the most intimate aspects of our lives. As a result, companies that balance insight and trust well, Boston Consulting Group predicts, will be the new winners.
Drawing from our work across the public, corporate, and startup worlds, we identify a few “insight killers” — then identify the trustworthy alternative. While trustworthy data strategy should involve end users and other groups outside the company as discussed here, the lessons below focus on the complexities of partnering within organizations, which deserve attention in their own right.
From the beginning of a data project, a trustworthy data leader asks, “Who are our partners and what prevents them from achieving their goals?” In other words: listen. This question can help identify the unmet needs of the 46% of surveyed technology and business teams who found their data groups have little value to offer them.
Putting this into action is the data leader of one highly regulated AI health startup, Cognoa, who listened to tensions between its defensive and offensive data functions. Cognoa’s Chief AI Officer identified how healthcare data laws, like the Health Insurance Portability and Accountability Act, resulted in friction between his key partners: compliance officers and machine learning engineers. Compliance officers needed to protect end users’ privacy while data and machine learning engineers wanted faster access to data.
To meet these multifaceted goals, Cognoa first scoped down its solution by prioritizing its highest-risk databases. It then connected all of those databases using a single access-and-control layer.
This redesign satisfied its compliance officers because Cognoa’s engineers could then only access health data based on strict policy rules informed by healthcare data regulations. Furthermore, since these rules could be configured and transparently explained without code, it bridged communication gaps between its data and compliance roles. Its engineers were also elated because they no longer had to wait as long to receive privacy-protected copies.
Because its data leader started by listening to the struggles of its two key partners, Cognoa met both its defensive and offensive goals.
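As a purely hypothetical sketch of what a single access-and-control layer can look like (this is not Cognoa’s actual system; the roles and rules below are invented), policy rules can live as plain data that compliance officers read and adjust without touching code:

```python
# Hypothetical policy table: rules are data, not code, so compliance
# staff can review and edit them without engineering help.
POLICIES = [
    {"role": "ml_engineer", "data": "health_records", "allow": "deidentified"},
    {"role": "compliance", "data": "health_records", "allow": "full"},
]

def check_access(role, data):
    """Return the access level a role gets for a dataset, or 'denied'."""
    for rule in POLICIES:
        if rule["role"] == role and rule["data"] == data:
            return rule["allow"]
    return "denied"  # default-deny keeps unlisted combinations safe
```

The design choice that matters here is default-deny plus human-readable rules: engineers get fast, automatic access to de-identified copies, while anything not explicitly allowed is blocked.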
London-based Greyparrot, which uses computer vision AI to scale efficient processing of recycling, has bagged £1.825 million (~$2.2M) in seed funding, topping up the $1.2M in pre-seed funding it had raised previously. The latest round is led by early stage European industrial tech investor Speedinvest, with participation from UK-based early stage b2b investor, Force Over Mass.
The startup — founded in 2019, and a TechCrunch Disrupt SF Battlefield alum — has trained a series of machine learning models to recognize different types of waste, such as glass, paper, cardboard, newspapers, cans and different types of plastics, in order to make sorting recycling more efficient, applying digitization and automation to the waste management industry.
Greyparrot points out that some 60% of the 2BN tonnes of solid waste produced globally each year ends up in open dumps and landfill, causing major environmental impact, while global recycling rates sit at just 14% — a consequence of inefficient recycling systems, rising labour costs and strict quality requirements imposed on recycled material. Hence the major opportunity the team has hit on for applying waste recognition software to boost recycling efficiency, reduce impurities and support scalability.
By embedding its hardware-agnostic software into industrial recycling processes, Greyparrot says it can offer real-time analysis of all waste flows, increasing efficiency while enabling a facility to provide a quality guarantee to buyers and mitigating risk.
Currently less than 1% of waste is monitored and audited, per the startup, given the expense involved in doing those tasks manually. So this is an application of AI that’s not so much taking over a human job as doing something humans essentially don’t bother with, to the detriment of the environment and its resources.
Greyparrot’s first product is an Automated Waste Monitoring System which is currently deployed on moving conveyor belts in sorting facilities to measure large waste flows — automating the identification of different types of waste, as well as providing composition information and analytics to help facilities increase recycling rates.
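To give a flavor of the composition analytics such a system produces (a hypothetical sketch, not Greyparrot’s software), per-frame detections from a waste recognition model can be aggregated into a percentage breakdown of a waste flow:

```python
from collections import Counter

def composition(frame_detections):
    """Aggregate per-frame detection labels into composition percentages.

    frame_detections: a list of frames, each a list of detected class
    labels (e.g. "glass", "paper") from a conveyor-belt camera.
    """
    counts = Counter(label for frame in frame_detections for label in frame)
    total = sum(counts.values()) or 1  # avoid dividing by zero on empty input
    return {label: round(100 * n / total, 1) for label, n in counts.items()}
```

Feeding in a stream of detections yields exactly the kind of report described above: the share of glass, paper, cans and plastics moving through a sorting line in real time.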
It partnered with ACI, the largest recycling system integrator in South Korea, to work on early product-market fit. It says the new funding will be used to further develop its product and scale across global markets. It’s also collaborating with suppliers of next-gen systems such as smart bins and sorting robots to integrate its software.
“One of the key problems we are solving is the lack of data,” said Mikela Druckman, co-founder & CEO of Greyparrot in a statement. “We see increasing demand from consumers, brands, governments and waste managers for better insights to transition to a more circular economy. There is an urgent opportunity to optimise waste management with further digitisation and automation using deep learning.”
“Waste is not only a massive market — it builds up to a global crisis. With an increase in both world population and per capita consumption, waste management is critical to sustaining our way of living. Greyparrot’s solution has proven to bring down recycling costs and help plants recover more waste. Ultimately it unlocks the value of waste and creates a measurable impact for the environment,” added Marie-Hélène Ametsreiter, lead partner at Speedinvest Industry, in another statement.
Greyparrot is sitting pretty in another aspect — aligning with several strategic areas of focus for the European Union, which has made digitization of legacy industries, industrial data sharing, investment in AI, plus a green transition to a circular economy core planks of its policy plan for the next five+ years. Just yesterday the Commission announced a €750BN pan-EU support proposal to feed such transitions as part of a wider coronavirus recovery plan for the trading bloc.
Enterprise barcode scanner company Scandit has closed an $80 million Series C round, led by Silicon Valley VC firm G2VP. Atomico, GV, Kreos, NGP Capital, Salesforce Ventures and Swisscom Ventures also participated in the round — which brings its total raised to date to $123M.
The Zurich-based firm offers a platform that combines computer vision and machine learning tech with barcode scanning, text recognition (OCR), object recognition and augmented reality, designed for any camera-equipped smart device — from smartphones to drones, wearables (e.g. AR glasses for warehouse workers) and even robots.
Use-cases include mobile apps or websites for mobile shopping; self checkout; inventory management; proof of delivery; asset tracking and maintenance — including in healthcare where its tech can be used to power the scanning of patient IDs, samples, medication and supplies.
It bills its software as “unmatched” in terms of speed and accuracy, as well as the ability to scan in bad light; at any angle; and with damaged labels. Target industries include retail, healthcare, industrial/manufacturing, travel, transport & logistics and more.
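One small, public piece of what any barcode scanner must do after decoding is validate the result. For illustration (this is the standard GS1 algorithm, not Scandit’s proprietary code), the EAN-13 check digit works like this:

```python
def ean13_check_digit(digits12):
    """Compute the EAN-13 check digit for a 12-digit barcode prefix.

    Reading left to right, digits at even 0-based positions weigh 1 and
    odd positions weigh 3; the check digit brings the weighted sum up to
    a multiple of 10.
    """
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

def is_valid_ean13(code):
    """True if a 13-digit string is a well-formed EAN-13 barcode."""
    return len(code) == 13 and code.isdigit() and \
        int(code[-1]) == ean13_check_digit(code[:12])
```

A validation step like this is what lets a scanner reject a misread (a smudged or damaged label) instead of passing a wrong product code downstream.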
The latest funding injection follows a $30M Series B round back in 2018. Since then Scandit says it has tripled recurring revenues, more than doubled the number of blue-chip enterprise customers and doubled the size of its global team.
Global customers for its tech include the likes of 7-Eleven, Alaska Airlines, Carrefour, DPD, FedEx, Instacart, Johns Hopkins Hospital, La Poste, Levi Strauss & Co, Mount Sinai Hospital and Toyota — with the company touting “tens of billions of scans” per year on 100+ million active devices at this stage of its business.
It says the new funding will go toward accelerating growth in new markets, including APAC and Latin America, as well as building out its footprint and ops in North America and Europe. Also on the slate: funding more R&D to devise new ways for enterprises to transform their core business processes using computer vision and AR.
The need for social distancing during the coronavirus pandemic has also accelerated demand for mobile computer vision on personal smart devices, according to Scandit, which says customers are looking for ways to enable more contactless interactions.
Another demand spike it’s seeing is coming from the pandemic-related boom in ‘Click & Collect’ retail and “millions” of extra home deliveries — something its tech is well positioned to cater to because its scanning apps support BYOD (bring your own device), rather than requiring proprietary hardware.
“COVID-19 has shone a spotlight on the need for rapid digital transformation in these uncertain times, and the need to blend the physical and digital plays a crucial role,” said CEO Samuel Mueller in a statement. “Our new funding makes it possible for us to help even more enterprises to quickly adapt to the new demand for ‘contactless business’, and be better positioned to succeed, whatever the new normal is.”
Also commenting on the funding in a supporting statement, Ben Kortlang, general partner at G2VP, added: “Scandit’s platform puts an enterprise-grade scanning solution in the pocket of every employee and customer without requiring legacy hardware. This bridge between the physical and digital worlds will be increasingly critical as the world accelerates its shift to online purchasing and delivery, distributed supply chains and cashierless retail.”
This week could be the biggest week to date for private spaceflight, with landmark launch attempts coming from both Virgin Orbit and SpaceX.
Virgin Orbit is looking to join the elite club of private launch companies that have actually made it to space, with a full flight test of its combined Cosmic Girl and LauncherOne system. Meanwhile, SpaceX is looking to launch its Crew Dragon spacecraft with people on board – achieving a number of milestones, including returning U.S. crew launch capabilities, and human-rating its Falcon 9 rocket.
Virgin Orbit was supposed to launch its first full demonstration flight on Sunday, but a sensor bug that showed up during pre-launch checkouts means that it’s now pushing things back to at least Monday to check that out.
Extra precaution is hardly surprising since this milestone mission could help the company become an operational satellite launch provider – one of only a small handful of private companies that can make that claim.
SpaceX passed its first crucial flight readiness review (FRR) on Friday for its first ever crewed astronaut launch, setting it up for a full rehearsal of the mission on Saturday leading up to the actual launch. Now it’s set for another FRR with partner NASA on Monday, and then the launch should take place on Wednesday – weather and checkouts permitting. This will definitely be one to watch.
Mitsubishi Heavy Industries flew its last mission with its H-II series rocket, and the space transfer vehicle it carries to deliver supplies to the International Space Station. The company is readying a successor to this highly successful and consistent rocket, the H3, which is set to make its launch debut sometime in 2022 if all goes to plan.
While SpaceX is aiming to make history with NASA and two of its astronauts, the person in charge of the agency’s human spaceflight endeavors made a surprising and abrupt exit from the agency last week. Doug Loverro resigned from his position, reportedly over some kind of inappropriate activity he engaged in with a prospective agency business partner ahead of the contract awards for NASA’s commercial human lander program.
Xilinx specializes in building processors that are designed to withstand the rigors of use in space, which include heavy radiation exposure, extreme temperatures and plenty more. The company just debuted a new FPGA for space-based applications that is the first 20nm-based processor for space, and the first with dedicated machine-learning capabilities built in for edge computing that truly redefines the term.
Space has enjoyed a period of being relatively uncontested when it comes to international squabbles – mostly because it’s hard and expensive to reach, and the benefits of doing so weren’t exactly clear 30 to 40 years ago when most of those rules were set up. NASA’s new rules include a lot of the old ones, but also set up some modernizations that are sure to begin a lot of debate and discussion in the space policy community.
In a testing procedure, the X-37B Orbital Test Vehicle taxis on the flightline March 30, 2010, at the Astrotech facility in Titusville, Fla. (Courtesy photo)
The United Launch Alliance launched the X-37B last week on behalf of the U.S. Space Force – marking the first time the mysterious experimental uncrewed space plane has launched for that newly formed agency. The X-37B has flown plenty before, of course – but previously it was doing so under the authority of the U.S. Air Force, since the Space Force hadn’t been formed yet.
High-quality data is the fuel that powers AI algorithms. Without a continual flow of labeled data, bottlenecks can occur and the algorithm will slowly get worse and add risk to the system.
It’s why labeled data is so critical for companies like Zoox, Cruise and Waymo, which use it to train machine learning models to develop and deploy autonomous vehicles. That need is what led to the creation of Scale AI, a startup that uses software and people to process and label image, lidar and map data for companies building machine learning algorithms. Companies working on autonomous vehicle technology make up a large swath of Scale’s customer base, although its platform is also used by Airbnb, Pinterest and OpenAI, among others.
The COVID-19 pandemic has slowed, or even halted, that flow of data as AV companies suspended testing on public roads — the means of collecting billions of images. Scale is hoping to turn the tap back on, and for free.
The company, in collaboration with lidar manufacturer Hesai, launched this week an open-source data set called PandaSet that can be used for training machine learning models for autonomous driving. The data set, which is free and licensed for academic and commercial use, includes data collected using Hesai’s forward-facing PandarGT lidar with image-like resolution, as well as its mechanical spinning lidar known as Pandar64. The data was collected while driving urban areas in San Francisco and Silicon Valley before officials issued stay-at-home orders in the area, according to the company.
“AI and machine learning are incredible technologies with an incredible potential for impact, but also a huge pain in the ass,” Scale CEO and co-founder Alexandr Wang told TechCrunch in a recent interview. “Machine learning is definitely a garbage in, garbage out kind of framework — you really need high-quality data to be able to power these algorithms. It’s why we built Scale and it’s also why we’re using this data set today to help drive forward the industry with an open-source perspective.”
The goal with this lidar data set was to give free access to a dense and content-rich data set, which Wang said was achieved by using two kinds of lidars in complex urban environments filled with cars, bikes, traffic lights and pedestrians.
“The Zoox and the Cruises of the world will often talk about how battle-tested their systems are in these dense urban environments,” Wang said. “We wanted to really expose that to the whole community.”
The data set includes more than 48,000 camera images and 16,000 lidar sweeps — more than 100 scenes of 8 seconds each, according to the company. It also includes 28 annotation classes for each scene and 37 semantic segmentation labels for most scenes. Traditional cuboid labeling, those little boxes placed around a bike or car, for instance, can’t adequately identify all of the lidar data. So Scale uses a point cloud segmentation tool to precisely annotate complex objects like rain.
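To see why cuboid labels fall short, consider a minimal axis-aligned box check (an illustrative sketch, simpler than the oriented cuboids real labeling tools use): every lidar point inside the box gets swept into the label, including diffuse returns like rain, which is exactly what per-point segmentation avoids.

```python
def points_in_cuboid(points, center, size):
    """Return the lidar points that fall inside an axis-aligned cuboid.

    points: iterable of (x, y, z) tuples; center: cuboid midpoint;
    size: (width, depth, height). A point is "in" the label if it lies
    within half the size of the box along every axis.
    """
    cx, cy, cz = center
    sx, sy, sz = size
    return [
        (x, y, z) for x, y, z in points
        if abs(x - cx) <= sx / 2 and abs(y - cy) <= sy / 2 and abs(z - cz) <= sz / 2
    ]
```

The check is binary per box, not per point class: a raindrop inside a car’s cuboid is labeled “car.” Segmentation tools instead assign a class to each individual return.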
Open sourcing AV data isn’t entirely new. Last year, Aptiv and Scale released nuScenes, a large-scale data set from an autonomous vehicle sensor suite. Argo AI, Cruise and Waymo were among a number of AV companies that have also released data to researchers. Argo AI released curated data along with high-definition maps, while Cruise shared a data visualization tool it created called Webviz that takes raw data collected from all the sensors on a robot and turns that binary code into visuals.
Scale’s efforts are a bit different; for instance, Wang said the license to use this data set doesn’t have any restrictions.
“There’s a big need right now and a continual need for high-quality labeled data,” Wang said. “That’s one of the biggest hurdles to overcome when building self-driving systems. We want to democratize access to this data, especially at a time when a lot of the self-driving companies can’t collect it.”
That doesn’t mean Scale is going to suddenly give away all of its data. It is, after all, a for-profit enterprise. But it’s already considering collecting and open sourcing fresher data later this year.
Facilities management looks to be having a bit of a moment, amid the coronavirus pandemic.
VergeSense, a U.S. startup that sells a “sensor as a system” platform targeted at offices — supporting features such as real-time occupant counts and foot-traffic-triggered cleaning notifications — has closed a $9 million strategic investment led by Allegion Ventures, a corporate VC fund of security giant Allegion.
JLL Spark, Metaprop, Y Combinator, Pathbreaker Ventures, and West Ventures also participated in the round, which brings the total funding raised by the 2017-founded startup to $10.6M including an earlier seed round.
VergeSense tells TechCrunch it’s seen accelerated demand in recent weeks as office owners and managers try to figure out how to make workspaces safe in the age of COVID-19 — claiming bookings are “on track” to be up 500% quarter over quarter. (Though it admits business did also take a hit earlier in the year, saying there was “aftershock” once the coronavirus hit.)
So while, prior to the pandemic, VergeSense customers likely wanted to encourage so-called ‘workplace collisions’ — i.e. close encounters between office staff in the hopes of encouraging idea sharing and collaboration — right now the opposite is the case, with social distancing and looming limits on room occupancy rates looking like a must-have for any reopening offices.
Luckily for VergeSense, its machine learning platform and sensor-packed hardware can derive useful measurements just the same.
It’s worked with customers to come up with relevant features, such as a new Social Distancing Score and daily occupancy reports, and it already had a Smart Cleaning Planner feature which it reckons will now be in high demand. It also envisages customers being able to plug into its open API to power features in their own office apps that could help reassure staff it’s okay to come back in to work, such as indicating quiet zones or times when there are fewer office occupants on site.
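As a hypothetical illustration of what a distancing metric over sensor-derived occupant positions might look like (this is not VergeSense’s actual Social Distancing Score), one could score the share of occupant pairs keeping a minimum separation:

```python
from itertools import combinations
from math import dist

def distancing_score(positions, min_gap=2.0):
    """Fraction of occupant pairs at least `min_gap` metres apart.

    positions: (x, y) coordinates in metres, as an occupancy sensor
    might report them. An empty or single-occupant space scores 1.0.
    """
    pairs = list(combinations(positions, 2))
    if not pairs:
        return 1.0
    ok = sum(1 for a, b in pairs if dist(a, b) >= min_gap)
    return ok / len(pairs)
```

A facilities dashboard could then flag zones whose score dips below a threshold, triggering exactly the kind of cleaning or occupancy notifications described above.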
Of course plenty of offices may remain closed for some considerable time, or even for good — Twitter, for example, has told staff they can work remotely forever — with home working now a viable option for much office work. But VergeSense and its investors believe the office will prevail in some form, with smart sensor tech that can (for example) detect the distance between people becoming a basic requirement.
“I think it’s going to [mean] less overall office space,” says VergeSense co-founder Dan Ryan, discussing how he sees the office being changed by COVID-19. “A lot of customers are rethinking the need to have tonnes of smaller, regional offices. They’re thinking about still maintaining their big hubs but maybe what those hubs actually look like is different.
“Maybe post-COVID, instead of people coming into the office five days a week… for people that don’t necessarily need to be in the office to do their work everyday maybe three days a week or two days a week. And that probably means a different type of office, right. Different layout, different type of desks etc.”
“That trend was already in motion but a lot of companies were reluctant to experiment with remote work because they weren’t sure about the impact on productivity and that sort of thing, there was a lot of cultural friction associated with that. But now we all got thrust into that simultaneously and it’s happening all at once — and we think that’s going to stick,” he adds. “We’ve heard that feedback consistently from basically all of our customers.”
“A lot of our existing customers are pulling forward adoption of the product. Usually the way we roll out is customers will do a couple of buildings to get started and it’ll be phased rollout plan from there. But now that the use-case for this data is more connected to safety and compliance, with COVID-19, around occupancy management — there’s CDC guidelines [related to building occupancy levels] — now to have a tool that can measure and report against that is viewed as more of a mission critical type thing.”
VergeSense is processing some 6 million sensor reports per day at this point for nearly 70 customers, including 40 FORTUNE 1000 companies. In total it says it provides its sensor hardware plus SaaS across 20 million sqft, 250 office buildings, and 15 countries.
“There’s an extreme bear case here — that the office is going to disappear,” Ryan adds. “That’s something that we don’t see happening because the office does have a purpose, rooted in — primarily — human social interaction and physical collaboration.
“As much as we love Zoom and the efficiency of that there is a lot that gets lost without that physical collaboration, connection, all the social elements that are built around work.”
VergeSense’s new funding will go toward scaling up to meet the increased demand it’s seeing due to COVID-19 and toward scaling its software analytics platform.
It’s also going to be spending on product development, per Ryan, with alternative sensor hardware form factors in the works — including “smaller, better, faster” sensor hardware and “some additional data feeds”.
“Right now it’s primarily people counting but there’s a lot of interest in other data about the built environment beyond that — more environmental types of stuff,” he says of the additional data feeds it’s looking to add. “We’re more interested in other types of ambient data about the environment. What’s the air quality on this floor? Temperature, humidity. General environmental data that’s getting even more interest frankly from customers now.
“There is a fair amount of interest in wellness of buildings. Historically that’s been more of a nice to have thing. But now there’s huge interest in what is the air quality of this space — are the environmental conditions appropriate? I think the expectations from employees are going to be much higher. When you walk into an office building you want the air to be good, you want it to look nicer — and that’s why I think the acceleration [of smart spaces]; that’s a trend that was already in motion but people are going to double down and want it to accelerate even faster.”
Commenting on the funding in a statement, Rob Martens, president of Allegion Ventures, added: “In the midst of a world crisis, [the VergeSense team] have quickly positioned themselves to help senior business leaders ensure safer workspaces through social distancing, while at the same time still driving productivity, engagement and cost efficiency. VergeSense is on the leading edge of creating data-driven workspaces when it matters most to the global business community and their employees.”
Apollo Agriculture believes it can attain profits by helping Kenya’s smallholder farmers maximize theirs.
That’s the mission of the Nairobi-based startup that raised $6 million in Series A funding led by Anthemis.
Founded in 2016, Apollo Agriculture offers a mobile-based product suite for farmers that includes working capital, data analysis for higher crop yields and options to purchase key inputs and equipment.
“It’s everything a farmer needs to succeed. It’s the seeds and fertilizer they need to plant, the advice they need to manage that product over the course of the season. The insurance they need to protect themselves in case of a bad year…and then, ultimately, the financing,” Apollo Agriculture CEO Eli Pollak told TechCrunch on a call.
Apollo’s addressable market includes the many smallholder farmers across Kenya’s population of 53 million. The problem it’s helping them solve is a lack of access to the tech and resources to achieve better results on their plots.
The startup has engineered its own app, platform and outreach program to connect with Kenya’s farmers. Apollo uses M-Pesa mobile money, machine learning and satellite data to guide the credit and products it offers them.
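Purely as a hypothetical sketch of how such signals might feed a credit decision (Apollo’s real models are ML-based and far richer; every threshold below is invented for illustration), repayment history and a satellite-derived yield estimate could combine like this:

```python
def credit_limit(repayment_rate, est_yield_bags_per_acre, acres):
    """Toy credit decision from two signals.

    repayment_rate: historical on-time repayment fraction (e.g. from
    mobile-money records); est_yield_bags_per_acre: a satellite-derived
    yield estimate. Returns a lending limit in harvest-value units.
    """
    if repayment_rate < 0.5:
        return 0.0  # history too weak to extend credit
    expected_bags = est_yield_bags_per_acre * acres
    # Lend against a conservative 30% of the expected harvest,
    # scaled by repayment history.
    return round(expected_bags * 0.3 * repayment_rate, 2)
```

The point of the sketch is the structure, not the numbers: lending is anchored to an independently measured expected harvest rather than to collateral the farmer may not have.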
The company — which was a TechCrunch Startup Battlefield Africa 2018 finalist — has served over 40,000 farmers since inception, with 25,000 of those paying relationships coming in 2020, according to Pollak.
Apollo Agriculture co-founders Benjamin Njenga and Eli Pollak
Apollo Agriculture generates revenues on the sale of farm products and earning margins on financing. “The farm pays a fixed price for the package, which comes due at harvest…that includes everything and there’s no hidden fees,” said Pollak.
On deploying the $6 million in Series A financing, “It’s really about continuing to invest in growth. We feel like we’ve got a great product. We’ve got great reviews by customers and want to just keep scaling it,” he said. That means hiring, investing in Apollo’s tech and growing the startup’s sales and marketing efforts.
“Number two is really strengthening our balance sheet to be able to continue raising the working capital that we need to lend to customers,” Pollak said.
For the moment, expansion in Africa beyond Kenya is in the cards but not in the near-term. “That’s absolutely on the roadmap,” said Pollak. “But like all businesses, everything is a bit in flux right now. So some of our plans for immediate expansion are on a temporary pause as we wait to see things shake out with COVID.”
Apollo Agriculture’s drive to boost the output and earnings of Africa’s smallholder farmers is born out of the common interests of its co-founders.
Pollak is an American who studied engineering at Stanford University and went to work in agronomy in the U.S. with The Climate Corporation. “That was how I got excited about Apollo. I would look at other markets and say, ‘wow, they’re farming 20% more acres of maize, or corn, across Africa but farmers are producing dramatically less than U.S. farmers,'” said Pollak.
Pollak’s colleague, co-founder Benjamin Njenga, found inspiration from his experience in his upbringing. “I grew up on a farm in a Kenyan village. My mother, a smallholder farmer, used to plant with low-quality seeds and no fertilizer and harvested only five bags per acre each year,” he told the audience in 2018 at Startup Battlefield Africa in Lagos.
Image Credits: Apollo Agriculture

“We knew if she’d used fertilizer and hybrid seeds her production would double, making it easier to pay my school fees.” Njenga went on to explain that she couldn’t access the credit to buy those tools, which prompted the motivation for Apollo Agriculture.
Anthemis Exponential Ventures’ Vica Manos confirmed its lead on Apollo’s latest raise. The UK-based VC firm — which invests mostly in Europe and the U.S. — has also backed South African fintech company Jumo and will continue to consider investments in African startups, Manos told TechCrunch.
Additional investors in Apollo Agriculture’s Series A round included Accion Venture Lab, Leaps by Bayer and Flourish Ventures.
While agriculture is the leading employer in Africa, it hasn’t attracted the same attention from venture firms or founders as fintech, logistics or e-commerce. The continent’s agtech startups lagged those sectors in investment, according to Disrupt Africa and WeeTracker’s 2019 funding reports.
Some notable agtech ventures that have gained VC backing include Nigeria’s Farmcrowdy, Hello Tractor — which has partnered with IBM — and Twiga Foods, a Goldman-backed B2B agriculture supply chain startup based in Nairobi.
On whether Apollo Agriculture sees Twiga as a competitor, CEO Eli Pollak suggested collaboration. “Twiga could be a company that in the future we could potentially partner with,” he said.
“We’re partnering with farmers to produce lots of high-quality crops, and they could potentially be a great partner in helping those farmers access stable prices for those…yields.”
At its Build developer conference, Microsoft today put a strong emphasis on machine learning. But in addition to plenty of new tools and features, the company also highlighted its work on building more responsible and fairer AI systems — both in the Azure cloud and Microsoft’s open-source toolkits.
These include new tools for differential privacy and a system for ensuring that models work well across different groups of people, as well as new tools that enable businesses to make the best use of their data while still meeting strict regulatory requirements.
As developers are increasingly tasked with building AI models, they regularly have to ask themselves whether their systems are “easy to explain” and whether they “comply with non-discrimination and privacy regulations,” Microsoft notes in today’s announcement. To do that, they need tools that help them better interpret their models’ results. One of those is InterpretML, which Microsoft launched a while ago; another is the Fairlearn toolkit, which can be used to assess the fairness of ML models, is currently available as an open-source tool, and will be built into Azure Machine Learning next month.
As for differential privacy, which makes it possible to get insights from private data while still protecting private information, Microsoft today announced WhiteNoise, a new open-source toolkit that’s available both on GitHub and through Azure Machine Learning. WhiteNoise is the result of a partnership between Microsoft and Harvard’s Institute for Quantitative Social Science.
If that name sounds familiar, it’s probably because you remember that Microsoft acquired Bonsai, a company that focuses on machine teaching, back in 2018. Bonsai combined simulation tools with different machine learning techniques to build a general-purpose deep reinforcement learning platform, with a focus on industrial control systems.
It’s maybe no surprise then that Project Bonsai, too, has a similar focus on helping businesses teach and manage their autonomous machines. “With Project Bonsai, subject-matter experts can add state-of-the-art intelligence to their most dynamic physical systems and processes without needing a background in AI,” the company notes in its press materials.
“The public preview of Project Bonsai builds on top of the Bonsai acquisition and the autonomous systems private preview announcements made at Build and Ignite of last year,” a Microsoft spokesperson told me.
Interestingly, Microsoft notes that Project Bonsai is only the first block of a larger vision to help its customers build these autonomous systems. The company also stresses the advantages of machine teaching over other machine learning approaches, especially the fact that it’s less of a black-box approach than other methods, which makes it easier for developers and engineers to debug systems that don’t work as expected.
In addition to Bonsai, Microsoft also today announced Project Moab, an open-source balancing robot that is meant to help engineers and developers learn the basics of how to build a real-world control system. The idea here is to teach the robot to keep a ball balanced on top of a platform that is held by three arms.
Potential users will be able to either 3D print the robot themselves or buy one when it goes on sale later this year. There is also a simulation, developed by MathWorks, that developers can try out immediately.
“You can very quickly take it into areas where doing it in traditional ways would not be easy, such as balancing an egg instead,” said Mark Hammond, Microsoft General Manager for Autonomous Systems. “The point of the Project Moab system is to provide that playground where engineers tackling various problems can learn how to use the tooling and simulation models. Once they understand the concepts, they can apply it to their novel use case.”
Silicon company Xilinx has developed a new space-grade processor for in-space and satellite applications that records a number of firsts: It’s the first 20nm chip rated for use in space, offering power and efficiency benefits, and it’s the first to offer specific support for high-performance machine learning through neural network-based inference acceleration.
The processor is a field programmable gate array (FPGA), meaning that customers can tweak the hardware to suit their specific needs since the chip is essentially user-configurable hardware. On the machine learning side, Xilinx says that the new processor will offer up to 5.7 tera operations per second of “peak INT8 performance optimized for deep learning,” which is an improvement of as much as 25x vs the previous generation.
Xilinx’s new chip has a lot of potential for the satellite market for a couple of reasons: First, it’s a huge leap in terms of process size, since the company’s existing radiation-tolerant silicon was offered only in a 65nm spec. That means big improvements in size, weight and power efficiency, all of which translate to very important savings for in-space applications: satellites are designed to be as lightweight and compact as possible to help defray launch costs and in-space propellant needs, both of which represent major expenses in their operation.
Finally, its reconfigurable nature means that on-orbit assets can be reprogrammed on-demand to handle different tasks – which now include local machine learning algorithm processing. That means you could theoretically switch one of these in an Earth observation satellite from handling something like tracking cloud density and weather patterns, to making inferences about deforestation or strip mining, for instance. That’s a whole lot of added flexibility for satellite constellation operators looking to move where market demand is needed most.
Xilinx’s chips are special in a number of ways vs. the kind we use here on Earth, including the aforementioned radiation tolerance. They also come packed in thick ceramic packaging, which adds extra durability both during the launch phase, where stresses include extreme vibration, and on orbit, where the lack of an atmosphere means exposure to an extremely harsh environment in terms of both radiation and temperature.
With a large proportion of knowledge workers now doing their jobs from home, the need for tools to help them feel connected to their profession can be as important as tools to, more practically, keep them connected. Today, a company whose platform helps do precisely that is announcing a growth round of funding after seeing engagement on the platform triple in the last month.
GO1.com, an online learning platform focused specifically on professional training courses (both those to enhance a worker’s skills as well as those needed for company compliance training), is today announcing that it has raised $40 million in funding, a Series C that it plans to use to continue expanding its business, which started out in Brisbane, Australia and now has its operations also based out of San Francisco. (It was part of a Y Combinator cohort back in 2015.) Specifically, it wants to continue growth in North America, and to continue expanding its partner network.
It’s not disclosing its valuation, but we are asking. It’s worth pointing out that GO1 has not only seen engagement triple in the last month, as people turn to online learning as one way of staying connected to their professional lives while working among children and house pets, noisy neighbours, dirty laundry, sourdough starters and the rest — and that’s before you count the harrowing news we are hit with on a regular basis. Beyond that, over the longer term GO1 has shown some strong signs of traction.
It counts the likes of the University of Oxford, Suzuki, Asahi and Thrifty among its 3,000+ customers, with more than 1.5 million users overall able to access over 170,000 courses and other resources provided by some 100 vetted content partners. Overall usage has grown five-fold over the last 12 months. (GO1 works with in-house learning management systems or provides its own.)
“GO1’s growth over the last couple of months has been unprecedented and the use of online tools for training is now undergoing a structural shift,” said Andrew Barnes, CEO of GO1, in a statement. “It is gratifying to fill an important void right now as workers embrace online solutions. We are inspired about the future that we are building as we expand our platform with new mediums that reach millions of people every day with the content they need.”
The funding is coming from a very strong list of backers: it’s being co-led by Madrona and SEEK — the online recruitment and course directory company that has backed a number of edtech startups, including FutureLearn and Coursera — with participation also from Microsoft’s venture arm M12; new backer Salesforce Ventures, the investing arm of the CRM giant; and Our Innovation Fund.
Microsoft is a strategic backer: GO1 integrated with Teams, so now users can access GO1 content directly via Microsoft’s enterprise-facing video and messaging platform.
“GO1 has been critical for business continuity as organizations navigate the remote realities of COVID-19,” said Nagraj Kashyap, Microsoft Corporate Vice President and Global Head of M12, in a statement. “The GO1 integration with Microsoft Teams offers a seamless learning experience at a time when 75 million people are using the application daily. We’re proud to invest in a solution helping keep employees learning and businesses growing through this time.”
Similarly, Salesforce is also coming in as a strategic investor, integrating GO1 into its own online personal development products and initiatives.
“We are excited about partnering with GO1 as it looks to scale its online content hub globally. While the majority of corporate learning is done in person today, we believe the new digital imperative will see an acceleration in the shift to online learning tools. We believe GO1 fits well into the Trailhead ecosystem and our vision of creating the life-long learner journey,” said Rob Keith, Head of Australia, Salesforce Ventures, in a statement.
Working remotely has raised a whole new set of challenges for organizations, especially those whose employees have typically never worked outside of the office for days, weeks or months at a time. Some of these have been challenges of a more basic IT nature: getting secure access to systems on the right kinds of machines and making sure people can communicate in the ways that they need to to get work done.
But others are more nuanced and long-term: making sure people remain focused, motivated and in a healthy state of mind about work. Education is one way of achieving that: professional development is not only useful for helping a person do her or his job better; it’s also a way to motivate them, focus their minds and offer a rest from routine, in a way that still remains relevant to work.
GO1 is absolutely not the only company pursuing this opportunity. Others include Udemy and Coursera, which have both come to enterprise after initially focusing more on traditional education plays. And LinkedIn Learning (which used to be known as Lynda, before LinkedIn acquired it and shifted the branding) was a trailblazer in this space.
For these, enterprise training sits in a different strategic place than it does for GO1, which started out with compliance training and onboarding of employees before gravitating to a much wider set of topics that range from photography and design through to Java, accounting, and even yoga and mindfulness training, and everything in between.
It’s perhaps this directional approach, alongside its success, that has set GO1 apart from the competition and attracted the investment, which seems to have come even ahead of the current boost in usage.
“We met GO1 many months before COVID-19 was on the tip of everyone’s tongue and were impressed then with the growth of the platform and the ability of the team to expand their corporate training offering significantly in North America and Europe,” commented S. Somasegar, managing director, Madrona Venture Group, in a statement. “The global pandemic has only increased the need to both provide training and retraining – and also to do it remotely. GO1 is an important link in the chain of recovery.” As part of the funding Somasegar will join the GO1 board of directors.
Notably, GO1 is currently making all COVID-19 related learning resources available for free “to help teams continue to perform and feel supported during this time of disruption and change,” the company said.
As federal agencies take increasingly stringent actions to try to limit the spread of the novel coronavirus pandemic within the U.S., how can individual Americans and U.S. companies affected by these rules weigh in with their opinions and experiences? Because many of the new rules, such as travel restrictions and increased surveillance, require expansions of federal power beyond normal circumstances, our laws require the federal government to post these rules publicly and allow the public to contribute their comments to the proposed rules online. But are federal public comment websites — a vital institution for American democracy — secure in this time of crisis? Or are they vulnerable to bot attack?
In December 2019, we published a new study to see firsthand just how vulnerable the public comment process is to an automated attack. Using publicly available artificial intelligence (AI) methods, we successfully generated 1,001 comments of deepfake text, computer-generated text that closely mimics human speech, and submitted them to the Centers for Medicare & Medicaid Services’ (CMS) website for a proposed federal rule that would institute mandatory work reporting requirements for citizens on Medicaid in Idaho.
The comments we produced using deepfake text constituted over 55% of the 1,810 total comments submitted during the federal public comment period. In a follow-up study, we asked people to identify whether comments were from a bot or a human. Respondents were only correct half of the time — the same probability as random guessing.
Image Credits: Zang/Weiss/Sweeney
The example above is deepfake text generated by the bot that all survey respondents thought was from a human.
We ultimately informed CMS of our deepfake comments and withdrew them from the public record. But a malicious attacker would likely not do the same.
Previous large-scale fake comment attacks on federal websites have occurred, such as the 2017 attack on the FCC website regarding the proposed rule to end net neutrality regulations.
During the net neutrality comment period, firms hired by industry group Broadband for America used bots to create comments expressing support for the repeal of net neutrality. They then submitted millions of comments, sometimes even using the stolen identities of deceased voters and the names of fictional characters, to distort the appearance of public opinion.
A retroactive text analysis of the comments found that 96-97% of the more than 22 million comments on the FCC’s proposal to repeal net neutrality were likely coordinated bot campaigns. These campaigns used relatively unsophisticated and conspicuous search-and-replace methods — easily detectable even on this mass scale. But even after investigations revealed the comments were fraudulent and made using simple search-and-replace-like computer techniques, the FCC still accepted them as part of the public comment process.
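Search-and-replace campaigns are detectable precisely because template-swapped comments share most of their words. This is not the analysts' actual method, and the example comments are invented, but a minimal sketch of the idea flags comment pairs with suspiciously high word overlap:

```python
from itertools import combinations

def jaccard(a, b):
    """Word-level Jaccard similarity between two comments."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

comments = [
    "I strongly support repealing these burdensome regulations.",
    "I strongly favor repealing these onerous regulations.",
    "Net neutrality protects consumers and should stay.",
]

# Flag pairs whose word overlap is suspiciously high -- the signature
# of a template with a few synonyms swapped in.
suspicious = [
    (i, j) for (i, a), (j, b) in combinations(enumerate(comments), 2)
    if jaccard(a, b) > 0.5
]
print(suspicious)  # [(0, 1)]: the first two comments share a template
```

Real analyses work at the scale of millions of comments and use more robust clustering, but the underlying signal is the same: near-duplicate text that no independent group of humans would produce.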
Even these relatively unsophisticated campaigns were able to affect a federal policy outcome. However, our demonstration of the threat from bots submitting deepfake text shows that future attacks can be far more sophisticated and much harder to detect.
Let’s be clear: The ability to communicate our needs and have them considered is the cornerstone of the democratic model. As enshrined in the Constitution and defended fiercely by civil liberties organizations, each American is guaranteed a role in participating in government through voting, through self-expression and through dissent.
Image Credits: Zang/Weiss/Sweeney
When it comes to new rules from federal agencies that can have sweeping impacts across America, public comment periods are the legally required method to allow members of the public, advocacy groups and corporations that would be most affected by proposed rules to express their concerns to the agency and require the agency to consider these comments before they decide on the final version of the rule. This requirement for public comments has been in place since the passage of the Administrative Procedure Act of 1946. In 2002, the e-Government Act required the federal government to create an online tool to receive public comments. Over the years, there have been multiple court rulings requiring the federal agency to demonstrate that they actually examined the submitted comments and publish any analysis of relevant materials and justification of decisions made in light of public comments [see Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U. S. 402, 416 (1971); Home Box Office, supra, 567 F.2d at 36 (1977), Thompson v. Clark, 741 F. 2d 401, 408 (CADC 1984)].
In fact, we only had a public comment website from CMS to test for vulnerability to deepfake text submissions in our study, because in June 2019, the U.S. Supreme Court ruled in a 7-1 decision that CMS could not skip the public comment requirements of the Administrative Procedure Act in reviewing proposals from state governments to add work reporting requirements to Medicaid eligibility rules within their state.
The impact of public comments on the final rule by a federal agency can be substantial based on political science research. For example, in 2018, Harvard University researchers found that banks that commented on Dodd-Frank-related rules by the Federal Reserve obtained $7 billion in excess returns compared to non-participants. When they examined the submitted comments to the “Volcker Rule” and the debit card interchange rule, they found significant influence from submitted comments by different banks during the “sausage-making process” from the initial proposed rule to the final rule.
Beyond companies commenting directly under their official corporate names, we’ve also seen how an industry group, Broadband for America, submitted millions of fake comments in 2017 in support of the FCC’s rule to end net neutrality in order to create the false perception of broad political support for the FCC’s rule among the American public.
While our study highlights the threat of deepfake text to public comment websites, this doesn’t mean we should end this long-standing institution of American democracy. Rather, we need to identify how technology can be used to build innovative solutions that accept public comments from real humans while rejecting deepfake text from bots.
There are two stages in the public comment process — (1) comment submission and (2) comment acceptance — where technology can be used as potential solutions.
In the first stage, comment submission, technology can be used to prevent bots from submitting deepfake comments in the first place, raising the cost for an attacker, who would need to recruit large numbers of humans instead. One technological solution that many are already familiar with is the CAPTCHA box at the bottom of internet forms, which asks us to identify a word — either visually or audibly — before we can click submit. CAPTCHAs add an extra step that makes the submission process considerably more difficult for a bot. While these tools can be improved for accessibility for disabled individuals, they would be a step in the right direction.
However, CAPTCHAs would not prevent an attacker willing to pay for low-cost labor abroad to solve any CAPTCHA tests in order to submit deepfake comments. One way to get around that may be to require strict identification to be provided along with every submission, but that would remove the possibility for anonymous comments that are currently accepted by agencies such as CMS and the Food and Drug Administration (FDA). Anonymous comments serve as a method of privacy protection for individuals who may be significantly affected by a proposed rule on a sensitive topic such as healthcare without needing to disclose their identity. Thus, the technological challenge would be to build a system that can separate the user authentication step from the comment submission step so only authenticated individuals can submit a comment anonymously.
Finally, in the second stage of comment acceptance, better technology can be used to distinguish between deepfake text and human submissions. While our study found that our sample of over 100 people surveyed were not able to identify the deepfake text examples, more sophisticated spam detection algorithms in the future may be more successful. As machine learning methods advance over time, we may see an arms race between deepfake text generation and deepfake text identification algorithms.
While future technologies may offer more comprehensive solutions, the threat of deepfake text to our American democracy is real and present today. Thus, we recommend that all federal public comment websites adopt state-of-the-art CAPTCHAs as an interim measure of security, a position that is also supported by the 2019 U.S. Senate Subcommittee on Investigations’ Report on Abuses of the Federal Notice-and-Comment Rulemaking Process.
In order to develop more robust future technological solutions, we will need to build a collaborative effort between the government, researchers and our innovators in the private sector. That’s why we at Harvard University have joined the Public Interest Technology University Network along with 20 other education institutions, New America, the Ford Foundation and the Hewlett Foundation. Collectively, we are dedicated to helping inspire a new generation of civic-minded technologists and policy leaders. Through curriculum, research and experiential learning programs, we hope to build the field of public interest technology and a future where technology is made and regulated with the public in mind from the beginning.
While COVID-19 has disrupted many parts of American society, it hasn’t stopped federal agencies under the Trump administration from continuing to propose new deregulatory rules that can have long-lasting legacies that will be felt long after the current pandemic has ended. For example, on March 18, 2020, the Environmental Protection Agency (EPA) proposed new rules about limiting which research studies can be used to support EPA regulations, which have received over 610,000 comments as of April 6, 2020. On April 2, 2020, the Department of Education proposed new rules for permanently relaxing regulations for online education and distance learning. On February 19, 2020, the FCC re-opened public comments on its net neutrality rules, which in 2017 saw 22 million comments submitted by bots, after a federal court ruled that the FCC ignored how ending net neutrality would affect public safety and cellphone access programs for low-income Americans.
Federal public comment websites offer the only way for the American public and organizations to express their concerns to the federal agency before the final rules are determined. We must adopt better technological defenses to ensure that deepfake text doesn’t further threaten American democracy during a time of crisis.
What’s been overlooked in the wake of such workflow-specific tools has been the base class of products that enterprises are using to build the core of their machine learning (ML) workflows, and the shift in focus toward automating the deployment and governance aspects of the ML workflow.
That’s where MLOps comes in, and its popularity has been fueled by the rise of core ML workflow platforms such as Boston-based DataRobot. The company has raised more than $430 million and reached a $1 billion valuation this past fall serving this very need for enterprise customers. DataRobot’s vision has been simple: enabling a range of users within enterprises, from business and IT users to data scientists, to gather data and build, test and deploy ML models quickly.
Founded in 2012, the company has quietly amassed a customer base that boasts more than a third of the Fortune 50, with triple-digit yearly growth since 2015. DataRobot’s top four industries include finance, retail, healthcare and insurance; its customers have deployed over 1.7 billion models through DataRobot’s platform. The company is not alone, with competitors like H2O.ai, which raised a $72.5 million Series D led by Goldman Sachs last August, offering a similar platform.
Why the excitement? As artificial intelligence pushed into the enterprise, the first step was to go from data to a working ML model, which started with data scientists doing this manually, but today is increasingly automated and has become known as “auto ML.” An auto-ML platform like DataRobot’s can let an enterprise user quickly auto-select features based on their data and auto-generate a number of models to see which ones work best.
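Stripped to its essence, auto ML is a search over candidate models scored on held-out data. DataRobot's pipeline is far richer (automated feature engineering, dozens of model families, ensembling), but a toy, pure-Python sketch of the core loop — with invented candidate models and data — looks like this:

```python
# Toy auto-ML-style model selection: fit several candidate models on
# training data and keep whichever scores best on a holdout set.
# The data follow y = 2x + 1, so the linear model should win.
train = [(x, 2 * x + 1) for x in range(20)]
holdout = [(x, 2 * x + 1) for x in range(20, 30)]

def fit_mean(data):
    """Baseline: always predict the training mean."""
    mean = sum(y for _, y in data) / len(data)
    return lambda x: mean

def fit_linear(data):
    """Ordinary least squares for y = a*x + b."""
    n = len(data)
    sx = sum(x for x, _ in data); sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data); sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def holdout_mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

candidates = {"mean": fit_mean, "linear": fit_linear}
scores = {name: holdout_mse(fit(train), holdout) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # "linear": it recovers y = 2x + 1 exactly on this data
```

A production platform automates the same decision across many model families and hyperparameters; MLOps then takes over once the winning model needs to be deployed, monitored and retrained.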
As auto ML became more popular, improving the deployment phase of the ML workflow became critical for reliability and performance — and so enters MLOps. It’s quite similar to the way that DevOps has improved the deployment of source code for applications. Companies such as DataRobot and H2O.ai, along with other startups and the major cloud providers, are intensifying their efforts on providing MLOps solutions for customers.
We sat down with DataRobot’s team to understand how their platform has been helping enterprises build auto-ML workflows, what MLOps is all about and what’s been driving customers to adopt MLOps practices now.
“Assembly” may sound like one of the simpler tests in the manufacturing process, but as anyone who’s ever put together a piece of flat-pack furniture knows, it can be surprisingly (and frustratingly) complex. Invisible AI is a startup that aims to monitor people doing assembly tasks using computer vision, helping maintain safety and efficiency — without succumbing to the obvious all-seeing-eye pitfalls. A $3.6 million seed round ought to help get them going.
The company makes self-contained camera-computer units that run highly optimized computer vision algorithms to track the movements of the people they see. By comparing those movements with a set of canonical ones (someone performing the task correctly), the system can watch for mistakes or identify other problems in the workflow — missing parts, injuries and so on.
Obviously, right at the outset, this sounds like the kind of thing that results in a pitiless computer overseer that punishes workers every time they fall below an artificial and constantly rising standard — and Amazon has probably already patented that. But co-founder and CEO Eric Danziger was eager to explain that this isn’t the idea at all.
“The most important parts of this product are for the operators themselves. This is skilled labor, and they have a lot of pride in their work,” he said. “They’re the ones in the trenches doing the work, and catching and correcting mistakes is a big part of it.”
“These assembly jobs are pretty athletic and fast-paced. You have to remember the 15 steps you have to do, then move on to the next one, and that might be a totally different variation. The challenge is keeping all that in your head,” he continued. “The goal is to be a part of that loop in real time. When they’re about to move on to the next piece we can provide a double check and say, ‘Hey, we think you missed step 8.’ That can save a huge amount of pain. It might be as simple as plugging in a cable, but catching it there is huge — if it’s after the vehicle has been assembled, you’d have to tear it down again.”
This kind of body tracking exists in various forms and for various reasons; Veo Robotics, for instance, uses depth sensors to track an operator and robot’s exact positions to dynamically prevent collisions.
But the challenge at the industrial scale is less “how do we track a person’s movements in the first place” than “how can we easily deploy and apply the results of tracking a person’s movements.” After all, it does no good if the system takes a month to install and days to reprogram. So Invisible AI focused on simplicity of installation and administration, with no code needed and entirely edge-based computer vision.
“The goal was to make it as easy to deploy as possible. You buy a camera from us, with compute and everything built in. You install it in your facility, you show it a few examples of the assembly process, then you annotate them. And that’s less complicated than it sounds,” Danziger explained. “Within something like an hour they can be up and running.”
Once the camera and machine learning system is set up, it’s really not such a difficult problem for it to be working on. Tracking human movements is a fairly straightforward task for a smart camera these days, and comparing those movements to an example set is comparatively easy, as well. There’s no “creativity” involved, like trying to guess what a person is doing or match it to some huge library of gestures, as you might find in an AI dedicated to captioning video or interpreting sign language (both still very much works in progress elsewhere in the research community).
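Invisible AI hasn't published its algorithms, but one standard way to compare a worker's recorded motion against a canonical example — while tolerating differences in pace — is dynamic time warping (DTW). A minimal sketch with invented 1-D "joint position" traces:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences.
    Aligns the sequences elastically in time so the same motion
    performed slower or faster still scores as a close match."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

canonical = [0, 1, 2, 3, 2, 1, 0]            # the correct motion
same_slower = [0, 0, 1, 1, 2, 2, 3, 2, 1, 0] # same motion, slower pace
wrong_move = [0, 3, 0, 3, 0, 3, 0]           # a different motion

print(dtw(canonical, same_slower))  # 0.0: pace differs, motion matches
print(dtw(canonical, wrong_move))   # large: flag a possible missed step
```

A real system would track many joints in 2D or 3D and segment the video into assembly steps before comparing, but the principle — elastic alignment against a known-good example, then thresholding the distance — is the same.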
As for privacy and the possibility of being unnerved by being on camera constantly, that’s something that has to be addressed by the companies using this technology. There’s a distinct possibility for good, but also for evil, like pretty much any new tech.
One of Invisible’s early partners is Toyota, which has been both an early adopter and skeptic when it comes to AI and automation. Their philosophy, one that has been arrived at after some experimentation, is one of empowering expert workers. A tool like this is an opportunity to provide systematic improvement that’s based on what those workers already do.
It’s easy to imagine a version of this system where, like in Amazon’s warehouses, workers are pushed to meet nearly inhuman quotas through ruthless optimization. But Danziger said that a more likely outcome, based on anecdotes from companies he’s worked with already, is more about sourcing improvements from the workers themselves.
Having built a product day in and day out year after year, these are employees with deep and highly specific knowledge on how to do it right, and that knowledge can be difficult to pass on formally. “Hold the piece like this when you bolt it or your elbow will get in the way” is easy to say in training but not so easy to make standard practice. Invisible AI’s posture and position detection could help with that.
“We see less of a focus on cycle time for an individual, and more like, streamlining steps, avoiding repetitive stress, etc.,” Danziger said.
Importantly, this kind of capability can be offered with a code-free, compact device that requires no connection except to an intranet of some kind to send its results to. There’s no need to stream the video to the cloud for analysis; footage and metadata are both kept totally on-premise if desired.
Like any compelling new tech, the possibilities for abuse are there, but they are not — unlike an endeavor like Clearview AI — built for abuse.
“It’s a fine line. It definitely reflects the companies it’s deployed in,” Danziger said. “The companies we interact with really value their employees and want them to be respected and engaged in the process as possible. This helps them with that.”
The $3.6 million seed round was led by 8VC, with participating investors including iRobot Corporation, K9 Ventures, Sierra Ventures and Slow Ventures.
VC fund Runa Capital was launched with $135 million in 2010, and is perhaps best known for its investment in NGINX, which powers many websites today. In more recent years it has participated in or led investments in startups such as Zipdrug ($10.8 million); Rollbar this year ($11 million); and Monedo (€20 million).
Headquartered in the San Francisco Bay Area, it has now completed the final closing of its $157 million Runa Capital Fund III, which, the firm says, exceeded its original target of $135 million.
The firm typically invests between $1 million and $10 million in early-stage companies, predominantly Series A rounds, and has a strong interest in cloud infrastructure, open-source software, AI and machine intelligence and B2B SaaS, in markets such as finance, education and healthcare.
Dmitry Chikhachev, co-founder and managing partner of Runa Capital, said in a statement: “We are excited to see many of our portfolio companies’ founders investing in Runa Capital III, along with tech-savvy LPs from all parts of the world, who supported us in all of our funds from day one… We invested in deep tech long before it became the mainstream for venture capital, betting on Nginx in 2011, Wallarm and ID Quantique in 2013, and MariaDB in 2014.”
Going forward the firm says it aims to concentrate much of its firepower in the realm of machine learning and quantum computing.
In addition, Jinal Jhaveri, former CEO and founder of SchoolMint, a Runa Capital portfolio company that was acquired by Hero K12, has joined the firm as a venture partner.
Runa operates from its HQ in Palo Alto as well as offices throughout Europe. Its newest office opened in Berlin in early 2020, reflecting Runa Capital’s growing German portfolio. German investments have included Berlin-based Smava and Mambu, as well as the recently added Monedo (formerly Kreditech), Vehiculum and N8N (a co-investment with Sequoia Capital). Other investments made from the third fund include Rollbar, Reelgood, Forest Admin, Uploadcare and Oxygen.
N8N and three other startups were funded through Runa Capital’s recently established seed program that focuses on smaller investments up to $100,000.
In the early 2000s, VMware introduced the world to virtual servers that allowed IT to make more efficient use of idle server capacity. Today, Run:AI is introducing that same concept to GPUs running containerized machine learning projects on Kubernetes.
This should enable data science teams to have access to more resources than they would normally get were they simply allocated a certain number of available GPUs. Company CEO and co-founder Omri Geller says his company believes that part of the issue in getting AI projects to market is due to static resource allocation holding back data science teams.
“There are many times when those important and expensive computer sources are sitting idle, while at the same time, other users that might need more compute power since they need to run more experiments and don’t have access to available resources because they are part of a static assignment,” Geller explained.
To solve that issue of static resource allocation, Run:AI came up with a solution to virtualize those GPU resources, whether on prem or in the cloud, and let IT define by policy how those resources should be divided.
“There is a need for a specific virtualization approaches for AI and actively managed orchestration and scheduling of those GPU resources, while providing the visibility and control over those compute resources to IT organizations and AI administrators,” he said.
Run:AI creates a resource pool, which allocates based on need. Image Credits: Run:AI
Run:AI built a solution to bridge this gap between the resources IT is providing to data science teams and what they require to run a given job, while still giving IT some control over defining how that works.
“We really help companies get much more out of their infrastructure, and we do it by really abstracting the hardware from the data science, meaning you can simply run your experiment without thinking about the underlying hardware, and at any moment in time you can consume as much compute power as you need,” he said.
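The contrast between static assignment and a shared pool can be sketched with a toy allocator. This is purely an illustration of the concept Geller describes; the class names and numbers are hypothetical and do not reflect Run:AI’s actual product or API.

```python
# Toy illustration of static vs. pooled GPU allocation.
# With static assignment, a team can be starved even while another team's
# GPUs sit idle; a shared pool hands out whatever is currently free.

class StaticAllocator:
    def __init__(self, quotas):
        self.quotas = dict(quotas)            # team -> fixed GPU count
        self.in_use = {t: 0 for t in quotas}

    def request(self, team, n):
        if self.in_use[team] + n <= self.quotas[team]:
            self.in_use[team] += n
            return True
        return False                          # over quota, even if idle GPUs exist

class PooledAllocator:
    def __init__(self, total):
        self.free = total                     # one shared pool for all teams

    def request(self, team, n):
        if n <= self.free:
            self.free -= n
            return True
        return False

# 8 GPUs split 4/4 statically: team B is blocked at 5 even though A uses none.
static = StaticAllocator({"A": 4, "B": 4})
print(static.request("B", 5))   # False

# The same 8 GPUs as a pool: B's burst of 5 fits because A is idle.
pool = PooledAllocator(total=8)
print(pool.request("B", 5))     # True
```

A real orchestrator layers policy, preemption and fairness on top of this, but the basic win is the same: idle capacity becomes available to whoever needs it.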
While the company is still in its early stages, and the current economic situation is hitting everyone hard, Geller sees a place for a solution like Run:AI because it lets customers make the most of existing resources while helping data science teams run more efficiently.
He also is taking a realistic long view when it comes to customer acquisition during this time. “These are challenging times for everyone,” he says. “We have plans for longer time partnerships with our customers that are not optimized for short term revenues.”
Run:AI was founded in 2018. It has raised $13 million, according to Geller. The company is based in Israel with offices in the United States. It currently has 25 employees and a few dozen customers.
It was not long ago that the world watched World Chess Champion Garry Kasparov lose a decisive match against a supercomputer. IBM’s Deep Blue embodied the state of the art in the late 1990s, when a machine defeating a world (human) champion at a complex game such as chess was still unheard of.
Fast-forward to today, and not only have supercomputers greatly surpassed Deep Blue in chess, they have managed to achieve superhuman performance in a string of other games, often much more complex than chess, ranging from Go to Dota to classic Atari titles.
Many of these games have been mastered just in the last five years, pointing to a pace of innovation much quicker than the two decades prior. Recently, Google released work on Agent57, which for the first time showcased superior performance over existing benchmarks across all 57 Atari 2600 games.
The class of AI algorithms underlying these feats — deep reinforcement learning — has demonstrated the ability to learn at very high levels in constrained domains, such as the ones offered by games.
The exploits in gaming have provided valuable insights (for the research community) into what deep reinforcement learning can and cannot do. Running these algorithms has required gargantuan compute power as well as fine-tuning of the neural networks involved in order to achieve the performance we’ve seen.
Researchers are pursuing new approaches such as multi-environment training and the use of language modeling to help enable learning across multiple domains, but there remains an open question of whether deep reinforcement learning takes us closer to the mother lode — artificial general intelligence (AGI) — in any extensible way.
While the talk of AGI can get quite philosophical quickly, deep reinforcement learning has already shown great performance in constrained environments, which has spurred its use in areas like robotics and healthcare, where problems often come with defined spaces and rules where the techniques can be effectively applied.
In robotics, it has shown promising results in using simulation environments to train robots for the real world. It has performed well in training real-world robots to perform tasks such as picking objects and walking. It’s being applied to a number of use cases in healthcare, such as personalized medicine, chronic care management, drug discovery and resource scheduling and allocation. Other areas that are seeing applications have included natural language processing, computer vision, algorithmic optimization and finance.
The research community is still early in fully understanding the potential of deep reinforcement learning, but if we are to go by how well it has done in playing games in recent years, it’s likely we’ll be seeing even more interesting breakthroughs in other areas shortly.
If you’ve ever navigated a corn maze, your brain at an abstract level has been using reinforcement learning to help you figure out the lay of the land by trial and error, ultimately leading you to find a way out.
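The trial-and-error idea behind the corn maze analogy can be shown with a minimal tabular Q-learning sketch on a one-dimensional "maze." This is a deliberately tiny illustration of the underlying technique, far simpler than the deep-learning systems described above; the states, rewards and hyperparameters are made up for the example.

```python
import random

# Minimal tabular Q-learning on a one-dimensional "maze": states 0..4,
# start at state 0, exit at state 4. Actions: 0 = step left, 1 = step right.
# Reward is 1 only on reaching the exit; the agent learns the way out
# purely by trial and error.

N_STATES, EXIT = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

for _ in range(500):                       # episodes of trial and error
    s = 0
    while s != EXIT:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(EXIT, s + 1)
        r = 1.0 if s2 == EXIT else 0.0
        # Standard Q-learning update toward reward plus discounted future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy in every state is "step right" (toward the exit).
print(all(Q[s][1] > Q[s][0] for s in range(EXIT)))  # True
```

Deep reinforcement learning replaces the small Q table with a neural network so the same learn-by-trial loop can cope with enormous state spaces like game screens, but the underlying update is this one.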
Welcome to this edition of The Operators, a recurring Extra Crunch column, podcast and YouTube show that brings you insights and information from inside the top tech companies. Our guests are execs with operational experience at both fast-rising startups, like Brex, Calm, DocSend, and Zeus Living, and more established companies, like Airbnb, Facebook, Google, and Uber. Here they share strategies and tactics for building your first company and charting your career in tech.
Our two guests for this episode have very different backgrounds, one an experienced exec serving at a large digital media unicorn and the other a younger co-founder CEO of an upstart media business. But both are at rapidly growing companies that are at the forefront of what it means to be a media company today. Both have built successful careers and businesses in this age of social media and ready-to-go, instant news.
Rich Jarislowsky began his media career as a journalist for The Wall Street Journal, later serving as a White House correspondent and national political editor. He was instrumental in bringing The Wall Street Journal online years ago. For the past 25 years, Rich has been involved in digital news at wsj.com and Bloomberg, and is currently at SmartNews, where he is Chief Journalist and VP of Content.
Sam Parr is the co-founder and CEO of The Hustle, a beloved and rapidly growing newsletter, conference convener, and broadening digital media business.
Our discussion touched on some of the most important questions in digital media: