When this editor first met Jeremy Conrad, it was in 2014, at the 8,000-square-foot former fish factory that was home to Lemnos, a hardware-focused venture firm that Conrad had cofounded three years earlier.
Conrad — who as a mechanical engineering undergrad at MIT worked on self-driving cars, drones and satellites — was still excited about investing in hardware startups, having just closed a small new fund even while hardware was very unfashionable. One investment his team had made around that time was in Airware, a company that made subscription-based software for drones and attracted meaningful buzz and $118 million in venture funding before abruptly shutting down in 2018.
For his part, Conrad had already moved on, deciding in late 2017 that one of the many nascent teams that was camping out at Lemnos was on to a big idea relating to the future of construction. Conrad didn’t have a background in real estate per se or even an earlier interest in the industry. But the “more I learned about it — not dissimilar to when I started Lemnos — it felt like there was a gap in the market, an opportunity that people were missing,” says Conrad from his home in San Francisco, where he has hunkered down throughout the COVID-19 crisis.
Enter Quartz, Conrad’s now 1.5-year-old, 14-person company, which quietly announced $7.75 million in Series A funding earlier this month, led by Baseline Ventures, with Felicis Ventures, Lemnos and Bloomberg Beta also participating.
What it’s selling to real estate developers, project managers and construction supervisors is really two things: safety and information. Using off-the-shelf hardware components that are reassembled in San Francisco and hardened (meaning secured to reduce vulnerabilities), the company incorporates its machine-learning software into its camera-based platform, then mounts the system onto cranes at construction sites. From there, the system streams 4K live feeds of what’s happening on the ground, while also making sense of the action.
Say dozens of concrete pouring trucks are expected on a construction site. The cameras, with their persistent view, can convey through a dashboard system whether and when the trucks have arrived and how many, says Conrad. It can determine how many people are on a job site, and whether other deliveries have been made, even if not with a high degree of specificity. “We can’t say [to project managers] that 1,000 screws were delivered, but we can let them know whether the boxes they were expecting were delivered and where they were left,” he explains.
It’s an especially appealing proposition in the age of coronavirus, as the technology can convey what’s happening at a site that’s been shut down, or even how closely employees are gathered. Conrad says the technology also saves on time by providing information to those who might not otherwise be able to access it. Think of the developer who is on the 50th floor of the skyscraper he or she is building, or even the crane operator who is perhaps moving a two-ton object and has to rely on someone on the ground to deliver directions but can enjoy far more visibility with the aid of a multi-camera set-up.
Quartz, which today operates in California but is embarking on a nationwide rollout, was largely inspired by what Conrad was seeing in the world of self-driving. From sensors to self-perception systems, he knew the technologies would be even easier to deploy at construction sites, and he believed it could make them safer, too. Indeed, like cars, construction sites are astonishingly dangerous. According to the Occupational Safety and Health Administration, of the worker fatalities in private industry in 2018, more than 20% were in construction.
Conrad also saw an opportunity to take on established companies like Trimble, a 42-year-old, publicly traded, Sunnyvale, Calif.-based company that sells a portfolio of tools to the construction industry and charges top dollar for them, too. (Quartz is currently charging $2,000 per month per construction site for its series of cameras, their installation, a livestream and “lookback” data, though this may well rise as it adds additional features.)
It’s a big enough opportunity, in fact, that Quartz is not alone in chasing it. Last summer, for example, Versatile, an Israel-based startup with offices in San Francisco and New York City, raised $5.5 million in seed funding from Germany’s Robert Bosch Venture Capital and several other investors for a very similar platform, though it uses sensors mounted under the hook of a crane to provide information about what’s happening. Construction Dive, a media property that’s dedicated to the industry, highlights many other, similar and competitive startups in the space, too.
Still, Quartz has Conrad, who isn’t just any founding CEO. Not only does he have that background in engineering, but having founded a venture firm and spent years as an investor may serve him well, too. He thinks a lot about the payback period on its hardware, for example.
Unlike a lot of founders, he also says he loves the fundraising process. “I get the highest quality feedback from some of the smartest people I know, which really helps focus your vision,” says Conrad.
“When you talk with great VCs, they ask great questions. For me, it’s the best free consulting you can get.”
Aluminum and iconography are no longer enough for a product to get noticed in the marketplace. Today, great products need to be useful and deliver an almost magical experience, something that becomes an extension of life. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into that almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.
Until now, building machine learning (ML) algorithms for hardware meant complex mathematical models based on sample data, known as “training data,” in order to make predictions or decisions without being explicitly programmed to do so. And if this sounds complex and expensive to build, it is. On top of that, ML-related tasks were traditionally offloaded to the cloud, creating latency, consuming scarce power and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the edge slower, more expensive and less predictable.
But thanks to recent advances, companies are turning to TinyML as the latest trend in building product intelligence. Arduino, the company best known for open-source hardware, is making TinyML available to millions of developers. Together with Edge Impulse, it is turning ubiquitous Arduino boards — such as the Arduino Nano 33 BLE Sense and other 32-bit boards — into powerful embedded ML platforms. With this partnership, you can run powerful learning models based on artificial neural networks (ANN) that sample tiny sensors, all on low-powered microcontrollers.
Over the past year, great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor and Arm’s CMSIS-NN. But building a quality dataset, extracting the right features, and training and deploying these models is still complicated. TinyML is the missing link between edge hardware and device intelligence, and it’s now coming to fruition.
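A core trick behind making models “smaller, faster and runnable on embedded hardware” is post-training quantization: storing weights as 8-bit integers plus a scale factor instead of 32-bit floats. Here is a minimal, illustrative sketch of that idea in plain Python — real toolchains like TensorFlow Lite for Microcontrollers also quantize activations and fuse operations, so this is a simplification, not their actual implementation.

```python
# Minimal sketch of post-training weight quantization, the technique
# TinyML toolchains use to shrink models to fit on 32-bit boards.

def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [round(w / scale) for w in weights]  # each value fits in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each value now takes 1 byte instead of 4: a 4x size reduction,
# at the cost of a rounding error bounded by the scale factor.
```

The trade-off is exactly the one the article alludes to: a slightly lossier model in exchange for one that fits in a microcontroller’s few hundred kilobytes of flash.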
The race to automate vehicles on China’s roads is heating up. Didi, the Uber of China, announced this week an outsized investment of over $500 million in its freshly minted autonomous driving subsidiary. Leading the round — the single largest fundraising round in China’s autonomous driving sector — is its existing investor SoftBank, the Japanese telecom giant and startup benefactor that has also backed Uber.
As China’s largest ride-hailing provider with mountains of traffic data, Didi clearly has an upper hand in developing robotaxis, which could help address driver shortages in the long term. But it was relatively late to the field. In 2018, Didi ranked eighth in kilometers of autonomous driving tests carried out in Beijing, far behind search giant Baidu, which accounted for over 90% of the total mileage that year.
It’s since played aggressive catchup. Last August, it spun off its then three-year-old autonomous driving unit into an independent company to focus on R&D, building partnerships along the value chain, and promoting the futuristic technology to the government. The team now has a staff of 200 across its China and U.S. offices.
As an industry observer told me, “robotaxis will become a reality only when you have the necessary operational skills, technology and government support all in place.”
Didi is most famous for its operational efficiency, as facilitating safe and pleasant rides between drivers and passengers is no small feat. The company’s leadership hails from Alibaba’s legendary business-to-business sales team, also known as the “Alibaba Iron Army” for its prowess in on-the-ground operations.
The autonomous segment can also benefit from Didi’s all-encompassing reach in the mobility industry. For instance, it’s working to leverage the parent company’s smart charging networks, fleet maintenance service and insurance programs for autonomous fleets.
The fresh capital will enable Didi’s autonomous business to improve safety — an area that became a focal point of the company after two deadly accidents — and efficiency through conducting R&D and road tests. The financing will also allow it to deepen industry cooperation and accelerate the deployment of robotaxi services in China and abroad.
Over the years, Didi has turned to traditional carmakers for synergies in what it dubs the “D-Alliance,” which counts more than 31 partners. It has applied autonomous driving technology to vehicles from Lincoln, Nissan, Volvo and BYD, to name a few.
Didi has secured open-road testing licenses in three major cities in China as well as California. It said last August that it aimed to begin picking up ride-hailing passengers with autonomous cars in Shanghai in a few months’ time. It’s accumulated 300,000 kilometers of road tests in China and the U.S. as of last August.
Even though it’s a vast sector in the midst of transformation, manufacturing is often overlooked by early-stage investors. We surveyed top VCs in the industry to gather their perspectives on the challenges and opportunities facing manufacturing.
Traditionally, manufacturing companies are capital-intensive and can be slow to implement new technology and processes. The investors in the survey below acknowledge the long-standing barriers facing founders in this space, yet they see large opportunities where startups can challenge incumbents.
These investors noted that the pandemic is bringing overnight change in the manufacturing world; old rules are being rewritten in the face of worker safety, remote work and the need for increased automation. According to Eclipse Ventures founder Lior Susan, “COVID-19 has exposed the systemic vulnerabilities inherent to manufacturing and supply chain and, as such, significant opportunities for innovation. The market was lukewarm for a long time — it’s time to turn up the heat.”
What trends are you most excited about in manufacturing from an investing perspective?
Digital solutions that offer manufacturers greater agility and resilience will become major areas of focus for investors. For example, manufacturers still reliant on manual assembly were unable to build products when factories closed due to the coronavirus lockdown. While nothing would have kept production at 100%, the ability to quickly pivot and engage software-defined processes would have allowed manufacturing lines to continue building with a skeleton crew (especially important for any facility required to implement social distancing). Such systems have remote monitoring capabilities and computer vision systems to flag defects in real time and halt production if necessary.
I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups.
In this edition: a new type of laser emitter that uses metamaterials, robot-trained dogs, a breakthrough in neurological research that may advance prosthetic vision and other cutting-edge technology.
We think of lasers as going “straight” because that’s simpler than understanding their nature as groups of like-minded photons. But there are more exotic qualities for lasers beyond wavelengths and intensity, ones scientists have been trying to exploit for years. One such quality is… well, there are a couple names for it: Chirality, vorticality, spirality and so on — the quality of a beam having a corkscrew motion to it. Applying this quality effectively could improve optical data throughput speeds by an order of magnitude.
The trouble with such “twisted light” is that it’s very difficult to control and detect. Researchers have been making progress on this for a couple of years, but the last couple weeks brought some new advances.
First, from the University of the Witwatersrand, is a laser emitter that can produce twisted light of record purity and angular momentum — a measure of just how twisted it is. It’s also compact and uses metamaterials — always a plus.
The second is a pair of matched (and very multi-institutional) experiments that yielded both a transmitter that can send vortex lasers and, crucially, a receiver that can detect and classify them. It’s remarkably hard to determine the orbital angular momentum of an incoming photon, and hardware to do so is clumsy. The new detector is chip-scale and together they can use five pre-set vortex modes, potentially increasing the width of a laser-based data channel by a corresponding factor. Vorticality is definitely on the roadmap for next-generation network infrastructure, so you can expect startups in this space soon as universities spin out these projects.
A giant earthworm robot was not on the list of things I expected to see when I logged in this morning. But it’s here, and I’m here for it. Designed by a team at GE Research, the robot in question nabbed a $2.5 million award as part of DARPA’s Underminer. The program was created to foster rapid tunnel digging in military environments.
As is all the rage in robotics these days, the GE team turned to biological inspiration to execute the task. What they came up with is a large, segmented and soft robot that inches along like a giant mechanical earthworm.
The robot’s muscles are designed to mimic a “hydrostatic skeleton” — a fluid-filled structure found in invertebrates. In the case of the robot, it’s artificial muscles that do the hard work of moving it forward, with a design that makes it adaptable to different underground environments. The design offers a range of freedom of movement, along with the ability to squeeze into tight spaces.
Another key to success is building in the proper sensors that allow it to function autonomously underground. It can, after all, be difficult to remotely control a robot in such a scenario.
“Because these tunneling systems are underground, we need to be able to build in autonomous and sensing capabilities that enable our robot to move and tunnel in the right places,” project leader Deepak Trivedi said in a release. “Fortunately, we’re able to pull in controls, AI and sensing experts from across the Lab to help us integrate these new capabilities.”
The project is promising, but far from finished. The end goal is a robot that can dig a 500-meter tunnel and move at 10 cm/sec underground. The above lab video, shot at GE’s Niskayuna, N.Y. facility, is sped up 4x.
Sphero just announced that it has spun off another company. Once again, the new startup has a decidedly different focus from its parent company’s core of education-focused products. While still a robotics company at its heart, the underwhelmingly named Company Six will create robotic systems designed for first responders and other humans whose work requires them to put themselves in harm’s way.
Also tucked into the press release, almost as an afterthought, is the appointment of Paul Copioli as the new CEO of Sphero, effective immediately. The executive is an industry veteran who has worked at VEX Robotics, industrial robotics giant Fanuc and Lockheed Martin. Most recently, he was the President and COO of LittleBits when the startup was acquired by Sphero.
Copioli takes over after the company’s exit from the consumer space. Sphero has pivoted almost entirely into the educational market, with the LittleBits acquisition making up an important piece of the puzzle.
“It’s an honor to lead the Sphero team as we continue to pave the way for accessible robots, STEAM and computer science education for kids around the world,” he says in a release. “With our focus on education and our mission to inspire the creators of tomorrow, Sphero has a long-standing place in our school systems and beyond.”
Spinning Company Six off as its own independent entity is seemingly part of the new focus. The seeds of the startup were planted in former CEO Paul Berberian’s Public Safety Division within Sphero. He has since shifted to become Chairman of both companies, while former Sphero COO Jim Booth will head Company Six as COO. Got all that?
Company Six has already closed a $3 million seed round, led by Spider Capital, with Sphero investors Foundry Group and Techstars also on board. Like previous Sphero spinoff Misty, information about Company Six is minimal at the time of its announcement. The new company’s site is essentially bare. We only know it will be focused on creating robotic systems for first responders, defense workers and other dangerous jobs. The news echoes iRobot’s 2016 spinoff of its military wing, Endeavor.
By applying the experience used to bring more than 4 million robots to market at Sphero, the Company Six team believes it can create products that are not only robust and feature-rich enough for professional applications, but also affordable enough to be adopted by the majority, rather than the minority, of civilian and military personnel.
More news to follow soon, no doubt.
Chinese robots will soon be seen roaming a number of warehouse floors across North America. Geek+, a well-funded Chinese robotics company that specializes in logistics automation for factories, warehouses and supply chains, furthers its expansion in North America after striking a strategic partnership with Conveyco, an order fulfillment and distribution center system integrator with operations across the continent.
Geek+ is seizing a massive opportunity in replacing repetitive warehouse work with unmanned robots, a demand that has surged during the coronavirus outbreak as logistics services around the world face labor shortages, respond to an uptick in e-commerce sales, and undertake disease prevention methods.
The partnership will bring Geek+’s autonomous mobile robots, or AMRs, to Conveyco’s clients in retail, ecommerce, omnichannel and logistics across North America. The deal will give a substantial boost to Geek+’s overseas distribution while helping Conveyco to “improve efficiency, provide flexibility, and reduce costs associated with warehouse and logistics operations in various industries,” the partners said in a statement.
Beijing-based Geek+ so far operates 10,000 robots worldwide and employs some 800 people, with offices in China, Germany, the U.K., the U.S., Japan, Hong Kong and Singapore. Some of its clients include Nike, Decathlon, Walmart and Dell.
Since its founding in 2015, the company has raised about $390 million through five funding rounds, according to public data collected by Crunchbase, including a colossal $150 million round back in 2018 which it claimed was the largest-ever funding round for a logistics robotics startup. It counts Warburg Pincus, Vertex Ventures and GGV Capital among its list of investors.
During his Build keynote, Microsoft CEO Satya Nadella today confirmed that the company has acquired Softomotive, a software robotic automation platform. Bloomberg first reported that this acquisition was in the works earlier this month, but the two companies didn’t comment on the report at the time.
Today, Nadella noted that Softomotive would become part of Microsoft’s Power Automate platform. “We’re bringing RPA – or robotic process automation to legacy apps and services with our acquisition of Softomotive,” Nadella said.
Softomotive currently has about 9,000 customers around the world. Softomotive’s WinAutomation platform will be freely available to Power Automate users with what Microsoft calls an RPA attended license in Power Automate.
In Power Automate, Microsoft will use Softomotive’s tools to enable a number of new capabilities, including Softomotive’s low-code desktop automation solution, WinAutomation. Until now, Power Automate did not feature any desktop automation tools.
It’ll also build Softomotive’s connectors for applications from SAP, as well as legacy terminal screens and Java, into its desktop automation experience and enable parallel execution and multitasking for UI automation.
Softomotive’s other flagship application, ProcessRobot for server-based enterprise RPA development, will also find a new home in Power Automate. My guess, though, is that Microsoft mostly bought the company for its desktop automation skills.
“One of our most distinguishing characteristics, and an indelible part of our DNA, is an unswerving commitment to usability,” writes Softomotive CEO and co-founder Marios Stavropoulos. “We have always believed in the notion of citizen developers and, since less than two percent of the world population can write code, we believe the greatest potential for both process improvement and overall innovation comes from business end users. This is why we have invested so diligently in abstracting complexity away from end users and created one of the industry’s most intuitive user interfaces – so that non-technical business end users can not just do more, but also make deeper contributions by becoming professional problem solvers and innovators. We are extremely excited to pursue this vision as part of Microsoft.”
The two companies did not disclose the financial details of the transaction.
If that name sounds familiar, it’s probably because you remember that Microsoft acquired Bonsai, a company that focuses on machine teaching, back in 2018. Bonsai combined simulation tools with different machine learning techniques to build a general-purpose deep reinforcement learning platform, with a focus on industrial control systems.
It’s maybe no surprise then that Project Bonsai, too, has a similar focus on helping businesses teach and manage their autonomous machines. “With Project Bonsai, subject-matter experts can add state-of-the-art intelligence to their most dynamic physical systems and processes without needing a background in AI,” the company notes in its press materials.
“The public preview of Project Bonsai builds on top of the Bonsai acquisition and the autonomous systems private preview announcements made at Build and Ignite of last year,” a Microsoft spokesperson told me.
Interestingly, Microsoft notes that Project Bonsai is only the first block of a larger vision to help its customers build these autonomous systems. The company also stresses the advantages of machine teaching over other machine learning approaches, especially the fact that it’s less of a black box than other methods, which makes it easier for developers and engineers to debug systems that don’t work as expected.
In addition to Bonsai, Microsoft also today announced Project Moab, an open-source balancing robot that is meant to help engineers and developers learn the basics of how to build a real-world control system. The idea here is to teach the robot to keep a ball balanced on top of a platform that is held by three arms.
Potential users will be able to either 3D print the robot themselves or buy one when it goes on sale later this year. There is also a simulation, developed by MathWorks, that developers can try out immediately.
“You can very quickly take it into areas where doing it in traditional ways would not be easy, such as balancing an egg instead,” said Mark Hammond, Microsoft General Manager for Autonomous Systems. “The point of the Project Moab system is to provide that playground where engineers tackling various problems can learn how to use the tooling and simulation models. Once they understand the concepts, they can apply it to their novel use case.”
What’s been overlooked in the wake of such workflow-specific tools has been the base class of products that enterprises are using to build the core of their machine learning (ML) workflows, and the shift in focus toward automating the deployment and governance aspects of the ML workflow.
That’s where MLOps comes in, and its popularity has been fueled by the rise of core ML workflow platforms such as Boston-based DataRobot. The company has raised more than $430 million and reached a $1 billion valuation this past fall serving this very need for enterprise customers. DataRobot’s vision has been simple: enabling a range of users within enterprises, from business and IT users to data scientists, to gather data and build, test and deploy ML models quickly.
Founded in 2012, the company has quietly amassed a customer base that boasts more than a third of the Fortune 50, with triple-digit yearly growth since 2015. DataRobot’s top four industries include finance, retail, healthcare and insurance; its customers have deployed over 1.7 billion models through DataRobot’s platform. The company is not alone, with competitors like H2O.ai, which raised a $72.5 million Series D led by Goldman Sachs last August, offering a similar platform.
Why the excitement? As artificial intelligence pushed into the enterprise, the first step was to go from data to a working ML model, which started with data scientists doing this manually, but today is increasingly automated and has become known as “auto ML.” An auto-ML platform like DataRobot’s can let an enterprise user quickly auto-select features based on their data and auto-generate a number of models to see which ones work best.
As auto ML became more popular, improving the deployment phase of the ML workflow has become critical for reliability and performance — and so enters MLOps. It’s quite similar to the way that DevOps has improved the deployment of source code for applications. Companies such as DataRobot and H2O.ai, along with other startups and the major cloud providers, are intensifying their efforts on providing MLOps solutions for customers.
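At its core, the auto-ML loop described above is simple: fit several candidate models on the same data, score each on a holdout split, and keep the best. The sketch below illustrates that loop in plain Python with two toy models; a platform like DataRobot automates this at far larger scale and adds feature selection and deployment, so the candidate zoo and metric here are stand-ins, not its actual internals.

```python
# Toy auto-ML loop: fit candidates, score on holdout data, pick the winner.

def fit_mean(train):
    """Baseline model: always predict the training mean."""
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_linear(train):
    """Least-squares line through the training points."""
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

def mse(model, data):
    """Mean squared error of a fitted model on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

data = [(x, 2 * x + 1) for x in range(10)]     # y = 2x + 1, a perfect line
train, holdout = data[:7], data[7:]

candidates = {"mean": fit_mean, "linear": fit_linear}
scores = {name: mse(fit(train), holdout) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
```

On this synthetic linear data the linear candidate wins easily; the point is the structure of the loop, which MLOps tooling then wraps with monitoring and redeployment once the chosen model is in production.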
We sat down with DataRobot’s team to understand how their platform has been helping enterprises build auto-ML workflows, what MLOps is all about and what’s been driving customers to adopt MLOps practices now.
“Assembly” may sound like one of the simpler tasks in the manufacturing process, but as anyone who’s ever put together a piece of flat-pack furniture knows, it can be surprisingly (and frustratingly) complex. Invisible AI is a startup that aims to monitor people doing assembly tasks using computer vision, helping maintain safety and efficiency — without succumbing to the obvious all-seeing-eye pitfalls. A $3.6 million seed round ought to help get them going.
The company makes self-contained camera-computer units that run highly optimized computer vision algorithms to track the movements of the people they see. By comparing those movements with a set of canonical ones (someone performing the task correctly), the system can watch for mistakes or identify other problems in the workflow — missing parts, injuries and so on.
Obviously, right at the outset, this sounds like the kind of thing that results in a pitiless computer overseer that punishes workers every time they fall below an artificial and constantly rising standard — and Amazon has probably already patented that. But co-founder and CEO Eric Danziger was eager to explain that this isn’t the idea at all.
“The most important parts of this product are for the operators themselves. This is skilled labor, and they have a lot of pride in their work,” he said. “They’re the ones in the trenches doing the work, and catching and correcting mistakes is a big part of it.”
“These assembly jobs are pretty athletic and fast-paced. You have to remember the 15 steps you have to do, then move on to the next one, and that might be a totally different variation. The challenge is keeping all that in your head,” he continued. “The goal is to be a part of that loop in real time. When they’re about to move on to the next piece we can provide a double check and say, ‘Hey, we think you missed step 8.’ That can save a huge amount of pain. It might be as simple as plugging in a cable, but catching it there is huge — if it’s after the vehicle has been assembled, you’d have to tear it down again.”
This kind of body tracking exists in various forms and for various reasons; Veo Robotics, for instance, uses depth sensors to track an operator and robot’s exact positions to dynamically prevent collisions.
But the challenge at the industrial scale is less “how do we track a person’s movements in the first place” than “how can we easily deploy and apply the results of tracking a person’s movements.” After all, it does no good if the system takes a month to install and days to reprogram. So Invisible AI focused on simplicity of installation and administration, with no code needed and entirely edge-based computer vision.
“The goal was to make it as easy to deploy as possible. You buy a camera from us, with compute and everything built in. You install it in your facility, you show it a few examples of the assembly process, then you annotate them. And that’s less complicated than it sounds,” Danziger explained. “Within something like an hour they can be up and running.”
Once the camera and machine learning system is set up, it’s really not such a difficult problem for it to be working on. Tracking human movements is a fairly straightforward task for a smart camera these days, and comparing those movements to an example set is comparatively easy, as well. There’s no “creativity” involved, like trying to guess what a person is doing or match it to some huge library of gestures, as you might find in an AI dedicated to captioning video or interpreting sign language (both still very much works in progress elsewhere in the research community).
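A toy version of that step check makes the idea concrete: compare the sequence of steps the camera recognized against the canonical order and flag anything skipped. Note this is an illustrative sketch only — Invisible AI’s actual pipeline works on pose estimates from video, and the step labels below are hypothetical.

```python
# Illustrative step check: flag canonical assembly steps the operator skipped.

def missed_steps(canonical, observed):
    """Return canonical steps absent from the observed sequence, in order."""
    seen = set(observed)
    return [step for step in canonical if step not in seen]

canonical = ["pick part", "seat part", "plug cable", "bolt down", "inspect"]
observed = ["pick part", "seat part", "bolt down", "inspect"]

alerts = missed_steps(canonical, observed)
# The missing "plug cable" step gets flagged before the operator moves
# on -- a nudge now instead of a teardown after final assembly.
```

The hard part in production is upstream of this comparison: reliably turning video into those step labels on an edge device, which is where the optimized computer vision comes in.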
As for privacy and the possibility of being unnerved by being on camera constantly, that’s something that has to be addressed by the companies using this technology. There’s a distinct possibility for good, but also for evil, like pretty much any new tech.
One of Invisible’s early partners is Toyota, which has been both an early adopter and skeptic when it comes to AI and automation. Their philosophy, one that has been arrived at after some experimentation, is one of empowering expert workers. A tool like this is an opportunity to provide systematic improvement that’s based on what those workers already do.
It’s easy to imagine a version of this system where, like in Amazon’s warehouses, workers are pushed to meet nearly inhuman quotas through ruthless optimization. But Danziger said that a more likely outcome, based on anecdotes from companies he’s worked with already, is more about sourcing improvements from the workers themselves.
Having built a product day in and day out year after year, these are employees with deep and highly specific knowledge on how to do it right, and that knowledge can be difficult to pass on formally. “Hold the piece like this when you bolt it or your elbow will get in the way” is easy to say in training but not so easy to make standard practice. Invisible AI’s posture and position detection could help with that.
“We see less of a focus on cycle time for an individual, and more like, streamlining steps, avoiding repetitive stress, etc.,” Danziger said.
Importantly, this kind of capability can be offered with a code-free, compact device that requires no connection except to an intranet of some kind to send its results to. There’s no need to stream the video to the cloud for analysis; footage and metadata are both kept totally on-premise if desired.
Like any compelling new tech, the possibilities for abuse are there, but they are not — unlike an endeavor like Clearview AI — built for abuse.
“It’s a fine line. It definitely reflects the companies it’s deployed in,” Danziger said. “The companies we interact with really value their employees and want them to be as respected and engaged in the process as possible. This helps them with that.”
The $3.6 million seed round was led by 8VC, with participating investors including iRobot Corporation, K9 Ventures, Sierra Ventures and Slow Ventures.
Covariant this week announced that it has raised a $40 million Series B, led by Index Ventures. The funding brings the three-year-old Berkeley startup’s total funding up to $67 million. Co-founded by top UC Berkeley professor Pieter Abbeel, the company is dedicated to building autonomy for industrial robotics.
It’s a category that’s growing hotter than ever as more companies look toward robotics and automation as potential ways forward amid the COVID-19 pandemic. Covariant came out of stealth in January, announcing that it had already deployed its technology to real-world facilities in Europe and North America. In March, the company announced a partnership with top industrial robotics company ABB.
“When we founded Covariant, our goal was to make AI Robotics work autonomously in the real world,” Abbeel says in a release tied to the news. “Having reached that milestone, we see a huge benefit in expanding our universal AI to new use cases, customer environments and industries.”
The new funding will be used to grow Covariant’s headcount and explore additional categories for its tech.
It was not long ago that the world watched World Chess Champion Garry Kasparov lose a decisive match against a supercomputer. IBM’s Deep Blue embodied the state of the art in the late 1990s, when a machine defeating a world (human) champion at a complex game such as chess was still unheard of.
Fast-forward to today, and not only have supercomputers greatly surpassed Deep Blue in chess, they have managed to achieve superhuman performance in a string of other games, often much more complex than chess, ranging from Go to Dota to classic Atari titles.
Many of these games have been mastered just in the last five years, pointing to a pace of innovation much quicker than the two decades prior. Recently, Google released work on Agent57, which for the first time showcased superior performance over existing benchmarks across all 57 Atari 2600 games.
The class of AI algorithms underlying these feats — deep-reinforcement learning — has demonstrated the ability to learn at very high levels in constrained domains, such as the ones offered by games.
The exploits in gaming have provided valuable insights (for the research community) into what deep-reinforcement learning can and cannot do. Running these algorithms has required gargantuan compute power as well as fine-tuning of the neural networks involved in order to achieve the performance we’ve seen.
Researchers are pursuing new approaches such as multi-environment training and the use of language modeling to help enable learning across multiple domains, but there remains an open question of whether deep-reinforcement learning takes us closer to the mother lode — artificial general intelligence (AGI) — in any extensible way.
While the talk of AGI can get quite philosophical quickly, deep-reinforcement learning has already shown great performance in constrained environments, which has spurred its use in areas like robotics and healthcare, where problems often come with defined spaces and rules where the techniques can be effectively applied.
In robotics, it has shown promising results in using simulation environments to train robots for the real world, and it has performed well in training real-world robots on tasks such as picking objects and walking. It’s being applied to a number of use cases in healthcare, such as personalized medicine, chronic care management, drug discovery, and resource scheduling and allocation. Other areas seeing applications include natural language processing, computer vision, algorithmic optimization and finance.
The research community is still early in fully understanding the potential of deep-reinforcement learning, but if we are to go by how well it has done in playing games in recent years, it’s likely we’ll be seeing even more interesting breakthroughs in other areas shortly.
If you’ve ever navigated a corn maze, your brain at an abstract level has been using reinforcement learning to help you figure out the lay of the land by trial and error, ultimately leading you to find a way out.
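That trial-and-error loop can be sketched with classic tabular Q-learning on a toy one-dimensional “maze” — a hypothetical example far simpler than the deep-reinforcement-learning systems discussed above, with made-up states, rewards and hyperparameters:

```python
# Tabular Q-learning on a toy 1-D maze: states 0..4, exit at state 4.
# Actions: 0 = step left, 1 = step right. Reward 1 for reaching the
# exit, 0 otherwise. By trial and error the agent learns that moving
# right from every state is the way out.
import random

N_STATES, EXIT, ACTIONS = 5, 4, (0, 1)

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value per (state, action)
    for _ in range(episodes):
        s = 0  # every episode starts at the maze entrance
        while s != EXIT:
            # epsilon-greedy: mostly exploit, sometimes explore
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
            r = 1.0 if s2 == EXIT else 0.0
            # standard Q-learning update toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy for every non-terminal state after training.
policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(N_STATES - 1)]
```

After training, the learned policy moves right from every state, i.e. `policy` is `[1, 1, 1, 1]`. Deep reinforcement learning replaces the Q-table with a neural network so the same idea can scale to the enormous state spaces of Go, Dota or Atari.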
To this day, the Roomba remains the one successful mainstream home robot. It was a feat iRobot achieved after a decade or so of throwing concepts against the wall, ultimately achieving success through a combination of outward-facing simplicity and under-the-hood sophistication.
That other robots haven’t achieved those heights isn’t for lack of trying. The field has been littered with promising, high-profile failures. Notable recent additions to the dust bin of social home robotics history include Anki and Jibo. Both projects had interest and funding, but were ultimately unable to sustain themselves.
Founded in Los Angeles in early 2016 by USC robotics professor Maja Matarić and former iRobot CTO Paolo Pirjanian, Embodied is the latest company to swing for the fences here. The startup has backing from Intel Capital, Toyota AI Ventures, Amazon Alexa Fund, Sony Innovation Fund, JAZZ Venture Partners, Calibrate Ventures, Osage University Partners and Grishin Robotics.
Taking a page from Anki’s Cozmo playbook, the company has enlisted the help of employees from Pixar and Jim Henson to flesh out the real-world robotic character. At first glance, the results are plenty impressive.
The company has posted a series of videos featuring Moxie, highlighting an extremely expressive facial display coupled with animatronic body movements. Given its focus on childhood education/development, Embodied has also employed the help of neuroscientists and child development specialists to flesh out the product, which is set to launch in beta to start.
“We’re at a tipping point in the way we will interact with technology,” Pirjanian, the company’s CEO, said in a statement. “At Embodied, we have been rethinking and reinventing how human-machine interaction is done beyond simple verbal commands, to enable the next generation of computing, and to power a new class of machines capable of fluid social interaction. Moxie is a new type of robot that has the ability to understand and express emotions with emotive speech, believable facial expressions and body language, tapping into human psychology and neurology to create deeper bonds.”
The robot focuses on a different theme each week, including kindness, friendship, empathy and respect, personalizing content to a child over time. Moxie looks to be an impressive take on the category, though things are still at a very early stage here. Along with all of the built-in difficulties that come with launching a home robot, the device is price-prohibitive at $1,499. If you’re up for the risk, reservations are open now and Moxie will start shipping in the fall.
MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has released a video of its ongoing work using input from muscle signals to control devices. Its latest involves full, fine control of drones, using just hand and arm gestures to navigate through a series of rings. This work is impressive not just because the researchers are using biofeedback to control the devices, instead of optical or other kinds of gesture recognition, but also because of how specific the controls can be, setting up a range of different potential applications for this kind of remote tech.
This particular group of researchers has been looking at different applications for this tech, including its use in collaborative robotics for potential industrial applications. Drone piloting is another area that could have big benefits in terms of real-world use, especially once you start to imagine entire flocks of these taking flight with a pilot provided a view of what they can see via VR. That could be a great way to do site surveying for construction, for example, or remote equipment inspection of offshore platforms and other infrastructure that’s hard for people to reach.
Seamless robotic/human interaction is the ultimate goal of the team working on this tech, because just like how we intuit our own movements and ability to manipulate our environment most effectively, they believe the process should be as smooth when controlling and working with robots. Thinking and doing are essentially happening in parallel when we interact with our environment, but when we act through the extension of machines or remote tools, there’s often something lost in translation that results in a steep learning curve and the requirement of lots of training.
Cobotics, or the industry that focuses on building robots that can safely work alongside and in close collaboration with humans, would benefit greatly from advances that make the interaction between people and robotic equipment more natural, instinctive and, ultimately, safe. MIT’s research in this area could result in future industrial robotics products that require less training and programming to operate at scale.
For two weeks, Boston Dynamics’ Spot robot has been walking the halls of local hospital Brigham and Women’s. Telemedicine wasn’t generally listed as one of the primary applications for the company’s first commercial product, but Boston Dynamics is only one in a long list of tech companies that’s found itself shifting on the fly as the COVID-19 pandemic has become an all-consuming part of life.
The company says hospitals have been reaching out since early March, asking if there might be a way to incorporate its technology to help with remote health.
“Based on the outreach we received as well as the global shortage of critical personal protective equipment (PPE), we have spent the past several weeks trying to better understand hospital requirements to develop a mobile robotics solution with our Spot robot,” the company writes. “The result is a legged robot that can be deployed to support frontline staff responding to the pandemic in ad-hoc environments such as triage tents and parking lots.”
Fitted with an iPad and a two-way radio, Spot is being used as a mobile teleconferencing system, allowing doctors to check in on patients without risking the spread of the highly contagious virus. It’s a fairly simple task — and one that a number of robotics companies have already cracked.
While ultimately price-prohibitive for many healthcare facilities, Spot’s four-legged locomotion makes it possible for the robot to visit areas inaccessible to wheeled systems. The modularity also means it has the potential to accomplish further tasks. Boston Dynamics says it’s working on outfitting the robot with a system to detect vital signs like temperature, respiratory rate, pulse rate and oxygen saturation.
In the future, a UV light could also be mounted to the robot’s back to serve as a mobile disinfecting station.
For the past month, the pace of VC investment seems to have slackened in the U.S., but deal activity in China is picking up following a slowdown prompted by the COVID-19 outbreak.
According to PitchBook, “Chinese firms recorded 66 venture capital deals for the week ended March 28, the most of any week in 2020 and just below figures from the same time last year,” (although 2019 was a slow year). There is a natural lag between when deals are made and when they are announced, but still, there are some interesting trends that I couldn’t help noticing.
While many U.S.-based VCs haven’t had a chance to focus on new deals, recent investment trends coming out of China may indicate which shifts might persist after the crisis and what it could mean for the U.S. investor community.
Image Credits: PitchBook
Just like SARS in 2003 coincided with the launch of Taobao and pushed Jingdong to transform from an offline retailer into online giant JD.com, there may be dark horses waiting to break out when this pandemic is over. Paraphrasing “A Tale of Two Cities” — this is the worst time, but also maybe the best time.
In addition to obvious trends in food delivery, digital content, gaming and other sectors, I have identified five other sectors that closed the most early-stage deals in Q1 and could define the post-pandemic environment: e-commerce, edtech, robotics and advanced manufacturing, healthcare IT/life science, and AI/enterprise SaaS.