
Mirantis releases its first major update to Docker Enterprise

By Frederic Lardinois

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year, and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new is Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, as well as support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.
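The GPU integration builds on Kubernetes’ standard device-plugin mechanism: once the NVIDIA plugin is in place, workloads simply request GPUs as a named resource. The sketch below shows what that looks like via the Kubernetes Python client; it’s generic Kubernetes usage rather than anything Docker Enterprise-specific, and the pod name, image tag and namespace are illustrative assumptions.

```python
# Sketch: requesting a GPU exposed by the NVIDIA device plugin, via the Kubernetes
# Python client. Generic Kubernetes usage, not Docker Enterprise-specific tooling;
# pod name, image tag and namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:10.2-base",
                command=["nvidia-smi"],  # prints visible GPUs, then exits
                resources=client.V1ResourceRequirements(
                    # "nvidia.com/gpu" is the resource name the NVIDIA device plugin registers.
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```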

In addition to the product updates, Mirantis is launching three new support options that give customers access to 24×7 support for all support cases, enhanced SLAs for remote managed operations, designated customer success managers and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from its days as a high-flying OpenStack platform company to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90% of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters and become profitable in the last quarter, despite COVID-19.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and that they actually love the Docker Enterprise platform for its infrastructure independence, developer focus, security features and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one salesforce, with record results. So things have been extremely busy, but good and exciting.”

TechCrunch’s top 16 picks from Techstars April virtual demo days

By Jonathan Shieber

Like other accelerators, Techstars, a network of more than 40 corporate and geographically targeted startup bootcamps, has had to bring its marquee demo day events online.

Over the last two weeks of April, industry-focused accelerators working with startups building businesses around mobility technologies (broadly) and the future of the home joined programs in Abu Dhabi, Bangalore, Berlin, Boston, Boulder and Chicago to present their cohorts.

Each group had roughly 10 companies pitching businesses that ran the gamut from early-childhood education to capturing precious metals from the waste streams of mining operations. There were language companies, security companies, marketing companies and even a maker of a modular sous vide product for home chefs.

The ideas were as creative as they were varied, and while all seemed promising, about two concepts from each batch stood out above the rest.

What follows are our completely unscientific picks of the top companies that pitched at each of these virtual Techstars demo days. In late May or early June, expect to see our roundup of top picks from the next round of demo days.

Hub71

Techstars’ inaugural cohort for its accelerator run in conjunction with Abu Dhabi-based technology incubator Hub71 included a number of novel businesses spanning climate, security, retail, healthcare and property tech. Standouts in this batch included Sia Secure and Aumet (with an honorable mention for the novel bio-based plastic processing and reuse technology developer, Poliloop).

Nvidia acquires Cumulus Networks

By Frederic Lardinois

Nvidia today announced its plans to acquire Cumulus Networks, an open-source-centric company that specializes in helping enterprises optimize their data center networking stack. Cumulus offers both its own Linux distribution for network switches and tools for managing network operations. With Cumulus Express, the company also offers a hardware solution in the form of its own data center switch.

The two companies did not announce the price of the acquisition, but chances are we are talking about a considerable amount, given that Cumulus had raised $134 million since it was founded in 2010.

Mountain View-based Cumulus already had a partnership with Mellanox, which Nvidia acquired for $6.9 billion in a deal that closed only a few days ago. As Mellanox’s Amit Katz notes in today’s announcement, the two companies first met in 2013 and formed their first official partnership in 2016. Cumulus, it’s worth noting, was also an early player in the OpenStack ecosystem.

Having both Cumulus and Mellanox in its stable will give Nvidia virtually all of the tools it needs to help enterprises and cloud providers build out their high-performance computing and AI workloads in their data centers. While you may mostly think about Nvidia because of its graphics cards, the company has a sizable data center group, which delivered close to $1 billion in revenue in the last quarter, up 43 percent from a year ago. In comparison, Nvidia’s revenue from gaming was just under $1.5 billion.

“With Cumulus, NVIDIA can innovate and optimize across the entire networking stack from chips and systems to software including analytics like Cumulus NetQ, delivering great performance and value to customers,” writes Katz. “This open networking platform is extensible and allows enterprise and cloud-scale data centers full control over their operations.”

OctoML raises $15M to make optimizing ML models easier

By Frederic Lardinois

OctoML, a startup founded by the team behind the Apache TVM machine learning compiler stack project, today announced that it has raised a $15 million Series A round led by Amplify, with participation from Madrona Ventures, which led its $3.9 million seed round. The core idea behind OctoML and TVM is to use machine learning to optimize machine learning models so they can run more efficiently on different types of hardware.

“There’s been quite a bit of progress in creating machine learning models,” OctoML CEO and University of Washington professor Luis Ceze told me. “But a lot of the pain has moved to once you have a model, how do you actually make good use of it in the edge and in the clouds?”

That’s where the TVM project comes in. Launched by Ceze and his collaborators at the University of Washington’s Paul G. Allen School of Computer Science & Engineering, it’s now an Apache incubating project, and because it has seen quite a bit of usage and support from major companies like AWS, ARM, Facebook, Google, Intel, Microsoft, Nvidia, Xilinx and others, the team decided to form a commercial venture around it, which became OctoML. Today, even Amazon Alexa’s wake word detection is powered by TVM.

Ceze described TVM as a modern operating system for machine learning models. “A machine learning model is not code, it doesn’t have instructions, it has numbers that describe its statistical modeling,” he said. “There’s quite a few challenges in making it run efficiently on a given hardware platform because there’s literally billions and billions of ways in which you can map a model to specific hardware targets. Picking the right one that performs well is a significant task that typically requires human intuition.”
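For readers who haven’t used TVM, the manual version of the workflow Ceze describes looks roughly like the sketch below, which uses TVM’s documented Relay APIs to compile a trained model for a chosen hardware target. The ONNX model file, input name and shape are illustrative assumptions, and exact module paths can differ between TVM releases.

```python
# Minimal sketch: compiling a trained model for a specific hardware target with Apache TVM.
# Assumptions: an ONNX model at "model.onnx" with a single input named "input" of shape
# (1, 3, 224, 224); exact module paths vary between TVM releases.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")

# Translate the framework-level model into TVM's Relay intermediate representation.
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# The target string tells the compiler which hardware to optimize for,
# e.g. a specific x86 CPU here, or "cuda" for an NVIDIA GPU.
target = "llvm -mcpu=skylake-avx512"

# Compile: TVM picks operator implementations and schedules for this target
# and emits a deployable runtime module.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Save the compiled artifact for deployment.
lib.export_library("compiled_model.so")
```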

And that’s where OctoML and its “Octomizer” SaaS product, which it also announced today, come in. Users can upload their models to the service, which automatically optimizes, benchmarks and packages them for the hardware and output format they specify. For more advanced users, there’s also the option to add the service’s API to their CI/CD pipelines. These optimized models run significantly faster because they can fully leverage the hardware they run on, but what many businesses may care about even more is that these more efficient models also cost them less to run in the cloud, or let them get the same results from cheaper, less powerful hardware. For some use cases, TVM already results in 80x performance gains.
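As a rough illustration of the kind of benchmarking step the Octomizer automates (this is plain open-source TVM rather than OctoML’s own API, and the module and file names are assumptions carried over from the sketch above), timing a compiled module by hand looks something like this:

```python
# Rough sketch of hand-benchmarking a TVM-compiled model; not the Octomizer API.
# Assumes the "compiled_model.so" artifact and input name/shape from the previous sketch.
# The executor module is called graph_runtime in older TVM releases.
import numpy as np
import tvm
from tvm.contrib import graph_executor

dev = tvm.cpu(0)
lib = tvm.runtime.load_module("compiled_model.so")
module = graph_executor.GraphModule(lib["default"](dev))

# Feed a dummy input and time repeated runs on the target device.
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
timer = module.module.time_evaluator("run", dev, number=10, repeat=3)
print("mean inference time: %.2f ms" % (timer().mean * 1000))
```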

Currently, the OctoML team consists of about 20 engineers. With this new funding, the company plans to expand its team. Those hires will mostly be engineers, but Ceze also stressed that he wants to hire an evangelist, which makes sense, given the company’s open-source heritage. He also noted that while the Octomizer is a good start, the real goal here is to build a more fully featured MLOps platform. “OctoML’s mission is to build the world’s best platform that automates MLOps,” he said.

ImmunityBio and Microsoft team up to precisely model how key COVID-19 protein leads to infection

By Darrell Etherington

An undertaking that involved combining massive amounts of graphics processing power could provide key leverage for researchers looking to develop potential cures and treatments for the novel coronavirus behind the current global pandemic. Immunotherapy startup ImmunityBio is working with Microsoft Azure to deliver a combined 24 petaflops of GPU computing capability for the purpose of modeling, in a very high degree of detail, the structure of the so-called “spike protein” that allows the SARS-CoV-2 virus that causes COVID-19 to enter human cells.

This new partnership means the companies were able to produce a model of the spike protein within days, instead of the months it would’ve taken previously. That time savings means the model can get into the virtual hands of researchers and scientists working on potential vaccines and treatments even faster, and that they’ll be able to gear their work towards a detailed replication of the very protein they’re trying to prevent from attaching to the ACE2 receptor on human cells, which is what sets up the viral infection process to begin with.

The main way that scientists working on treatments look to prevent or minimize the spread of the virus within the body is to block the attachment of the virus to this receptor, and the simplest way to do that is to ensure that the spike protein can’t connect with the receptor it targets. Naturally occurring antibodies in patients who have recovered from the novel coronavirus do exactly that, and the vaccines under development are focused on doing the same thing pre-emptively, while many treatments are looking at lessening the ability of the virus to latch on to new cells as it replicates within the body.

In practical terms, the partnership combined 1,250 NVIDIA V100 Tensor Core GPUs, designed for use in machine learning applications, from a Microsoft Azure cluster with ImmunityBio’s existing 320-GPU cluster, which is tuned specifically for molecular modeling work. The results of the collaboration will now be made available to researchers working on COVID-19 mitigation and prevention therapies, in the hope that it will enable them to work more quickly and effectively towards a solution.
