FreshRSS

Today — April 22nd 2021

First findings with Apple’s new AirTag location devices

By Matthew Panzarino

I’ve been playing around with Apple’s new AirTag location devices for a few hours now and they seem to work pretty much as advertised. The setup flow is simple and clean, taking clear inspiration from the one Apple developed for AirPods. The precision finding feature enabled by the U1 chip works as a solid example of utility-driven augmented reality, popping up a virtual arrow and other visual identifiers on the screen to make finding a tag quicker.

The basic way that AirTags work, if you’re not familiar, is that they use Bluetooth beaconing technology to announce their presence to any nearby devices running iOS 14.5 and above. These quiet pings are encrypted and invisible (usually) to any passerby, especially if the tags are with their owners. This means that no one ever knows what device actually ‘located’ your AirTag, not even Apple.
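For the technically curious, here is a rough idea of what that passive beaconing looks like from the outside: any Bluetooth listener can see an advertisement carrying Apple’s manufacturer ID, but the rotating, encrypted payload reveals nothing about the tag or its owner. The sketch below uses the cross-platform bleak library and treats the 0x12 “offline finding” frame type as an assumption; it is an illustration of BLE advertisement scanning, not Apple’s Find My protocol.

```python
# Hypothetical sketch: passively observe nearby BLE advertisements and flag
# frames carrying Apple's manufacturer ID (0x004C), the company identifier
# under which Find My beacons advertise. The payload is rotating and
# encrypted, so an observer learns nothing about the owner.
import asyncio
from bleak import BleakScanner  # pip install bleak

APPLE_COMPANY_ID = 0x004C  # Bluetooth SIG company identifier for Apple

def on_advertisement(device, adv):
    payload = adv.manufacturer_data.get(APPLE_COMPANY_ID)
    if payload:
        # 0x12 is the reported frame type for "offline finding" beacons;
        # treat this as an assumption, not a documented Apple API.
        frame_type = payload[0]
        print(f"{device.address}  rssi={adv.rssi}  type=0x{frame_type:02x}  "
              f"payload={payload.hex()}")

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(10)   # listen for ten seconds
    await scanner.stop()

asyncio.run(main())
```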

‘With you’, by the way, means in relative proximity to a device signed in to the iCloud account that the AirTags are registered to. Bluetooth range is typically around 40 feet, depending on local conditions and signal bounce.

In my very limited testing so far, AirTag location range fits in with that basic Bluetooth expectation. That means it can be foiled by obstructions, walls or an unflattering signal bounce. It often took 30 seconds or more to get an initial location from an AirTag in another room, for instance. Once the location was received, however, the instructions to locate the device seemed to update quickly and were extremely accurate down to a few inches.

The AirTags run for a year on a standard CR2032 battery that’s user replaceable. They offer some water resistance including submersion for some time. There are a host of accessories that seem nicely designed like leather straps for bags, luggage tags and key rings.

So far so good. More testing to come. 

Some protections

As with anything to do with location, security and privacy are top of mind for AirTags, and Apple has some protections in place.

You cannot share AirTags — they are meant to be owned by one person. The only special privilege offered to people in your iCloud Family Sharing group is that they can silence the ‘unknown AirTag nearby’ alerts indefinitely, which makes AirTags useful for things like shared sets of keys or maybe even a family pet. AirTags also will not show up in your family’s Find My section the way other iOS devices might; there is now a separate section within the app just for ‘Items’, including those with Find My functionality built in.

The other privacy features include a ‘warning’ that will trigger after some time if a tag is in your proximity but NOT in the proximity of its owner (i.e., traveling with you, perhaps in a bag or car). Your choices are then to make the tag play a sound to locate it, look at its information including its serial number, or disable it by removing its battery.

Any AirTag that has been away from its owner for a while — this time is variable and Apple will tweak it over time as it observes how AirTags work — will start playing a sound whenever it is moved. This will alert people to its presence. 

You can, of course, also place an AirTag into Lost Mode, offering a choice to share personal information with anyone who locates it as it plays an alert sound. Anyone with any smart device with NFC, Android included, can tap the device to see a webpage with information that you choose to share. Or just a serial number if you do not choose to do so. 

This scenario addresses what happens if you don’t have an iOS device to alert you to a foreign AirTag in your presence: the tag will eventually play a sound even if it is not in Lost Mode, and the owner has no control over that.

It’s clear that Apple has thought through many of the edge cases, but some could still crop up as the product rolls out; we’ll have to see.

Apple has some distinct market advantages here:

  • Nearly a billion devices out in the world that can help to locate an AirTag.
  • A built-in U1 wideband chip that communicates with a similar U1 chip in iPhones to enable super precise (down to inches) location.
  • A bunch of privacy features that don’t appear on competing tags.

It’s important to note that Apple has announced it is developing a specification for chipset makers that will let third-party devices with ultra-wideband radios access the U1 chip onboard iPhones ‘later this spring’. This should approximate the Precision Finding feature’s utility in accessories that don’t have the advantage of a built-in U1 chip like the AirTags do. And, of course, Apple has opened up the entire Find My mesh network to third-party devices from Belkin, Chipolo and VanMoof that want to offer a similar basic finding function as offered by AirTags. Tile has announced plans to offer a UWB version of its tracker as well, even as it testified in Congress yesterday that Apple’s advantages made its entry into this market unfair.

It will be interesting to see these play out once AirTags are out getting lost in the wild. I have had them for under 12 hours so I’ve not been able to test edge cases, general utility in public spaces or anything like that. 

The devices go on sale on April 23rd.

Window Snyder’s new startup Thistle Technologies raises $2.5M seed to secure IoT devices

By Zack Whittaker

The Internet of Things has a security problem. The past decade has seen wave after wave of new internet-connected devices, from sensors through to webcams and smart home tech, often manufactured in bulk but with little — if any — consideration to security. Worse, many device manufacturers make no effort to fix security flaws, while others simply leave out the software update mechanisms needed to deliver patches altogether.

That sets up an entire swath of insecure and unpatchable devices to fail, destined to be thrown out when they break down or are inevitably hacked.

Security veteran Window Snyder thinks there is a better way. Her new startup, Thistle Technologies, is backed with $2.5 million in seed funding from True Ventures with the goal of helping IoT manufacturers reliably and securely deliver software updates to their devices.

Snyder founded Thistle last year, and named it after the flowering plant whose sharp prickles deter animals from eating it. “It’s a defense mechanism,” Snyder told TechCrunch, a fitting name for a defensive technology company. The startup aims to help device manufacturers that lack the personnel or resources to integrate update mechanisms into their devices’ software, so those devices can receive security updates and better defend against security threats.
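The article doesn’t describe Thistle’s implementation, but the core of any safe update mechanism is verifying that a firmware image really came from the vendor before installing it. Here is a minimal, hypothetical sketch of that check using an Ed25519 signature; the key, paths and install step are placeholders, not Thistle’s design.

```python
# Minimal sketch (not Thistle's actual design): verify a detached Ed25519
# signature over a firmware image before installing it. Vendor public key,
# file paths and the install step are hypothetical placeholders.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

VENDOR_PUBKEY = bytes.fromhex(
    "9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60"
)  # example 32-byte key, not a real vendor key

def verify_and_install(image_path: str, sig_path: str) -> bool:
    image = open(image_path, "rb").read()
    signature = open(sig_path, "rb").read()
    key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY)
    try:
        key.verify(signature, image)          # raises on any mismatch
    except InvalidSignature:
        print("update rejected: bad signature")
        return False
    # hand the verified image to the platform's flashing routine here
    print("signature OK, applying update")
    return True
```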

“We’re building the means so that they don’t have to do it themselves. They want to spend the time building customer-facing features anyway,” said Snyder. Prior to founding Thistle, Snyder worked in senior cybersecurity positions at Apple, Intel, and Microsoft, and also served as chief security officer at Mozilla, Square, and Fastly.

Thistle lands on the security scene at a time when IoT needs it most. Botnet operators are known to scan the internet for devices with weak default passwords and hijack their internet connections to pummel victims with floods of internet traffic, knocking entire websites and networks offline. In 2016, a record-breaking distributed denial-of-service attack launched by the Mirai botnet on internet infrastructure giant Dyn knocked some of the biggest websites — Shopify, SoundCloud, Spotify, Twitter — offline for hours. Mirai had ensnared thousands of IoT devices into its network at the time of the attack.

Other malicious hackers target IoT devices as a way to get a foot into a victim’s network, allowing them to launch attacks or plant malware from the inside.

Since device manufacturers have done little to solve these security problems themselves, lawmakers are looking at legislation to curb some of the more egregious mistakes, like using default — and often unchangeable — passwords and selling devices with no way to deliver security updates.

California paved the way after passing an IoT security law in 2018, with the U.K. following shortly after in 2019. The U.S. has no federal law governing basic IoT security standards.

Snyder said the push to introduce IoT cybersecurity laws could be “an easy way for folks to get into compliance” without having to hire fleets of security engineers. Having an update mechanism in place also helps to keep IoT devices around for longer — potentially for years longer — simply by being able to push fixes and new features.

“To build the infrastructure that’s going to allow you to continue to make those devices resilient and deliver new functionality through software, that’s an incredible opportunity for these device manufacturers. And so I’m building a security infrastructure company to support those security needs,” she said.

With the seed round in the bank, Snyder said the company is focused on hiring device and back-end engineers, product managers, and building new partnerships with device manufacturers.

Phil Black, co-founder of True Ventures — Thistle’s seed round investor — described the company as “an astute and natural next step in security technologies.” He added: “Window has so many of the qualities we look for in founders. She has deep domain expertise, is highly respected within the security community, and she’s driven by a deep passion to evolve her industry.”

As UiPath closes above its final private valuation, CFO Ashim Gupta discusses his company’s path to market

By Alex Wilhelm

After an upward revision, UiPath priced its IPO last night at $56 per share, a few dollars above its raised target range. The above-range price meant that the unicorn put more capital into its books through its public offering.

For a company in a market as competitive as robotic process automation (RPA), the funds are welcome. In fact, RPA has been top of mind for startups and established companies alike over the last year or so. In that time frame, enterprise stalwarts like SAP, Microsoft, IBM and ServiceNow have been buying smaller RPA startups and building their own, all in an effort to muscle into an increasingly lucrative market.

In June 2019, Gartner reported that RPA was the fastest-growing area in enterprise software, and while the growth has slowed down since, the sector is still attracting attention. UiPath, which Gartner found was the market leader, has been riding that wave, and today’s capital influx should help the company maintain its market position.

It’s worth noting that when the company had its last private funding round in February, it brought home $750 million at an impressive valuation of $35 billion. But as TechCrunch noted over the course of its pivot to the public markets, that round valued the company above its final IPO price. As a result, this week’s $56-per-share public offer wound up being something of a modest down-round IPO to UiPath’s final private valuation.

Then, a broader set of public traders got hold of its stock and bid its shares higher. The former unicorn’s shares closed their first day’s trading at precisely $69, above the per-share price at which the company closed its final private round.

So despite a somewhat circuitous route, UiPath closed its first day as a public company worth more than it was in its Series F round — when it sold 12,043,202 shares at $62.27576 apiece, per SEC filings. More simply, UiPath closed today worth more per share than it was in February.
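A quick sanity check of those figures, using only the numbers reported above:

```python
# Quick check of the figures above (all numbers from the article).
shares = 12_043_202
series_f_price = 62.27576
print(round(shares * series_f_price / 1e6))  # ~750, i.e. the ~$750M February round
print(69 > series_f_price)                   # True: the $69 close beat the Series F price
```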

How you might value the company, whether you prefer a simple or fully-diluted share count, is somewhat immaterial at this juncture. UiPath had a good day.

While it’s hard to know what the company might do with the proceeds, chances are it will continue to try to expand its platform beyond pure RPA, which could become market-limited over time as companies look at other, more modern approaches to automation. By adding additional automation capabilities — organically or via acquisitions — the company can begin covering broader parts of its market.

TechCrunch spoke with UiPath CFO Ashim Gupta today, curious about the company’s choice of a traditional IPO, its general avoidance of adjusted metrics in its SEC filings, and the IPO market’s current temperature. The final question was on our minds, as some companies have pulled their public listings in the wake of a market described as “challenging”.

Why did UiPath not direct list after its huge February raise?

Before yesterday

The IPO market is sending us mixed messages

By Alex Wilhelm

If you only stayed up to date with the Coinbase direct listing this week, you’re forgiven. It was, after all, one heck of a flotation.

But underneath the cryptocurrency exchange’s public debut, other IPO news that matters did happen this week. And the news adds up to a somewhat muddled picture of the current IPO market.

To cap off the week, let’s run through IPO news from UiPath, Coinbase, Grab, AppLovin and Zenvia. The aggregate dataset should help you form your own perspective about where today’s IPO markets really are in terms of warmth for the often-unprofitable unicorns of the world.

Recall that we’re in the midst of a slightly more turbulent IPO window than we saw during the last quarter. After a stretch in which seemingly every company’s IPO priced above range and then charged higher on its opening day, several companies pulled their offerings as the second quarter started. It was a surprise.

Since then we’ve seen Compass go public, though not with quite the performance it might have anticipated, and then, this week, much has happened.

What follows is a mini-digest of IPO news from the week, tagged with our best read of just how bullish (or not) the happening really was:

Should Dell have pursued a more aggressive debt-reduction move with VMware?

By Ron Miller

When Dell announced it was spinning out VMware yesterday, the move itself wasn’t surprising; there had been public speculation for some time. But Dell could have gone a number of ways in this deal, despite its choice to spin VMware out as a separate company, with a dividend paid to shareholders, instead of an outright sale.

The dividend route, which involves a payment to shareholders between $11.5 billion and $12 billion, has the advantage of being tax-free (or at least that’s what Dell hopes as it petitions the IRS). For Dell, which owns 81% of VMware, the dividend translates to somewhere between $9.3 billion and $9.7 billion in cash, which the company plans to use to pay down a portion of the huge debt it still holds from its $58 billion EMC purchase in 2016.
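The arithmetic behind that cash estimate is straightforward, using the figures above:

```python
# 81% ownership applied to the $11.5B-$12B dividend range from the article.
ownership = 0.81
for dividend_bn in (11.5, 12.0):
    print(round(ownership * dividend_bn, 1))  # 9.3 and 9.7 (billions)
```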

VMware was the crown jewel in that transaction, giving Dell an inroad to the cloud it had lacked prior to the deal. For context, VMware popularized the notion of the virtual machine, a concept that led to the development of cloud computing as we know it today. It has since expanded well beyond that, giving Dell a solid foothold in cloud-native computing.

Dell hopes to have its cake and eat it too with this deal: It generates a large slug of cash to pay down its own debt while securing a five-year commercial deal that should keep the two companies closely aligned. Dell CEO Michael Dell will remain chairman of the VMware board, which should help smooth the post-spinout relationship.

But could Dell have extracted more cash out of the deal?

Doing what’s best for everyone

Patrick Moorhead, principal analyst at Moor Insights & Strategy, says that beyond the cash transaction, the deal provides a way for the companies to continue working closely together with the least amount of disruption.

“In the end, this move is more about maximizing the Dell and VMware stock price [in a way that] doesn’t impact customers, ISVs or the channel. Wall Street wasn’t valuing the two companies together nearly as [strongly] as I believe it will as separate entities,” Moorhead said.

Billion-dollar B2B: cloud-first enterprise tech behemoths have massive potential

By Annie Siebert
Contributor Dharmesh Thakker is a general partner at Battery Ventures and a former managing director at Intel Capital.

More than half a decade ago, my Battery Ventures partner Neeraj Agrawal penned a widely read post offering advice for enterprise-software companies hoping to reach $100 million in annual recurring revenue.

His playbook, dubbed “T2D3” — for “triple, triple, double, double, double,” referring to the stages at which a software company’s revenue should multiply — helped many high-growth startups index their growth. It also highlighted the broader explosion in industry value creation stemming from the transition of on-premise software to the cloud.

Fast forward to today, and many of T2D3’s insights are still relevant. But now it’s time to update T2D3 to account for some of the tectonic changes shaping a broader universe of B2B tech — and pushing companies to grow at rates we’ve never seen before.

One of the biggest factors driving billion-dollar B2Bs is a simple but important shift in how organizations buy enterprise technology today.

I call this new paradigm “billion-dollar B2B.” It refers to the forces shaping a new class of cloud-first, enterprise-tech behemoths with the potential to reach $1 billion in ARR — and achieve market capitalizations in excess of $50 billion or even $100 billion.

In the past several years, we’ve seen a pioneering group of B2B standouts — Twilio, Shopify, Atlassian, Okta, Coupa*, MongoDB and Zscaler, for example — approach or exceed the $1 billion revenue mark and see their market capitalizations surge 10 times or more from their IPOs to the present day (as of March 31), according to CapIQ data.

More recently, iconic companies like data giant Snowflake and video-conferencing mainstay Zoom came out of the IPO gate at even higher valuations. Zoom, with 2020 revenue of just under $883 million, is now worth close to $100 billion, per CapIQ data.

Graphic showing market cap at IPO and market cap today of various companies. Image Credits: Battery Ventures via FactSet; market data as of April 3, 2021.

In the wings are other B2B super-unicorns like Databricks* and UiPath, which have each raised private financing rounds at valuations of more than $20 billion, per public reports, a level unprecedented in the software industry.

C2i, a genomics SaaS product to detect traces of cancer, raises $100M Series B

By Marcella McCarthy

If you or a loved one has ever undergone a tumor removal as part of cancer treatment, you’re likely familiar with the period of uncertainty and fear that follows. Will the cancer return, and if so, will the doctors catch it at an early enough stage? C2i Genomics has developed software that’s 100x more sensitive in detecting residual disease, and investors are pouncing on the potential. Today, C2i announced a $100 million Series B led by Casdin Capital. 

“The biggest question in cancer treatment is, ‘Is it working?’ Some patients are getting treatment they don’t benefit from and they are suffering the side effects while other patients are not getting the treatment they need,” said Asaf Zviran, co-founder and CEO of C2i Genomics in an interview.

Historically, the main approach to cancer detection post-surgery has been the use of MRI or X-ray, but neither method is very accurate until the cancer has progressed to a certain point. As a result, a patient’s cancer may return, but it may be a while before doctors are able to catch it.

Using C2i’s technology, doctors can order a liquid biopsy, which is essentially a blood draw that looks for DNA. From there they can sequence the entire genome and upload it to the C2i platform. The software then looks at the sequence and identifies faint patterns that indicate the presence of cancer, and can indicate whether it is growing or shrinking.
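C2i hasn’t published its algorithm here, but the general idea behind whole-genome residual-disease detection can be illustrated with a toy calculation: evidence that is far too faint at any single position becomes significant when aggregated across thousands of tracked mutations. The numbers and error rate below are hypothetical and are not C2i’s method.

```python
# Toy illustration of the aggregation idea (not C2i's method): per-site
# evidence of tumor-derived DNA is noise-level on its own, but summed across
# thousands of tracked mutations a real signal stands out from sequencing error.
import math

def residual_disease_score(site_alt_reads, site_depths, error_rate=1e-5):
    """Z-like score comparing observed variant reads to the background error
    expected if no tumor DNA were present (error_rate is illustrative)."""
    observed = sum(site_alt_reads)
    expected = error_rate * sum(site_depths)
    if expected == 0:
        return 0.0
    return (observed - expected) / math.sqrt(expected)  # Poisson approximation

# hypothetical example: 5,000 tracked mutations sequenced to 1,000x depth
alt_reads = [1] * 300 + [0] * 4700   # 300 sites show a single variant read
depths = [1000] * 5000
print(residual_disease_score(alt_reads, depths))  # large positive => residual signal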

“C2i is basically providing the software that allows the detection and monitoring of cancer to a global scale. Every lab with a sequencing machine can process samples, upload to the C2i platform and provide detection and monitoring to the patient,” Zviran told TechCrunch.

C2i Genomics’ solution is based on research performed at the New York Genome Center (NYGC) and Weill Cornell Medicine (WCM) by Dr. Zviran, along with Dr. Dan Landau, a faculty member at the NYGC and assistant professor of medicine at WCM, who serves as scientific co-founder and member of C2i’s scientific advisory board. The research and findings have been published in the medical journal Nature Medicine.

While the product is not FDA-approved yet, it’s already being used in clinical research and drug development research at NYU Langone Health, the National Cancer Center of Singapore, Aarhus University Hospital and Lausanne University Hospital.

When and if approved, New York-based C2i has the potential to drastically change cancer treatment, including in the areas of organ preservation. For example, some people have functional organs, such as the bladder or rectum, removed to prevent cancer from returning, leaving them disabled. But what if the unnecessary surgeries could be avoided? That’s one goal that Zviran and his team have their minds set on achieving.

For Zviran, this story is personal. 

“I started my career very far from cancer and biology, and at the age of 28 I was diagnosed with cancer and I went for surgery and radiation. My father and then both of my in-laws were also diagnosed, and they didn’t survive,” he said.

Zviran, who today has a PhD in molecular biology, was previously an engineer with the Israel Defense Forces and some private companies. “As an engineer, looking into this experience, it was very alarming to me about the uncertainty on both the patients’ and physicians’ side,” he said.

This round of funding will be used to accelerate clinical development and commercialization of the company’s C2-Intelligence Platform. Other investors that participated in the round include NFX, Duquesne Family Office, Section 32 (Singapore), iGlobe Partners and Driehaus Capital.

Cado Security locks in $10M for its cloud-native digital forensics platform

By Ingrid Lunden

As computing systems become bigger and more complex, forensics has become an increasingly important part of how organizations secure them. As the recent SolarWinds breach has shown, it’s not always just a matter of being able to identify data loss, or prevent hackers from coming in in the first place. In cases where a network has already been breached, running a thorough investigation is often the only way to identify what happened, whether a breach is still active, and whether a malicious hacker can strike again.

As a sign of this growing priority, a startup called Cado Security, which has built forensics technology native to the cloud to run those investigations, is announcing $10 million in funding to expand its business.

Cado’s tools today are used directly by organizations, but also by security companies like Redacted — a somewhat under-the-radar security startup in San Francisco co-founded by Facebook’s former chief security officer Max Kelly and John Hering, the co-founder of Lookout. It uses Cado to carry out the forensics part of its work.

The funding for London-based Cado is being led by Blossom Capital, with existing investors Ten Eleven Ventures also participating, among others. As another signal of demand, this Series A is coming only six months after Cado raised its seed round.

The task of securing data on digital networks has grown increasingly complex over the years: not only are there more devices, more data and a wider range of configurations and uses around it, but malicious hackers have become increasingly sophisticated in their approaches to needling inside networks and doing their dirty work.

The move to the cloud has also been a major factor. While it has helped a wave of organizations expand and run much bigger computing processes as part of their business operations, it has also increased the so-called attack surface and made investigations much more complicated, not least because a lot of organizations run elastic processes, scaling their capacity up and down: when something is scaled down, logs of previous activity essentially disappear.

Cado’s Response product — which works proactively on a network and all of its activity after it’s installed — is built to work across cloud, on-premise and hybrid environments. Currently it’s available for AWS EC2 deployments and Docker, Kubernetes, OpenShift and AWS Fargate container systems, and the plan is to expand to Azure very soon. (Google Cloud Platform is less of a priority at the moment, CEO James Campbell said, since it rarely comes up with current and potential customers.)

Campbell co-founded Cado with Christopher Doman (the CTO) last April, with the concept for the company coming out of their experiences working on security services together at PwC, and before that, respectively, for government organizations (Campbell, in Australia) and AlienVault (the security firm acquired by AT&T). In all of those roles, one persistent issue the two kept encountering was inadequate forensics data, which is essential for tracking the most complex breaches.

A lot of legacy forensics tools, in particular those tackling the trove of data in the cloud, were based on “processing data with open source and pulling together analysis in spreadsheets,” Campbell said. “There is a need to modernize this space for the cloud era.”

In a typical breach, it can take up to a month to run a thorough investigation to figure out what is going on, since, as Doman describes it, forensics looks at “every part of the disk, the files in a binary system. You just can’t find what you need without going to that level, those logs. We would look at the whole thing.”

However, that posed a major problem. “Having a month with a hacker running around before you can do something about it is just not acceptable,” Campbell added. The result, typically, is that other forensics tools investigate only about 5% of an organization’s data.

The solution — for which Cado has filed patents, the pair said — has essentially involved building big data tools that can automate and speed up the very labor intensive process of looking through activity logs to figure out what looks unusual and to find patterns within all the ones and zeros.
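Cado’s patented approach isn’t public, but the basic idea of automating that triage can be sketched very simply: score every event in an activity log by how rare its pattern is, so analysts see the oddest entries first. The field names and file below are hypothetical.

```python
# Crude illustration of the idea (not Cado's implementation): rank events in
# an activity log by how rare their (user, action) combination is, so an
# analyst looks at the oddest entries first instead of reading everything.
from collections import Counter
import json

def rare_events(log_lines, top_n=10):
    events = [json.loads(line) for line in log_lines]
    counts = Counter((e["user"], e["action"]) for e in events)
    total = sum(counts.values())
    # score = frequency of the pattern; rarer combinations float to the top
    scored = sorted(events, key=lambda e: counts[(e["user"], e["action"])] / total)
    return scored[:top_n]

# hypothetical usage with newline-delimited JSON audit records
with open("activity.ndjson") as fh:
    for event in rare_events(fh):
        print(event)
```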

“That gives security teams more room to focus on what the hacker is getting up to, the remediation aspect,” Campbell explained.

Arguably, if there were better, faster tracking and investigation technology in place, something like SolarWinds could have been better mitigated.

The plan for the company is to bring in more integrations to cover more kinds of systems, and go beyond deployments that you’d generally classify as “infrastructure as a service.”

“Over the past year, enterprises have compressed their cloud adoption timelines while protecting the applications that enable their remote workforces,” said Imran Ghory, partner at Blossom Capital, in a statement. “Yet as high-profile breaches like SolarWinds illustrate, the complexity of cloud environments makes rapid investigation and response extremely difficult since security analysts typically are not trained as cloud experts. Cado Security solves for this with an elegant solution that automates time-consuming tasks like capturing forensically sound cloud data so security teams can move faster and more efficiently. The opportunity to help Cado Security scale rapidly is a terrific one for Blossom Capital.”

Cloud kitchen startup JustKitchen to go public on the TSX Venture Exchange

By Catherine Shu

JustKitchen, a cloud kitchen startup, will start trading on the Toronto Stock Exchange (TSX) Venture Exchange on Thursday morning. It is doing a direct listing of its common shares, having already raised $8 million at a $30 million valuation.

The company says this makes it one of the first—if not the first—cloud kitchen companies to go public in North America. While JustKitchen launched operations last year in Taiwan, it is incorporated in Canada, with plans to expand into Hong Kong, Singapore, the Philippines and the United States. TSX Venture is a board on the Toronto Stock Exchange for emerging companies, including startups, that can move to the main board once they reach certain thresholds depending on industry.

“It’s a really convenient way to get into the market and with the ghost kitchen industry in particular, it’s early stage and there’s a lot of runway,” co-founder and chief executive officer Jason Chen told TechCrunch. “We felt there really was a need to get going as quickly as we could and really get out into the market.”

Participants in JustKitchen’s IPO rounds included returning investor SparkLabs Taipei (JustKitchen took part in its accelerator program last year), investment institutions and retail clients from Toronto. More than half of JustKitchen’s issued and outstanding shares are owned by its executives, board directors and employees, Chen said.

One of the reasons JustKitchen decided to list on TSX Venture Exchange is Chen’s close ties to the Canadian capital markets, where he worked as an investment banker before moving to Taiwan to launch the startup. A couple of JustKitchen’s board members are also active in the Canadian capital markets, including Darren Devine, a member of TSX Venture Exchange’s Local Advisory Committee.

These factors made listing on the board a natural choice for JustKitchen, Chen told TechCrunch. Other reasons included the ability to automatically graduate to the main TSX board once companies pass certain thresholds, including market cap and net profitability, and the ease of doing dual listings in other countries. JustKitchen is also preparing to list its common shares on the OTCQB exchange in the U.S. and the Frankfurt Stock Exchange in Germany.

Dell is spinning out VMware in a deal expected to generate over $9B for the company

By Ron Miller

Dell announced this afternoon that it’s spinning out VMware, a move that had been suspected for some time. Dell, which acquired VMware as part of the massive $67 billion EMC acquisition in 2015, owns approximately 80% of the stock, and the company is expected to receive between $9.3 billion and $9.7 billion when the deal closes later this year.

Even when it was part of EMC, VMware had a special status in that it operated as a separate entity with its own executive team and board of directors, and its stock has been sold separately as well.

“Both companies will remain important partners, providing Dell Technologies with a differentiated advantage in how we bring solutions to customers. At the same time, Dell Technologies will continue to modernize its core infrastructure and PC businesses and embrace new opportunities through an open ecosystem to grow in hybrid and private cloud, edge and telecom,” Dell CEO Michael Dell said in a statement.

While there is a lot of CEO speak in that statement, it appears to mean that the move is mostly administrative, as the companies will continue to work closely together even after the spinoff is official. Michael Dell will remain chairman of both companies. What’s more, the company plans to use the cash proceeds from the deal to help pay down the massive debt it still has left over from the EMC deal.

This is a breaking story. We will have more soon.

Grocery startup Mercato spilled years of data, but didn’t tell its customers

By Zack Whittaker

A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.

A person with knowledge of the incident told TechCrunch that it happened in January, after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.

The company fixed the data spill, but has not yet alerted its customers.

Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.

TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names and email addresses, home addresses, and order details. Each record also included the IP address of the device the customer used to place the order.

The data set also included the personal data and order details of company executives.

It’s not clear how the security lapse happened since storage buckets on Amazon’s cloud are private by default, or when the company learned of the exposure.
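For context, checking whether a bucket has drifted into that state takes only a small amount of code. The sketch below (bucket name hypothetical, AWS credentials assumed) flags a bucket whose ACL grants access to all users and which has no public-access block in place:

```python
# Hedged sketch: audit a bucket for the kind of misconfiguration described
# above. Bucket name is hypothetical; requires AWS credentials with
# s3:GetBucketAcl / s3:GetBucketPublicAccessBlock permissions.
import boto3
from botocore.exceptions import ClientError

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def bucket_is_exposed(bucket: str) -> bool:
    s3 = boto3.client("s3")
    try:
        block = s3.get_public_access_block(Bucket=bucket)
        cfg = block["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError:
        fully_blocked = False  # no public-access-block configured at all
    acl = s3.get_bucket_acl(Bucket=bucket)
    public_grant = any(
        g.get("Grantee", {}).get("URI") == ALL_USERS for g in acl["Grants"]
    )
    return public_grant and not fully_blocked

print(bucket_is_exposed("example-orders-backup"))  # hypothetical bucket name
```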

Companies are required to disclose data breaches or security lapses to state attorneys general, but no notices have been published in states where they are required by law, such as California. The data set contained records on more than 1,800 California residents, more than three times the number needed to trigger mandatory disclosure under the state’s data breach notification laws.

It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.

In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.

“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.


Know something, say something. Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more

PlexTrac raises $10M Series A round for its collaboration-centric security platform

By Frederic Lardinois

PlexTrac, a Boise, ID-based security service that aims to provide a unified workflow automation platform for red and blue teams, today announced that it has raised a $10 million Series A funding round led by Noro-Moseley Partners and Madrona Venture Group. StageDot0 ventures also participated in this round, which the company plans to use to build out its team and grow its platform.

With this new round, the company, which was founded in 2018, has now raised a total of $11 million, with StageDot0 leading its 2019 seed round.

PlexTrac CEO and President Dan DeCloss

“I have been on both sides of the fence, the specialist who comes in and does the assessment, produces that 300-page report and then comes back a year later to find that some of the critical issues had not been addressed at all. And not because the organization didn’t want to but because it was lost in that report,” PlexTrac CEO and President Dan DeCloss said. “These are some of the most critical findings for an entity from a risk perspective. By making it collaborative, both red and blue teams are united on the same goal we all share, to protect the network and assets.”

With an extensive career in security that included time as a penetration tester for Veracode and the Mayo Clinic, as well as senior information security advisor for Anthem, among other roles, DeCloss has quite a bit of first-hand experience that led him to found PlexTrac. Specifically, he believes that it’s important to break down the wall between offense-focused red teams and defense-centric blue teams.

“Historically there has been more of the cloak-and-dagger relationship, but those walls are breaking down — and rightfully so; there isn’t that much of that mentality today — people recognize they are on the same mission, whether they are an internal security team or an external team,” he said. “With the PlexTrac platform the red and blue teams have a better view into the other teams’ tactics and techniques — and it makes the whole process into an educational exercise for everyone.”

At its core, PlexTrac makes it easier for security teams to produce their reports — and hence frees them up to actually focus on ‘real’ security work. To do so, the service integrates with most of the popular scanners like Qualys and Veracode, but also tools like ServiceNow and Jira, in order to help teams coordinate their workflows. All the data flows into real-time reports that then help teams monitor their security posture. The service also features a dedicated tool, WriteupsDB, for managing reusable write-ups to help teams deliver consistent reports for a variety of audiences.
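As a rough illustration of what that aggregation involves (not PlexTrac’s schema; the field names are made up), normalizing findings exported from two different scanners into one sortable report might look like this:

```python
# Hedged sketch (not PlexTrac's schema): normalize findings exported from
# different scanners into one report structure so red and blue teams look at
# the same list. Field names below are hypothetical.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def normalize(source: str, raw: dict) -> dict:
    if source == "qualys":
        return {"title": raw["TITLE"], "severity": raw["SEVERITY"].lower(),
                "asset": raw["IP"]}
    if source == "veracode":
        return {"title": raw["issue"], "severity": raw["sev"].lower(),
                "asset": raw["app"]}
    raise ValueError(f"unknown scanner: {source}")

def build_report(findings):
    rows = [normalize(src, raw) for src, raw in findings]
    return sorted(rows, key=lambda r: SEVERITY_ORDER.get(r["severity"], 99))

report = build_report([
    ("qualys",   {"TITLE": "OpenSSL out of date", "SEVERITY": "High", "IP": "10.0.0.5"}),
    ("veracode", {"issue": "SQL injection", "sev": "Critical", "app": "billing-api"}),
])
for row in report:
    print(row)
```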

“Current tools for planning, executing, and reporting on security testing workflows are either nonexistent (manual reporting, spreadsheets, documents, etc…) or exist as largely incomplete features of legacy platforms,” Madrona’s S. Somasegar and Chris Picardo write in today’s announcement. “The pain point for security teams is real and PlexTrac is able to streamline their workflows, save time, and greatly improve output quality. These teams are on the leading edge of attempting to find and exploit vulnerabilities (red teams) and defend and/or eliminate threats (blue teams).”

 

Upstack raises $50M for its platform and advisory to help businesses plan and buy for digital transformation

By Ingrid Lunden

Digital transformation has been one of the biggest catchphrases of the past year, with many an organization forced to reckon with aging IT, a lack of digital strategy, or simply the challenges of growth after being faced with newly-remote workforces, customers doing everything online and other tech demands.

Now, a startup called Upstack that has built a platform to help those businesses evaluate how to grapple with those next steps — including planning and costing out different options and scenarios, and then ultimately buying solutions — is announcing financing to do some growth of its own.

The New York startup has picked up funding of $50 million, money that it will be using to continue building out its platform and expanding its services business.

The funding is coming from Berkshire Partners, and it’s being described as an “initial investment”. The firm, which makes private equity and late-stage growth investments, typically puts between $100 million and $1 billion in its portfolio companies so this could end up as a bigger number, especially when you consider the size of the market that Upstack is tackling: the cloud and internet infrastructure brokerage industry generates annual revenues “in excess of $70 billion,” the company estimates.

We’re asking about the valuation, but PitchBook notes that the median valuation in its deals is around $211 million. Upstack had previously raised around $35 million.

Upstack today already provides tools to large enterprises, government organizations and smaller businesses to compare offerings and plan out pricing for different scenarios covering a range of IT areas, including private, public and hybrid cloud deployments, data center investments, network connectivity, business continuity and mobile services. The plan is to bring more categories into the mix, including unified communications and security.

Notably, Upstack itself is profitable and names a lot of customers that are themselves tech companies — they include Cisco, Accenture, cloud storage company Backblaze, Riverbed and Lumen — a mark of how digital transformation, and planning for it, is not necessarily a core competency even of digital businesses, let alone of those that are not technology companies. It says it has helped complete over 3,700 IT projects across 1,000 engagements to date.

“Upstack was founded to bring enterprise-grade advisory services to businesses of all sizes,” said Christopher Trapp, founder and CEO, in a statement. “Berkshire’s expertise in the data center, connectivity and managed services sectors aligns well with our commitment to enabling and empowering a world-class ecosystem of technology solutions advisors with a platform that delivers higher value to their customers.”

The core of Upstack’s proposition is a platform that system integrators, or advisors, plus end users themselves, can use to design and compare pricing for different services and solutions. This is an unsung but critical aspect of the ecosystem: We love to hear and write about all the interesting enterprise technology that is being developed, but the truth of the matter is that buying and using that tech is never just a simple click on a “buy” button.

Even for smaller organizations, buying tech can be a hugely time-consuming task. It involves evaluating different companies and what they have to offer — which can differ widely in the same category, and gets more complex when you start to compare different technological approaches to the same problem.

It also includes the task of designing solutions to fit one’s particular network. And finally, there are the calculations that need to be made to determine the real cost of services once implemented in an organization. The platform also gives users the ability to present their work, which forms a critical part of the evaluation and decision-making process. When you think about all of this, it’s no wonder that so many organizations have opted to follow the “if it ain’t broke, don’t fix it” school of digital strategy.

As technology has evolved, the concept of digital transformation itself has become more complicated, making tools like Upstack’s more in demand both by companies and the people they hire to do this work for them. Upstack also employs a group of about 15 advisors — consultants — who also provide insight and guidance in the procurement process, and it seems some of the funding will also be used to invest in expanding that team.

(Incidentally, the model of balancing technology with human experts is one used by other enterprise startups that are built around the premise of helping businesses procure technology: BlueVoyant, a security startup that has built a platform to help businesses manage and use different security services, also retains advisors who are experts in that field.)

The advisors are part of the business model: Upstack’s customers can either pay Upstack a consulting fee to work with its advisors, or Upstack receives a commission from suppliers that a company ends up using, having evaluated and selected them via the Upstack platform.

The company competes with traditional systems integrators and consultants, but it seems that the fact that it has built a tech platform that some of its competitors also use is one reason why it’s caught the eye of investors, and also seen strong growth.

Indeed, when you consider the breadth of services that a company might use within their infrastructure — whether it’s software to run sales or marketing, or AI to run a recommendation for products on a site, or business intelligence or RPA — it will be interesting to see how and if Upstack considers deeper moves into these areas.

“Upstack has quickly become a leader in a large, rapidly growing and highly fragmented market,” said Josh Johnson, principal at Berkshire Partners, in a statement. “Our experience has reinforced the importance of the agent channel to enterprises designing and procuring digital infrastructure. Upstack’s platform accelerates this digital transformation by helping its advisors better serve their enterprise customers. We look forward to supporting Upstack’s continued growth through M&A and further investment in the platform.”

China’s Xpeng in the race to automate EVs with lidar

By Rita Liao

Elon Musk famously said any company relying on lidar is “doomed.” Tesla instead believes automated driving functions are built on visual recognition and is even working to remove the radar. China’s Xpeng begs to differ.

Founded in 2014, Xpeng is one of China’s most celebrated electric vehicle startups and went public when it was just six years old. Like Tesla, Xpeng sees automation as an integral part of its strategy; unlike the American giant, Xpeng uses a combination of radar, cameras, high-precision maps powered by Alibaba, localization systems developed in-house, and most recently, lidar to detect and predict road conditions.

“Lidar will provide the 3D drivable space and precise depth estimation to small moving obstacles even like kids and pets, and obviously, other pedestrians and the motorbikes which are a nightmare for anybody who’s working on driving,” Xinzhou Wu, who oversees Xpeng’s autonomous driving R&D center, said in an interview with TechCrunch.

“On top of that, we have the usual radar which gives you location and speed. Then you have the camera which has very rich, basic semantic information.”

Xpeng is adding lidar to its mass-produced EV model P5, which will begin delivering in the second half of this year. The car, a family sedan, will later be able to drive from point A to B based on a navigation route set by the driver on highways and certain urban roads in China that are covered by Alibaba’s maps. An older model without lidar already enables assisted driving on highways.

The system, called Navigation Guided Pilot, is benchmarked against Tesla’s Navigate on Autopilot, said Wu. It can, for example, automatically change lanes, enter or exit ramps, overtake other vehicles, and handle another car’s sudden cut-in, a common sight in China’s complex road conditions.

“The city is super hard compared to the highway but with lidar and precise perception capability, we will have essentially three layers of redundancy for sensing,” said Wu.
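A toy sketch of that three-layer redundancy idea, offered purely as an illustration of the concept rather than Xpeng’s actual stack: an obstacle is only trusted when at least two of the three sensing modalities agree.

```python
# Toy sketch of the three-layer redundancy described above (not Xpeng's stack):
# an obstacle is only acted on when at least two sensing modalities corroborate it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str                        # camera semantics, e.g. "pedestrian"
    lidar_depth_m: Optional[float]    # precise range from lidar, if seen
    radar_speed_mps: Optional[float]  # relative speed from radar, if seen
    camera_conf: float                # classifier confidence, 0..1

def confirmed(d: Detection) -> bool:
    votes = sum([
        d.lidar_depth_m is not None,
        d.radar_speed_mps is not None,
        d.camera_conf >= 0.5,
    ])
    return votes >= 2                 # require agreement from two of three layers

print(confirmed(Detection("pedestrian", 12.4, None, 0.83)))  # True
print(confirmed(Detection("unknown", None, None, 0.31)))     # False
```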

By definition, NGP is an advanced driver-assistance system (ADAS) as drivers still need to keep their hands on the wheel and take control at any time (Chinese laws don’t allow drivers to be hands-off on the road). The carmaker’s ambition is to remove the driver, that is, reach Level 4 autonomy two to four years from now, but real-life implementation will hinge on regulations, said Wu.

“But I’m not worried about that too much. I understand the Chinese government is actually the most flexible in terms of technology regulation.”

The lidar camp

Musk’s disdain for lidar stems from the high costs of the remote sensing method that uses lasers. In the early days, a lidar unit spinning on top of a robotaxi could cost as much as $100,000, said Wu.

“Right now, [the cost] is at least two orders low,” said Wu. After 13 years with Qualcomm in the U.S., Wu joined Xpeng in late 2018 to work on automating the company’s electric cars. He currently leads a core autonomous driving R&D team of 500 staff and said the force will double in headcount by the end of this year.

“Our next vehicle is targeting the economy class. I would say it’s mid-range in terms of price,” he said, referring to the firm’s new lidar-powered sedan.

The lidar sensors powering Xpeng come from Livox, a firm touting more affordable lidar and an affiliate of DJI, the Shenzhen-based drone giant. Xpeng’s headquarters is in the adjacent city of Guangzhou, about a 1.5-hour drive away.

Xpeng isn’t the only one embracing lidar. Nio, a Chinese rival to Xpeng targeting a more premium market, unveiled a lidar-powered car in January but the model won’t start production until 2022. Arcfox, a new EV brand of Chinese state-owned carmaker BAIC, recently said it would be launching an electric car equipped with Huawei’s lidar.

Musk recently hinted that Tesla may remove radar from production outright as it inches closer to pure vision based on camera and machine learning. The billionaire founder isn’t particularly a fan of Xpeng, which he alleged owned a copy of Tesla’s old source code.

In 2019, Tesla filed a lawsuit against Cao Guangzhi alleging that the former Tesla engineer stole trade secrets and brought them to Xpeng. Xpeng has repeatedly denied any wrongdoing. Cao no longer works at Xpeng.

Supply challenges

While Livox claims to be an independent entity “incubated” by DJI, a source previously told TechCrunch that it is just a “team within DJI” positioned as a separate company. The intention to distance itself from DJI comes as no surprise, as the drone maker is on the U.S. government’s Entity List, which has cut a multitude of Chinese tech firms, including Huawei, off from key suppliers.

Other critical parts that Xpeng uses include NVIDIA’s Xavier system-on-chip computing platform and Bosch’s iBooster brake system. Globally, the ongoing semiconductor shortage is pushing auto executives to ponder future scenarios in which self-driving cars become even more dependent on chips.

Xpeng is well aware of supply chain risks. “Basically, safety is very important,” said Wu. “It’s more than the tension between countries around the world right now. Covid-19 is also creating a lot of issues for some of the suppliers, so having redundancy in the suppliers is some strategy we are looking very closely at.”

Taking on robotaxis

Xpeng could have easily tapped the flurry of autonomous driving solution providers in China, including Pony.ai and WeRide in its backyard of Guangzhou. Instead, Xpeng became their competitor, working on automation in-house and pledging to outrival the artificial intelligence startups.

“The availability of massive computing for cars at affordable costs and the fast dropping price of lidar is making the two camps really the same,” Wu said of the dynamics between EV makers and robotaxi startups.

“[The robotaxi companies] have to work very hard to find a path to a mass-production vehicle. If they don’t do that, two years from now, they will find the technology is already available in mass production and their value will become much less than today’s,” he added.

“We know how to mass-produce a technology up to the safety requirement and the quarantine required of the auto industry. This is a super high bar for anybody wanting to survive.”

Xpeng has no plans of going visual-only. Options of automotive technologies like lidar are becoming cheaper and more abundant, so “why do we have to bind our hands right now and say camera only?” Wu asked.

“We have a lot of respect for Elon and his company. We wish them all the best. But we will, as Xiaopeng [founder of Xpeng] said in one of his famous speeches, compete in China and hopefully in the rest of the world as well with different technologies.”

5G, coupled with cloud computing and cabin intelligence, will accelerate Xpeng’s path to achieve full automation, though Wu couldn’t share much detail on how 5G is used. When unmanned driving is viable, Xpeng will explore “a lot of exciting features” that go into a car when the driver’s hands are freed. Xpeng’s electric SUV is already available in Norway, and the company is looking to further expand globally.

Risk startup LogicGate confirms data breach

By Zack Whittaker

Risk and compliance startup LogicGate has confirmed a data breach. But unless you’re a customer, you probably didn’t hear about it.

An email sent by LogicGate to customers earlier this month said that on February 23 an unauthorized third party obtained credentials to its Amazon Web Services-hosted cloud storage servers, which store customer backup files for its flagship platform Risk Cloud. Risk Cloud helps companies identify and manage their risk and compliance with data protection and security standards; LogicGate says it can also help find security vulnerabilities before they are exploited by malicious hackers.

The credentials “appear to have been used by an unauthorized third party to decrypt particular files stored in AWS S3 buckets in the LogicGate Risk Cloud backup environment,” the email read.

“Only data uploaded to your Risk Cloud environment on or prior to February 23, 2021, would have been included in that backup file. Further, to the extent you have stored attachments in the Risk Cloud, we did not identify decrypt events associated with such attachments,” it added.
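LogicGate hasn’t said how it identified those decrypt events. One common way to investigate this kind of incident, sketched here as an assumption rather than a description of LogicGate’s process, is to search CloudTrail for KMS Decrypt calls made with the suspect access key; the key ID and time window below are hypothetical.

```python
# Sketch of one way (assumed, not confirmed by LogicGate) to investigate this
# kind of incident: search CloudTrail for KMS Decrypt calls made with a
# suspect access key. The access key ID and time window are hypothetical.
import json
from datetime import datetime, timezone
import boto3

def suspect_decrypts(access_key_id, start, end):
    ct = boto3.client("cloudtrail")
    paginator = ct.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[{"AttributeKey": "AccessKeyId",
                           "AttributeValue": access_key_id}],
        StartTime=start, EndTime=end,
    )
    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            if detail.get("eventName") == "Decrypt":
                yield event["EventTime"], detail.get("sourceIPAddress")

for ts, ip in suspect_decrypts("AKIAEXAMPLEKEY123456",
                               datetime(2021, 2, 20, tzinfo=timezone.utc),
                               datetime(2021, 2, 24, tzinfo=timezone.utc)):
    print(ts, ip)
```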

LogicGate did not say how the AWS credentials were compromised. An email update sent by LogicGate last Friday said the company anticipates finding the root cause of the incident by this week.

But LogicGate has not made any public statement about the breach. It’s also not clear if LogicGate contacted all of its customers or only those whose data was accessed. LogicGate counts Capco, SoFi, and Blue Cross Blue Shield of Kansas City as customers.

We sent LogicGate a list of questions, including how many customers were affected and whether the company has alerted U.S. state authorities as required by state data breach notification laws. When reached, LogicGate chief executive Matt Kunkel confirmed the breach but declined to comment, citing an ongoing investigation. “We believe it’s best to communicate developments directly to our customers,” he said.

Kunkel would not say, when asked, if the attacker also exfiltrated the decrypted customer data from its servers.

Data breach notification laws vary by state, but companies that fail to report security incidents can face heavy fines. Under Europe’s GDPR rules, companies can face fines of up to 4% of their annual turnover for violations.

In December, LogicGate secured $8.75 million in fresh funding, totaling more than $40 million since it launched in 2015.


Are you a LogicGate customer? Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more

Meroxa raises $15M Series A for its real-time data platform

By Frederic Lardinois

Meroxa, a startup that makes it easier for businesses to build the data pipelines to power both their analytics and operational workflows, today announced that it has raised a $15 million Series A funding round led by Drive Capital. Existing investors Root, Amplify and Hustle Fund also participated in this round, which together with the company’s previously undisclosed $4.2 million seed round now brings total funding in the company to $19.2 million.

The promise of Meroxa is that businesses can use a single platform for their various data needs and won’t need a team of experts to build their infrastructure and then manage it. At its core, Meroxa provides a single Software-as-a-Service solution that connects relational databases to data warehouses and then helps businesses operationalize that data.

“The interesting thing is that we are focusing squarely on relational and NoSQL databases into data warehouse,” Meroxa co-founder and CEO DeVaris Brown told me. “Honestly, people come to us as a real-time FiveTran or real-time data warehouse sink. Because, you know, the industry has moved to this [extract, load, transform] format. But the beautiful part about us is, because we do change data capture, we get that granular data as it happens.” And businesses want this very granular data to be reflected inside of their data warehouses, Brown noted, but he also stressed that Meroxa can expose this stream of data as an API endpoint or point it to a Webhook.

The company is able to do this because its core architecture is somewhat different from other data pipeline and integration services that, at first glance, seem to offer a similar solution. Because of this, users can use the service to connect different tools to their data warehouse but also build real-time tools on top of these data streams.

“We aren’t a point-to-point solution,” Meroxa co-founder and CTO Ali Hamidi explained. “When you set up the connection, you aren’t taking data from Postgres and only putting it into Snowflake. What’s really happening is that it’s going into our intermediate stream. Once it’s in that stream, you can then start hanging off connectors and say, ‘Okay, well, I also want to peek into the stream, I want to transfer my data, I want to filter out some things, I want to put it into S3.”
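That “intermediate stream with connectors hanging off it” pattern is easy to picture in miniature. The sketch below is not Meroxa’s API, and every name in it is hypothetical; it simply shows change events landing in one stream and fanning out to whatever sinks are attached.

```python
# Miniature of the pattern described above (not Meroxa's API; every name here
# is hypothetical): change events land in one intermediate stream, and any
# number of connectors subscribe to it.
def warehouse_sink(event):
    # stand-in for an upsert into Snowflake or another warehouse
    print("UPSERT", event["table"], event["row"])

def s3_sink(event):
    # stand-in for appending the raw event to object storage
    print("append to s3://example-bucket/raw/", event)

connectors = [warehouse_sink]
connectors.append(s3_sink)            # "hang another connector off" the stream

def intermediate_stream(cdc_events):
    for event in cdc_events:          # e.g. rows captured from the Postgres WAL
        for connector in connectors:
            connector(event)          # each sink sees the same granular change

intermediate_stream([
    {"table": "orders", "op": "insert", "row": {"id": 1, "total": 42.0}},
])
```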

With this flexibility, Hamidi noted, a lot of the company’s customers start with a pretty standard use case and then quickly expand into other areas as well.

Brown and Hamidi met during their time at Heroku, where Brown was a director of product management and Hamidi a lead software engineer. But while Heroku made it very easy for developers to publish their web apps, there wasn’t anything comparable in the highly fragmented database space. The team acknowledges that there are a lot of tools that aim to solve these data problems, but few of them focus on the user experience.

Image Credits: Meroxa

“When we talk to customers now, it’s still very much an unsolved problem,” Hamidi said. “It seems kind of insane to me that this is such a common thing and there is no ‘oh, of course you use this tool because it addresses all my problems.’ And so the angle that we’re taking is that we see user experience not as a nice-to-have, it’s really an enabler, it is something that enables a software engineer or someone who isn’t a data engineer with 10 years of experience in wrangling Kafka and Postgres and all these things. […] That’s a transformative kind of change.”

It’s worth noting that Meroxa uses a lot of open-source tools, and the company has also committed to open-sourcing everything in its data plane. “This has multiple wins for us, but one of the biggest incentives is in terms of the customer, we’re really committed to having our agenda aligned. Because if we don’t do well, we don’t serve the customer. If we do a crappy job, they can just keep all of those components and run it themselves,” Hamidi explained.

Today, Meroxa, which the team founded in early 2020, has over 24 employees (and is 100% remote). “I really think we’re building one of the most talented and most inclusive teams possible,” Brown told me. “Inclusion and diversity are very, very high on our radar. Our team is 50% black and brown. Over 40% are women. Our management team is 90% underrepresented. So not only are we building a great product, we’re building a great company, we’re building a great business.”  

Zoho launches new low code workflow automation product

By Ron Miller

Workflow automation has been one of the key trends this year so far, and Zoho, a company known for its suite of affordable business tools, has joined the parade with a new low code workflow product called Qntrl (pronounced control).

Zoho’s Rodrigo Vaca, who is in charge of Qntrl’s marketing, says that most of the solutions we’ve been seeing are built for larger enterprise customers. Zoho is aiming for the mid-market with a product that requires less technical expertise than traditional business process management tools.

“We enable customers to design their workflows visually without the need for any particular kind of prior knowledge of business process management notation or any kind of that esoteric modeling or discipline,” Vaca told me.

While Vaca says Qntrl could require some technical help to connect a workflow to more complex backend systems like CRM or ERP, it allows a less technical end user to drag and drop the components and then get help to finish the rest.

“We certainly expect that when you need to connect to NetSuite or SAP you’re going to need a developer. If nothing else, the IT guys are going to ask questions, and they will need to provide access,” Vaca said.

He believes this product puts this kind of tooling within reach of companies that have largely been left out of workflow automation, or that have been using spreadsheets or other tools to create crude workflows. With Qntrl, you drag and drop components, then select each component and configure what happens before, during and after each step.
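Purely as an illustration of that before/during/after idea (Qntrl itself is configured visually, so this is not its actual format), a workflow could be described as a list of steps with hooks around each one:

# Hypothetical illustration of configuring what happens before, during and
# after each workflow step; not Qntrl's real configuration format.
workflow = [
    {
        "step": "Review purchase request",
        "before": "notify the approver by email",
        "during": "collect the approval decision",
        "after": "forward approved requests to finance",
    },
    {
        "step": "Create purchase order",
        "before": "validate the budget code",
        "during": "generate the PO in the ERP system",
        "after": "archive the documents",
    },
]

for step in workflow:
    print(f"{step['step']}: after -> {step['after']}")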

What’s more, Qntrl provides a central place for processing and understanding what’s happening within each workflow at any given time, and who is responsible for completing it.

We’ve seen bigger companies like Microsoft, SAP, ServiceNow and others offering this type of functionality over the last year as low code workflow automation has taken center stage in business.

This has become a more pronounced need during the pandemic when so many workers could not be in the office. It made moving work in a more automated workflow more imperative, and we have seen companies moving to add more of this kind of functionality as a result.

Brent Leary, principal analyst at CRM Essentials, says that Zoho is attempting to remove some of the complexity from this kind of tool.

“It handles the security pieces to make sure the right people have access to the data and processes used in the workflows in the background, so regular users can drag and drop to build their flows and processes without having to worry about that stuff,” Leary told me.

Zoho Qntrl is available today, starting at just $7 per user per month.

Microsoft goes all in on healthcare with $19.7B Nuance acquisition

By Ron Miller

When Microsoft announced it was acquiring Nuance Communications this morning for $19.7 billion, you could be excused for doing a Monday morning double take at the hefty price tag.

That’s surely a lot of money for a company on a $1.4 billion run rate, but Microsoft, which has already partnered with the speech-to-text market leader on several products over the last couple of years, saw a company firmly embedded in healthcare and it decided to go all in.

And $20 billion is certainly all in, even for a company the size of Microsoft. But 2020 forced us to change the way we do business, from restaurants to retailers to doctors. In fact, the pandemic in particular changed the way we interact with our medical providers. We learned very quickly that you don’t have to drive to an office, wait in a waiting room and then in an exam room, all to see the doctor for a few minutes.

Instead, we can get on the line, have a quick chat and be on our way. It won’t work for every condition of course — there will always be times the physician needs to see you — but for many appointments, such as reviewing test results or talk therapy, telehealth could suffice.

Microsoft CEO Satya Nadella says that Nuance is at the center of this shift, especially with its use of cloud and artificial intelligence, and that’s why the company was willing to pay the amount it did to get it.

“AI is technology’s most important priority, and healthcare is its most urgent application. Together, with our partner ecosystem, we will put advanced AI solutions into the hands of professionals everywhere to drive better decision-making and create more meaningful connections, as we accelerate growth of Microsoft Cloud in Healthcare and Nuance,” Nadella said in a post announcing the deal.

Microsoft sees this deal doubling what was already a considerable total addressable market to nearly $500 billion. While TAMs always tend to run high, that is still a substantial number.

It also fits with Gartner data, which found that by 2022, 75% of healthcare organizations will have a formal cloud strategy in place. The AI component only adds to that number and Nuance brings 10,000 existing customers to Microsoft including some of the biggest healthcare organizations in the world.

Brent Leary, founder and principal analyst at CRM Essentials, says the deal could provide Microsoft with a ton of health data to help feed the underlying machine learning models and make them more accurate over time.

“There is going to be a ton of health data being captured by the interactions coming through telemedicine interactions, and this could create a whole new level of health intelligence,” Leary told me.

That of course could drive a lot of privacy concerns where health data is involved, and it will be up to Microsoft, which just experienced a major breach on its Exchange email server products last month, to assure the public that their sensitive health data is being protected.

Leary says that ensuring data privacy is going to be absolutely key to the success of the deal. “The potential this move has is pretty powerful, but it will only be realized if the data and insights that could come from it are protected and secure — not only protected from hackers but also from unethical use. Either could derail what could be a game changing move,” he said.

Microsoft also seemed to recognize that when it wrote, “Nuance and Microsoft will deepen their existing commitments to the extended partner ecosystem, as well as the highest standards of data privacy, security and compliance.”

We are clearly on the edge of a sea change when it comes to how we interact with our medical providers in the future. COVID pushed medicine deeper into the digital realm in 2020 out of simple necessity. It wasn’t safe to go into the office unless absolutely necessary.

The Nuance acquisition, which is expected to close sometime later this year, could help Microsoft shift deeper into the market. It could even bring Teams into it as a meeting tool, but it’s all going to depend on the trust level people have with this approach, and it will be up to the company to make sure that both healthcare providers and the people they serve have that trust.

NLPCloud.io helps devs add language processing smarts to their apps

By Natasha Lomas

While visual ‘no code‘ tools are helping businesses get more out of computing without the need for armies of in-house techies to configure software on behalf of other staff, access to the most powerful tech tools — at the ‘deep tech’ AI coal face — still requires some expert help (and/or costly in-house expertise).

This is where bootstrapping French startup, NLPCloud.io, is plying a trade in MLOps/AIOps — or ‘compute platform as a service’ (being as it runs the queries on its own servers) — with a focus on natural language processing (NLP), as its name suggests.

Developments in artificial intelligence have, in recent years, led to impressive advances in the field of NLP — a technology that can help businesses scale their capacity to intelligently grapple with all sorts of communications by automating tasks like named entity recognition, sentiment analysis, text classification, summarization, question answering and part-of-speech tagging, freeing up (human) staff to focus on more complex/nuanced work. (Although it’s worth emphasizing that the bulk of NLP research has focused on the English language — meaning that’s where this tech is most mature; so associated AI advances are not universally distributed.)

Production ready (pre-trained) NLP models for English are readily available ‘out of the box’. There are also dedicated open source frameworks offering help with training models. But businesses wanting to tap into NLP still need to have the DevOps resource and chops to implement NLP models.
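To give a sense of what “out of the box” means in practice, here is a minimal example using spaCy’s pre-trained small English model; it assumes the model has already been installed locally:

# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("NLPCloud.io is a startup based in Grenoble founded by Julien Salinas.")

# Named entity recognition works without any training of your own.
for ent in doc.ents:
    print(ent.text, ent.label_)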

NLPCloud.io is catering to businesses that don’t feel up to the implementation challenge themselves — offering a “production-ready NLP API” with the promise of “no DevOps required”.

Its API is based on Hugging Face and spaCy open-source models. Customers can either choose to use ready-to-use pre-trained models (it selects the “best” open source models; it does not build its own); or they can upload custom models developed internally by their own data scientists — which it says is a point of differentiation vs SaaS services such as Google Natural Language (which uses Google’s ML models) or Amazon Comprehend and Monkey Learn.
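For a developer, consuming such a hosted NLP API typically looks something like the sketch below; the endpoint, token and payload are placeholders chosen for illustration, not NLPCloud.io’s documented interface:

# Hypothetical call to a hosted NLP API; the URL, auth scheme and response
# shape are assumptions for illustration, not NLPCloud.io's documented API.
import requests

API_TOKEN = "your-api-token-here"
url = "https://api.example-nlp-host.io/v1/en/entities"  # placeholder endpoint

response = requests.post(
    url,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"text": "Send the contract to Acme Corp in Paris by Friday."},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. a list of extracted entities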

NLPCloud.io says it wants to democratize NLP by helping developers and data scientists deliver these projects “in no time and at a fair price”. (It has a tiered pricing model based on requests per minute, which starts at $39pm and ranges up to $1,199pm, at the enterprise end, for one custom model running on a GPU. It does also offer a free tier so users can test models at low request velocity without incurring a charge.)

“The idea came from the fact that, as a software engineer, I saw many AI projects fail because of the deployment to production phase,” says sole founder and CTO Julien Salinas. “Companies often focus on building accurate and fast AI models but today more and more excellent open-source models are available and are doing an excellent job… so the toughest challenge now is being able to efficiently use these models in production. It takes AI skills, DevOps skills, programming skill… which is why it’s a challenge for so many companies, and which is why I decided to launch NLPCloud.io.”

The platform launched in January 2021 and now has around 500 users, including 30 who are paying for the service. The startup, which is based in Grenoble, in the French Alps, is a team of three for now, plus a couple of independent contractors. (Salinas says he plans to hire five people by the end of the year.)

“Most of our users are tech startups but we also start having a couple of bigger companies,” he tells TechCrunch. “The biggest demand I’m seeing is both from software engineers and data scientists. Sometimes it’s from teams who have data science skills but don’t have DevOps skills (or don’t want to spend time on this). Sometimes it’s from tech teams who want to leverage NLP out-of-the-box without hiring a whole data science team.”

“We have very diverse customers, from solo startup founders to bigger companies like BBVA, Mintel, Senuto… in all sorts of sectors (banking, public relations, market research),” he adds.

Customer use cases include lead generation from unstructured text (such as web pages) via named entity extraction, and sorting support tickets by urgency using sentiment analysis.

Content marketers are also using its platform for headline generation (via summarization). While text classification capabilities are being used for economic intelligence and financial data extraction, per Salinas.

He says his own experience as a CTO and software engineer working on NLP projects at a number of tech companies led him to spot an opportunity in the challenge of AI implementation.

“I realized that it was quite easy to build acceptable NLP models thanks to great open-source frameworks like spaCy and Hugging Face Transformers but then I found it quite hard to use these models in production,” he explains. “It takes programming skills in order to develop an API, strong DevOps skills in order to build a robust and fast infrastructure to serve NLP models (AI models in general consume a lot of resources), and also data science skills of course.

“I tried to look for ready-to-use cloud solutions in order to save weeks of work but I couldn’t find anything satisfactory. My intuition was that such a platform would help tech teams save a lot of time, sometimes months of work for the teams who don’t have strong DevOps profiles.”

“NLP has been around for decades but until recently it took whole teams of data scientists to build acceptable NLP models. For a couple of years, we’ve made amazing progress in terms of accuracy and speed of the NLP models. More and more experts who have been working in the NLP field for decades agree that NLP is becoming a ‘commodity’,” he goes on. “Frameworks like spaCy make it extremely simple for developers to leverage NLP models without having advanced data science knowledge. And Hugging Face’s open-source repository for NLP models is also a great step in this direction.

“But having these models run in production is still hard, and maybe even harder than before as these brand new models are very demanding in terms of resources.”

The models NLPCloud.io offers are picked for performance — where “best” means it has “the best compromise between accuracy and speed”. Salinas also says they are paying mind to context, given that NLP can be used for diverse use cases — hence proposing a number of models so as to be able to adapt to a given use.

“Initially we started with models dedicated to entities extraction only but most of our first customers also asked for other use cases too, so we started adding other models,” he notes, adding that they will continue to add more models from the two chosen frameworks — “in order to cover more use cases, and more languages”.

SpaCy and Hugging Face, meanwhile, were chosen to be the source for the models offered via its API based on their track record as companies, the NLP libraries they offer and their focus on production-ready frameworks — with the combination allowing NLPCloud.io to offer a selection of models that are fast and accurate, working within the bounds of respective trade-offs, according to Salinas.

“SpaCy is developed by a solid company in Germany called Explosion.ai. This library has become one of the most used NLP libraries among companies who want to leverage NLP in production ‘for real’ (as opposed to academic research only). The reason is that it is very fast, has great accuracy in most scenarios, and is an opinionated framework, which makes it very simple to use by non-data scientists (the tradeoff is that it gives less customization possibilities),” he says.

“Hugging Face is an even more solid company that recently raised $40M for a good reason: They created a disruptive NLP library called ‘transformers’ that improves a lot the accuracy of NLP models (the tradeoff is that it is very resource intensive though). It gives the opportunity to cover more use cases like sentiment analysis, classification, summarization… In addition to that, they created an open-source repository where it is easy to select the best model you need for your use case.”
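For context, the ‘transformers’ library Salinas refers to exposes ready-made pipelines for several of these tasks; a small example (the default model is downloaded on first use, which can take a while):

# Example of the Hugging Face transformers pipeline API.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
print(sentiment("The support team resolved my issue in minutes!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# Other one-line pipelines cover tasks he mentions, e.g.
# pipeline("summarization") or pipeline("zero-shot-classification").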

While AI is advancing at a clip within certain tracks — such as NLP for English — there are still caveats and potential pitfalls attached to automating language processing and analysis, with the risk of getting stuff wrong or worse. AI models trained on human-generated data have, for example, been shown to reflect the embedded biases and prejudices of the people who produced the underlying data.

Salinas agrees NLP can sometimes face “concerning bias issues”, such as racism and misogyny. But he expresses confidence in the models they’ve selected.

“Most of the time it seems [bias in NLP] is due to the underlying data used to train the models. It shows we should be more careful about the origin of this data,” he says. “In my opinion the best solution in order to mitigate this is that the community of NLP users should actively report something inappropriate when using a specific model so that this model can be paused and fixed.”

“Even if we doubt that such a bias exists in the models we’re proposing, we do encourage our users to report such problems to us so we can take measures,” he adds.

 

Immersion cooling to offset data centers’ massive power demands gains a big booster in Microsoft

By Jonathan Shieber

LiquidStack does it. So does Submer. They’re both dropping servers carrying sensitive data into goop in an effort to save the planet. Now they’re joined by one of the biggest tech companies in the world in their efforts to improve the energy efficiency of data centers, because Microsoft is getting into the liquid-immersion cooling market.

Microsoft is using a liquid it developed in-house that’s engineered to boil at 122 degrees Fahrenheit (lower than the boiling point of water) to act as a heat sink, reducing the temperature inside the servers so they can operate at full power without any risks from overheating.

The vapor from the boiling fluid is converted back into a liquid through contact with a cooled condenser in the lid of the tank that stores the servers.
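As a quick sanity check on those figures, converting the fluid’s 122-degree boiling point to Celsius shows how far below water’s boiling point it sits:

# 122°F for the engineered fluid vs. 212°F for water at sea level.
def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5 / 9

print(fahrenheit_to_celsius(122))  # 50.0 °C
print(fahrenheit_to_celsius(212))  # 100.0 °C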

“We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development in Redmond, Washington, in a statement on the company’s internal blog. 

While that claim may be true, liquid cooling is a well-known approach to moving heat around to keep systems working. Cars use liquid cooling to keep their motors humming as they head out on the highway.

As technology companies confront the physical limits of Moore’s Law, the demand for faster, higher-performance processors means designing new architectures that can handle more power, the company wrote in a blog post. Power flowing through central processing units has increased from 150 watts to more than 300 watts per chip, and the GPUs responsible for much of bitcoin mining, artificial intelligence applications and high-end graphics each consume more than 700 watts per chip.

It’s worth noting that Microsoft isn’t the first tech company to apply liquid cooling to data centers and the distinction that the company uses of being the first “cloud provider” is doing a lot of work. That’s because bitcoin mining operations have been using the tech for years. Indeed, LiquidStack was spun out from a bitcoin miner to commercialize its liquid immersion cooling tech and bring it to the masses.

“Air cooling is not enough”

More power flowing through the processors means hotter chips, which means the need for better cooling or the chips will malfunction.

“Air cooling is not enough,” said Christian Belady, vice president of Microsoft’s datacenter advanced development group in Redmond, in an interview for the company’s internal blog. “That’s what’s driving us to immersion cooling, where we can directly boil off the surfaces of the chip.”

For Belady, the use of liquid cooling technology brings the density and compression of Moore’s Law up to the datacenter level.

The results, from an energy consumption perspective, are impressive. The company found that using two-phase immersion cooling reduced power consumption for a server by anywhere from 5 percent to 15 percent (every little bit helps).

Microsoft first investigated liquid immersion as a cooling solution for high-performance computing applications such as AI; among other things, that investigation is what revealed the 5 to 15 percent power savings noted above.

Meanwhile, companies like Submer claim they reduce energy consumption by 50%, water use by 99%, and take up 85% less space.

For cloud computing companies, the ability to keep these servers up and running even during spikes in demand, when they’d consume even more power, adds flexibility and ensures uptime even when servers are overtaxed, according to Microsoft.

“[We] know that with Teams when you get to 1 o’clock or 2 o’clock, there is a huge spike because people are joining meetings at the same time,” Marcus Fontoura, a vice president on Microsoft’s Azure team, said on the company’s internal blog. “Immersion cooling gives us more flexibility to deal with these burst-y workloads.”

At this point, data centers are a critical component of the internet infrastructure that much of the world relies on for… well… pretty much every tech-enabled service. That reliance however has come at a significant environmental cost.

“Data centers power human advancement. Their role as a core infrastructure has become more apparent than ever and emerging technologies such as AI and IoT will continue to drive computing needs. However, the environmental footprint of the industry is growing at an alarming rate,” Alexander Danielsson, an investment manager at Norrsken VC noted last year when discussing that firm’s investment in Submer.

Solutions under the sea

If submerging servers in experimental liquids offers one potential solution to the problem, then sinking them in the ocean is another way that companies are trying to cool data centers without expending too much power.

Microsoft has already been operating an undersea data center for the past two years. The company trotted out the tech last year as part of a push to aid in the search for a COVID-19 vaccine.

These pre-packed, shipping container-sized data centers can be spun up on demand and run deep under the ocean’s surface for sustainable, high-efficiency and powerful compute operations, the company said.

The liquid cooling project is most similar to Microsoft’s Project Natick, which is exploring the potential of underwater datacenters that are quick to deploy and can operate for years on the seabed, sealed inside submarine-like tubes, without any onsite maintenance by people.

In those data centers, nitrogen air replaces an engineered fluid, and the servers are cooled with fans and a heat exchanger that pumps seawater through a sealed tube.

Startups are also staking claims to cool data centers out on the ocean (the seaweed is always greener in somebody else’s lake).

Nautilus Data Technologies, for instance, has raised over $100 million (according to Crunchbase) to develop data centers dotting the surface of Davy Jones’ locker. The company is currently developing a data center project co-located with a sustainable energy project off the coast of Stockton, Calif.

With the two-phase immersion cooling tech, Microsoft is hoping to bring the benefits of ocean cooling onto the shore. “We brought the sea to the servers rather than put the datacenter under the sea,” Microsoft’s Alissa said in a company statement.

Ioannis Manousakis, a principal software engineer with Azure (left), and Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development (right), walk past a container at a Microsoft datacenter where computer servers in a two-phase immersion cooling tank are processing workloads. Photo by Gene Twedt for Microsoft.
