Team management software company Monday.com dropped a new IPO filing today. The latest document — an F-1/A, because the company is based in Israel — provides what could be Monday.com’s final pre-IPO pricing notes and details planned investments from both Zoom and Salesforce after its public offering closes.
Monday.com’s price range of $125 to $140 per share values it north of $6 billion at the top end of its target interval, a steep upgrade from its final private price recorded in mid-2019.
Let’s quickly unpack its IPO valuation range, discuss the private placements that Zoom and Salesforce plan, and parse what Monday.com’s IPO news means for the broader public offering window.
Because the company is expected to price tomorrow and trade Thursday, we’re looking at data that could prove final, unless Monday.com manages to push its IPO price range higher or prices above its current estimates. Given the sheer number of IPOs that are either filed or rapidly forthcoming, Monday.com could prove to be a bellwether for the larger unicorn software exit market. Therefore, its debut matters to more than itself, its employees and its venture backers.
There are a few ways to value a company as it goes public. The first is its so-called simple valuation. To calculate a simple price for a debuting entity, we multiply the two extremes of its IPO price range by the number of shares it will have outstanding after its debut. That works out as follows in the case of Monday.com:
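The arithmetic can be sketched as follows. Note that the share count used here is hypothetical and purely illustrative; the real figure comes from the company's F-1/A filing.

```python
# Simple IPO valuation: multiply each end of the price range by the
# expected post-IPO share count. The share count below is made up for
# illustration; the real number is in the filing.

def simple_valuation(price_low: int, price_high: int, shares_outstanding: int):
    """Return (low, high) simple valuations in dollars."""
    return (price_low * shares_outstanding, price_high * shares_outstanding)

# Hypothetical 44M shares outstanding, at the $125-$140 range:
low, high = simple_valuation(125, 140, 44_000_000)
print(f"${low / 1e9:.2f}B - ${high / 1e9:.2f}B")
```

With a share count in that neighborhood, the top of the range clears $6 billion, consistent with the valuation described above.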
Last year, Seattle-based network security startup ExtraHop was riding high, quickly approaching $100 million in ARR and even making noises about a possible IPO in 2021. But there will be no IPO, at least for now, as the company announced this morning it has been acquired by a pair of private equity firms for $900 million.
The firms, Bain Capital Private Equity and Crosspoint Capital Partners, are buying a security solution that provides controls across a hybrid environment, something that could be useful as more companies find themselves in a position where they have some assets on-site and some in the cloud.
The company is part of the narrower Network Detection and Response (NDR) market. According to Jesse Rothstein, ExtraHop’s chief technology officer and co-founder, it’s a technology well suited to today’s threat landscape. “I will say that ExtraHop’s north star has always really remained the same, and that has been around extracting intelligence from all of the network traffic in the wire data. This is where I think the network detection and response space is particularly well-suited to protecting against advanced threats,” he told TechCrunch.
The company uses analytics and machine learning to figure out if there are threats and where they are coming from, regardless of how customers are deploying infrastructure. Rothstein said he envisions a world where environments have become more distributed with less defined perimeters and more porous networks.
“So the ability to have this high quality detection and response capability utilizing next generation machine learning technology and behavioral analytics is so very important,” he said.
Max de Groen, managing partner at Bain, says his company was attracted to the NDR space, and saw ExtraHop as a key player. “As we looked at the NDR market, ExtraHop, which […] has spent 14 years building the product, really stood out as the best individual technology in the space,” de Groen told us.
Security remains a frothy market with lots of growth potential. We continue to see a mix of startups and established platform players jockeying for position, and private equity firms often try to establish a package of services. Last week, Symphony Technology Group bought FireEye’s product group for $1.2 billion, just a couple of months after snagging McAfee’s enterprise business for $4 billion as it tries to cobble together a comprehensive enterprise security solution.
Ransomware attacks on the JBS beef plant, and the Colonial Pipeline before it, have sparked a now familiar set of reactions. There are promises of retaliation against the groups responsible, the prospect of company executives being brought in front of Congress in the coming months, and even a proposed executive order on cybersecurity that could take months to fully implement.
But once again, amid this flurry of activity, we must ask or answer a fundamental question about the state of our cybersecurity defense: Why does this keep happening?
I have a theory on why. In software development, there is a concept called “technical debt.” It describes the costs companies pay when they choose to build software the easy (or fast) way instead of the right way, cobbling together temporary solutions to satisfy a short-term need. Over time, as teams struggle to maintain a patchwork of poorly architected applications, tech debt accrues in the form of lost productivity or poor customer experience.
Our nation’s cybersecurity defenses are laboring under the burden of a similar debt. Only the scale is far greater, the stakes are higher and the interest is compounding. The true cost of this “cybersecurity debt” is difficult to quantify. Though we still do not know the exact cause of either attack, we do know beef prices will be significantly impacted and gas prices jumped 8 cents on news of the Colonial Pipeline attack, costing consumers and businesses billions. The damage done to public trust is incalculable.
How did we get here? The public and private sectors are spending more than $4 trillion a year in the digital arms race that is our modern economy. The goal of these investments is speed and innovation. But in pursuit of these ambitions, organizations of all sizes have assembled complex, uncoordinated systems — running thousands of applications across multiple private and public clouds, drawing on data from hundreds of locations and devices.
Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt.
We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken.
First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.
There is another way: Open, hybrid cloud architectures can connect and standardize security across any kind of infrastructure, from private data centers to public clouds, to the edges of the network. This unifies the security workflow and increases the visibility of threats across the entire network (including the third- and fourth-party networks where data flows) and orchestrates the response. It essentially eliminates weak links without having to move data or applications — a design point that should be embraced across the public and private sectors.
The second step is to close the remaining loopholes in the data security supply chain. President Biden’s executive order requires federal agencies to encrypt data that is being stored or transmitted. We have an opportunity to take that a step further and also address data that is in use. As more organizations outsource the storage and processing of their data to cloud providers, expecting real-time data analytics in return, this represents an area of vulnerability.
Many believe this vulnerability is simply the price we pay for outsourcing digital infrastructure to another company. But this is not true. Cloud providers can, and do, protect their customers’ data with the same ferocity as they protect their own. They do not need access to the data they store on their servers. Ever.
Ensuring this requires confidential computing, which encrypts data at rest, in transit and in use. Confidential computing makes it technically impossible for anyone without the encryption key, including your cloud provider, to access the data. At IBM, for example, our customers run workloads in the IBM Cloud with full privacy and control. They are the only ones who hold the key. We could not access their data even if compelled by a court order or ransom request. It is simply not an option.
Paying down the principal on any kind of debt can be daunting, as anyone with a mortgage or student loan can attest. But this is not a low-interest loan. As the JBS and Colonial Pipeline attacks clearly demonstrate, the cost of not addressing our cybersecurity debt spans far beyond monetary damages. Our food and fuel supplies are at risk, and entire economies can be disrupted.
I believe that with the right measures — strong public and private collaboration — we have an opportunity to construct a future that brings forward the combined power of security and technological advancement built on trust.
Software as a service has been thriving as a sector for years, but it has gone into overdrive in the past year as businesses responded to the pandemic by speeding up the migration of important functions to the cloud. We’ve all seen the news of SaaS startups raising large funding rounds, with deal sizes and valuations steadily climbing. But as tech industry watchers know only too well, large funding rounds and valuations are not foolproof indicators of sustainable growth and longevity.
To scale sustainably, grow its customer base and mature to the point of an exit, a SaaS startup needs to stand apart from the herd at every phase of development. Failure to do so means a poor outcome for founders and investors.
As a founder who pivoted from on-premise to SaaS back in 2016, I have focused on scaling my company (most recently crossing 145,000 customers) and in the process, learned quite a bit about making a mark. Here is some advice on differentiation at the various stages in the life of a SaaS startup.
Differentiation is crucial early on, because it’s one of the only ways to attract customers. Customers can help lay the groundwork for everything from your product roadmap to pricing.
The more you know about your target customers’ pain points with current solutions, the easier it will be to stand out. Take every opportunity to learn about the people you are aiming to serve, and which problems they want to solve the most. Analyst reports about specific sectors may be useful, but there is no better source of information than the people who, hopefully, will pay to use your solution.
The key to success in the SaaS space is solving real problems. Take DocuSign, for example — the company found a way to simply and elegantly solve a niche problem for users with its software. This is something that sounds easy, but in reality, it means spending hours listening to the customer and tailoring your product accordingly.
At Google I/O today Google Cloud announced Vertex AI, a new managed machine learning platform that is meant to make it easier for developers to deploy and maintain their AI models. It’s a bit of an odd announcement at I/O, which tends to focus on mobile and web developers and doesn’t traditionally feature a lot of Google Cloud news, but the fact that Google decided to announce Vertex today goes to show how important it thinks this new service is for a wide range of developers.
The launch of Vertex is the result of quite a bit of introspection by the Google Cloud team. “Machine learning in the enterprise is in crisis, in my view,” Craig Wiley, the director of product management for Google Cloud’s AI Platform, told me. “As someone who has worked in that space for a number of years, if you look at the Harvard Business Review or analyst reviews, or what have you — every single one of them comes out saying that the vast majority of companies are either investing or are interested in investing in machine learning and are not getting value from it. That has to change. It has to change.”
Wiley, who was also the general manager of AWS’s SageMaker AI service from 2016 to 2018 before coming to Google in 2019, noted that Google and others who were able to make machine learning work for themselves saw how it can have a transformational impact, but he also noted that the way the big clouds started offering these services was by launching dozens of services, “many of which were dead ends,” according to him (including some of Google’s own). “Ultimately, our goal with Vertex is to reduce the time to ROI for these enterprises, to make sure that they can not just build a model but get real value from the models they’re building.”
Vertex, then, is meant to be a very flexible platform that allows developers and data scientists across skill levels to quickly train models and then manage the entire lifecycle of those models. Google says it takes about 80% fewer lines of code to train a model on Vertex than on some competing platforms, for example.
The service is also integrated with Vizier, Google’s AI optimizer that can automatically tune hyperparameters in machine learning models. This greatly reduces the time it takes to tune a model and allows engineers to run more experiments and do so faster.
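To make the tuning idea concrete, here is a minimal sketch of automated hyperparameter search. This illustrates the concept a service like Vizier automates; it is not the Vizier API, and the objective function is a stand-in for a real model's validation score.

```python
# Random search over hyperparameters: a toy version of what automated
# tuners do. The objective below is a stand-in that peaks near
# learning_rate=0.01 and batch_size=64.
import random

def objective(learning_rate: float, batch_size: int) -> float:
    """Stand-in for a validation score (higher is better, max 0)."""
    return -((learning_rate - 0.01) ** 2) - ((batch_size - 64) / 256) ** 2

def random_search(n_trials: int, seed: int = 0):
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {
            # Log-uniform sample in [1e-4, 1e-1]
            "learning_rate": 10 ** rng.uniform(-4, -1),
            "batch_size": rng.choice([16, 32, 64, 128, 256]),
        }
        score = objective(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

best_params, best_score = random_search(200)
print(best_params)
```

Services like Vizier replace the naive random sampling here with smarter strategies (such as Bayesian optimization), which is where the time savings come from.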
Vertex also offers a “Feature Store” that helps users serve, share and reuse machine learning features, as well as Vertex Experiments, which helps them accelerate the deployment of their models into production with faster model selection.
Deployment is backed by a continuous monitoring service and Vertex Pipelines, a rebrand of Google Cloud’s AI Platform Pipelines, which helps teams manage the workflows involved in preparing and analyzing data for their models, training them, evaluating them and deploying them to production.
To give a wide variety of developers the right entry points, the service provides three interfaces: a drag-and-drop tool, notebooks for advanced users and — and this may be a bit of a surprise — BigQuery ML, Google’s tool for using standard SQL queries to create and execute machine learning models in its BigQuery data warehouse.
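As a rough illustration of the BigQuery ML entry point, the snippet below builds the kind of standard SQL statement described above. The dataset, table and column names are hypothetical; a client library such as google-cloud-bigquery would then run the statement against the warehouse.

```python
# Illustrative only: dataset/table/column names below are hypothetical.
# BigQuery ML lets you define and train a model in standard SQL.

create_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (
  model_type = 'logistic_reg',
  input_label_cols = ['churned']
) AS
SELECT
  tenure_months,
  monthly_spend,
  churned
FROM `my_dataset.customers`
"""

print(create_model_sql.strip().splitlines()[0])
```

The point of this interface is that an analyst who already knows SQL can train and run a model without leaving the data warehouse.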
“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production,” said Andrew Moore, vice president and general manager of Cloud AI and Industry Solutions at Google Cloud. “We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”
Amount, a company that provides technology to banks and financial institutions, has raised $99 million in a Series D funding round at a valuation of just over $1 billion.
WestCap, a growth equity firm founded by ex-Airbnb and Blackstone CFO Laurence Tosi, led the round. Hanaco Ventures, Goldman Sachs, Invus Opportunities and Barclays Principal Investments also participated.
Notably, the investment comes just over five months after Amount raised $86 million in a Series C round led by Goldman Sachs Growth at a valuation of $686 million. (The original raise was $81 million, but Barclays Principal Investments invested $5 million as part of a second close of the Series C round). And that round came just three months after the Chicago-based startup quietly raised $58 million in a Series B round in March. The latest funding brings Amount’s total capital raised to $243 million since it spun off from Avant — an online lender that has raised over $600 million in equity — in January of 2020.
So, what kind of technology does Amount provide?
In simple terms, Amount’s mission is to help financial institutions “go digital in months — not years” and thus, better compete with fintech rivals. The company formed just before the pandemic hit. But as we have all seen, demand for the type of technology Amount has developed has only increased exponentially this year and last.
CEO Adam Hughes says Amount was spun out of Avant to provide enterprise software built specifically for the banking industry. It partners with banks and financial institutions to “rapidly digitize their financial infrastructure and compete in the retail lending and buy now, pay later sectors,” Hughes told TechCrunch.
Specifically, the 400-person company has built what it describes as “battle-tested” retail banking and point-of-sale technology that it claims accelerates digital transformation for financial institutions. The goal is to give those institutions a way to offer “a secure and seamless digital customer and merchant experience” that leverages Amount’s verification and analytics capabilities.
Image Credits: Amount
HSBC, TD Bank, Regions, Banco Popular and Avant (of course) are among the 10 banks that use Amount’s technology in an effort to simplify their transition to digital financial services. Recently, Barclays US Consumer Bank became one of the first major banks to offer installment point-of-sale options, giving merchants the ability to “white label” POS payments under their own brand (using Amount’s technology).
“The pandemic dramatically accelerated banks’ interest in further digitizing the retail lending experience and offering additional buy now, pay later financing options with the rise of e-commerce,” Hughes, former president and COO at Avant, told TechCrunch. “Banks are facing significant disruption risk from fintech competitors, so an Amount partnership can deliver a world-class digital experience with significant go-to-market advantages.”
Also, he points out, consumers’ digital expectations have changed as a result of the forced digital adoption during the pandemic, with bank branches and stores closing, more banking being done online, and more goods and services being purchased online as well.
Amount delivers retail banking experiences via a variety of channels and a point-of-sale financing product suite, as well as features such as fraud prevention, verification, decisioning engines and account management.
Overall, Amount clients include financial institutions collectively managing nearly $2 trillion in U.S. assets and servicing more than 50 million U.S. customers, according to the company.
Hughes declined to provide any details regarding the company’s financials, saying only that Amount “performed well” as a standalone company in 2020 and that the company is expecting “significant” year-over-year revenue growth in 2021.
Amount plans to use its new capital to further accelerate R&D by investing in its technology and products. It also will be eyeing some acquisitions.
“We see a lot of interesting technology we could layer onto our platform to unlock new asset classes, and acquisition opportunities that would allow us to bring additional features to our platform,” Hughes told TechCrunch.
Avant itself made its first acquisition earlier this year when it picked up Zero Financial, news that TechCrunch covered here.
Kevin Marcus, partner at WestCap, said his firm invested in Amount based on the belief that banks and other financial institutions have “a point-in-time opportunity to democratize access to traditional financial products by accelerating modernization efforts.”
“Amount is the market leader in powering that change,” he said. “Through its best-in-class products, Amount enables financial institutions to enhance and elevate the banking experience for their end customers and maintain a key competitive advantage in the marketplace.”
For Bill Staples, the freshly appointed CEO at New Relic, who takes over on July 1, yesterday was a good day. After more than 20 years in the industry, he was given his own company to run. It’s quite an accomplishment, but now the hard work begins.
Lew Cirne, New Relic’s founder and CEO, who is stepping into the executive chairman role, spent the last several years rebuilding the company’s platform and changing its revenue model, aiming for what he hopes is long-term success.
“All the work we did in re-platforming our data tier and our user interface and the migration to consumption business model, that’s not so we can be a $1 billion New Relic — it’s so we can be a multibillion-dollar New Relic. And we are willing to forgo some short-term opportunity and take some short-term pain in order to set us up for long-term success,” Cirne told TechCrunch after yesterday’s announcement.
On the positive side of the equation, New Relic is one of the market leaders in the application performance monitoring space. Gartner has the company in third place behind Dynatrace and Cisco AppDynamics, and ahead of DataDog. While the Magic Quadrant might not be gospel, it does give you a sense of the relative market positions of each company in a given space.
New Relic competes in the application performance monitoring business, or APM for short. APM enables companies to keep tabs on the health of their applications. That allows them to cut off problems before they happen, or at least figure out why something is broken more quickly. In a world where users can grow frustrated quickly, APM is an important part of the customer experience infrastructure. If your application isn’t working well, customers won’t be happy with the experience and quickly find a rival service to use.
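The core APM idea can be sketched in a few lines: instrument calls, record latency and errors, and flag problems before users churn. This is a toy illustration only; real APM agents (New Relic's included) instrument applications automatically and ship the data to a backend for analysis.

```python
# A toy sketch of APM-style instrumentation: a decorator that records
# each call's latency and success, and flags calls over a threshold.
import time
from functools import wraps

METRICS: list[dict] = []

def monitored(threshold_ms: float = 500.0):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            ok = True
            try:
                return fn(*args, **kwargs)
            except Exception:
                ok = False
                raise
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                METRICS.append({
                    "fn": fn.__name__,
                    "ms": elapsed_ms,
                    "ok": ok,
                    "slow": elapsed_ms > threshold_ms,
                })
        return wrapper
    return decorator

@monitored(threshold_ms=100.0)
def checkout():
    time.sleep(0.01)  # simulate ~10ms of work
    return "ok"

checkout()
print(METRICS[-1]["fn"], METRICS[-1]["slow"])
```

An operations team watching a stream of records like these can spot a slow or failing endpoint before customers start leaving.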
In addition to yesterday’s CEO announcement, New Relic reported earnings. TechCrunch decided to dig into the company’s financials to see just what challenges Staples may face as he moves into the corner office. The resulting picture is one that shows a company doing hard work for a more future-aligned product map and business model, albeit one that may not generate the sort of near-term growth that gives Staples ample breathing room with public investors.
Making long-term bets on a company’s product and business model future can be difficult for Wall Street to swallow in the near term. But such work can garner an incredibly lucrative result; Adobe is a good example of a company that went from license sales to subscription income. There are others in the midst of similar transitions, and they often take growth penalties as older revenues are recycled in favor of a new top line.
So when we observe New Relic’s recent results and guidance for the rest of the year, we’re looking more for future signs of life than for quick gains.
Starting with the basics, New Relic had a better-than-anticipated quarter. An analysis showed the company’s profit and adjusted profit per share both beat expectations. And the company announced $173 million in total revenue, around $6 million more than the market expected.
So, did its shares rise? Yes, but just 5%, leaving them far under their 52-week high. Why such a modest bump after so strong a report? The company’s guidance, we reckon. Per New Relic, it expects its current quarter to bring 6% to 7% growth compared to the year-ago period. And it anticipates roughly 6% growth for its current fiscal year (its fiscal 2022, which will conclude at the end of calendar Q1 2022).
Cloud Run, Google Cloud’s serverless platform for containerized applications, is getting committed use discounts. Users who commit to spending a given amount on using Cloud Run for a year will get a 17% discount on the money they commit. The company offers a similar pre-commitment discount scheme for VM-based Compute Engine instances, as well as automatic “sustained use” discounts for machines that run for more than 25% of a month.
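The discount math is straightforward; a minimal sketch follows, with hypothetical figures (actual billing mechanics are defined by Google Cloud's pricing terms).

```python
# How a 17% committed use discount changes the bill for a user who
# commits to a year of Cloud Run spend. Dollar figures are hypothetical.

DISCOUNT = 0.17

def committed_cost(annual_commitment: float) -> float:
    """Cost after the 17% discount on committed spend."""
    return round(annual_commitment * (1 - DISCOUNT), 2)

print(committed_cost(12_000))  # a $12,000/year commitment costs $9,960
```

The trade-off is the usual one: the discount only pays off if actual usage reliably reaches the committed amount.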
In addition, Google Cloud is introducing a number of new security features for Cloud Run, including the ability to mount secrets from the Google Cloud Secret Manager and binary authorization to help define and enforce policies about how containers are deployed on the service. Cloud Run users can also now use and manage their own encryption keys (by default, Cloud Run uses Google-managed keys) and a new Recommendation Hub inside of Cloud Run will now offer users recommendations for how to better protect their Cloud Run services.
Aparna Sinha, who recently became the director of product management for Google Cloud’s serverless platform, noted that these updates are part of Google Cloud’s push to build what she calls the “next generation of serverless.”
“We’re really excited to introduce our new vision for serverless, which I think is going to help redefine this space,” she told me. “In the past, serverless has meant a certain narrower type of compute, which is focused on functions or a very specific kind of applications, web services, etc. — and what we are talking about with redefining serverless is focusing on the power of serverless, which is the developer experience and the ease of use, but broadening it into a much more versatile platform, where many different types of applications can be run, and building in the Google way of doing DevOps and security and a lot of integrations so that you have access to everything that’s the best of cloud.”
She noted that Cloud Run saw “tremendous adoption” during the pandemic, something she attributes to the fact that businesses were looking to speed up time-to-value from their applications. Ikea, for example, which famously had a hard time moving from in-store to online sales, bet on Google Cloud’s serverless platform to bring down the refresh time of its online store and inventory management system from three hours to less than three minutes after switching to this model.
“That’s kind of the power of serverless, I think, especially looking forward, the ability to build real-time applications that have data about the context, about the inventory, about the customer and can therefore be much more reactive and responsive,” Sinha said. “This is an expectation that customers will have going forward and serverless is an excellent way to deliver that as well as be responsive to demand patterns, especially when they’re changing so much in today’s uncertain environment.”
Since the container model gives businesses a lot of flexibility in what they want to run in these containers — and how they want to develop these applications, since Cloud Run is language-agnostic — Google is now seeing a lot of other enterprises move to this platform as well, both for deploying completely new applications and for modernizing some of their existing services.
For the companies that have predictable usage patterns, the committed use discounts should be an attractive option and it’s likely the more sophisticated organizations that are asking for the kinds of new security features that Google Cloud is introducing today.
“The next generation of serverless combines the best of serverless with containers to run a broad spectrum of apps, with no language, networking or regional restrictions,” Sinha writes in today’s announcement. “The next generation of serverless will help developers build the modern applications of tomorrow—applications that adapt easily to change, scale as needed, respond to the needs of their customers faster and more efficiently, all while giving developers the best developer experience.”
Conventional wisdom over the last year has suggested that the pandemic has driven companies to the cloud much faster than they ever would have gone without that forcing event, with some suggesting it has compressed years of transformation into months. This quarter’s cloud infrastructure revenue numbers appear to be proving that thesis correct.
With The Big Three — Amazon, Microsoft and Google — all reporting this week, the market generated almost $40 billion in revenue, according to Synergy Research data. That’s up $2 billion from last quarter and up 37% over the same period last year. Canalys’s numbers were slightly higher at $42 billion.
As you might expect if you follow this market, AWS led the way with $13.5 billion for the quarter, up 32% year over year. That’s a run rate of $54 billion. While that is an eye-popping number, what’s really remarkable is the yearly revenue growth, especially for a company the size and maturity of Amazon. The law of large numbers would suggest this isn’t sustainable, but the pie keeps growing and Amazon continues to take a substantial chunk.
Overall, AWS held steady with 32% market share. While the revenue numbers keep going up, Amazon’s market share has remained firm for years at around this number. It’s the other companies down market that are gaining share over time, most notably Microsoft, which is now at around 20% share, good for about $7.8 billion this quarter.
Google continues to show signs of promise under Thomas Kurian, hitting $3.5 billion, good for 9% share, as it makes a steady march toward double digits. Even IBM had a positive quarter, led by Red Hat, with cloud revenue good for 5% share, or about $2 billion overall.
Image Credits: Synergy Research
John Dinsdale, chief analyst at Synergy, says that even though AWS and Microsoft have firm control of the market, that doesn’t mean there isn’t money to be made by the companies playing behind them.
“These two don’t have to spend too much time looking in their rearview mirrors and worrying about the competition. However, that is not to say that there aren’t some excellent opportunities for other players. Taking Amazon and Microsoft out of the picture, the remaining market is generating over $18 billion in quarterly revenues and growing at over 30% per year. Cloud providers that focus on specific regions, services or user groups can target several years of strong growth,” Dinsdale said in a statement.
Canalys, another firm that watches the same market as Synergy, had similar findings with slight variations, certainly close enough to confirm one another. They have AWS at 32%, Microsoft at 19% and Google at 7%.
Image Credits: Canalys
Canalys analyst Blake Murray says that there is still plenty of room for growth, and we will likely continue to see big numbers in this market for several years. “Though 2020 saw large-scale cloud infrastructure spending, most enterprise workloads have not yet transitioned to the cloud. Migration and cloud spend will continue as customer confidence rises during 2021. Large projects that were postponed last year will resurface, while new use cases will expand the addressable market,” he said.
The numbers we see are hardly a surprise anymore, and as companies push more workloads into the cloud, the numbers will continue to impress. The only question now is if Microsoft can continue to close the market share gap with Amazon.
Weav, which is building a universal API for commerce platforms, is emerging from stealth today with $4.3 million in funding from a bevy of investors, and a partnership with Brex.
Founded last year by engineers Ambika Acharya, Avikam Agur and Nadav Lidor after participating in the W20 YC batch, Weav joins the wave of fintech infrastructure companies that aim to give fintechs and financial institutions a boost. Specifically, Weav’s embedded technology is designed to give these organizations access to “real time, user-permissioned” commerce data that they can use to create new financial products for small businesses.
Its products allow its customers to connect to multiple platforms with a single API that was developed specifically for the commerce platforms that businesses use to sell products and accept payments. Weav operates under the premise that allowing companies to build and embed new financial products creates new opportunities for e-commerce merchants, creators and other entrepreneurs.
Left to right: Co-founders Ambika Acharya, Nadav Lidor and Avikam Agur; Image courtesy of Weav
In a short amount of time, Weav has seen impressive traction. Recently, Brex launched Instant Payouts for Shopify sellers using the Weav API. It supports platform integrations such as Stripe, Square, Shopify and PayPal. (More on that later.) Since its API went live in January, “thousands” of businesses have used new products and services built on Weav’s infrastructure, according to Lidor. Its API call volume is growing 300% month over month, he said.
And, the startup has attracted the attention of a number of big-name investors, including institutions and the founders of prominent fintech companies. Foundation Capital led its $4.3 million seed round, which also included participation from Y Combinator, Abstract Ventures, Box Group, LocalGlobe, Operator Partners, Commerce Ventures and SV Angel.
A slew of founders and executives also put money in the round, including Brex founders Henrique Dubugras and Pedro Franceschi; Ramp founder Karim Atiyeh; Digits founders Jeff Seibert and Wayne Chang; Hatch founder Thomson Nguyen; GoCardless founder Matt Robinson and COO Carlos Gonzalez-Cadenas; Vouch founder Sam Hodges; Plaid’s Charley Ma as well as executives from fintechs such as Square, Modern Treasury and Pagaya.
Foundation Capital’s Angus Davis said his firm has been investing in fintech infrastructure for over a decade. And personally, before he became a VC, Davis was the founder and CEO of Upserve, a commerce software company. There, he says, he witnessed firsthand “the value of transactional data to enable new types of lending products.”
Foundation has a thesis around the type of embedded fintech that Weav has developed, according to Davis. And it sees a large market opportunity for a new class of financial applications to come to market built atop Weav’s platform.
“We were excited by Weav’s vision of a universal API for commerce platforms,” Davis wrote via email. “Much like Plaid and Envestnet brought universal APIs to banking for consumers, Weav enables a new class of B2B fintech applications for businesses.”
Weav says that by using its API, companies can prompt their business customers to “securely” connect their accounts with selling platforms, online marketplaces, subscription management systems and payment gateways. Once authenticated, Weav aggregates and standardizes sales, inventory and other account data across platforms and develops insights to power new products across a range of use cases, including lending and underwriting; financial planning and analysis; real-time financial services and business management tools.
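Weav has not published its schema, so purely as an illustration of what "aggregates and standardizes" might mean in code (every field name and payload below is hypothetical, not Weav's actual API), a normalizer across two selling platforms could look something like this:

```python
from dataclasses import dataclass

@dataclass
class SaleRecord:
    """One common shape for sales data, regardless of source platform."""
    platform: str
    order_id: str
    amount_cents: int
    currency: str

def normalize(platform: str, raw: dict) -> SaleRecord:
    """Map one platform's payload onto the shared record shape.
    Payload fields here are illustrative, not any real platform's schema."""
    if platform == "shopify":
        return SaleRecord(platform, raw["id"],
                          round(float(raw["total_price"]) * 100), raw["currency"])
    if platform == "stripe":
        return SaleRecord(platform, raw["id"], raw["amount"],
                          raw["currency"].upper())
    raise ValueError(f"unsupported platform: {platform}")

records = [
    normalize("shopify", {"id": "1001", "total_price": "49.99", "currency": "USD"}),
    normalize("stripe", {"id": "ch_1", "amount": 2500, "currency": "usd"}),
]
total_cents = sum(r.amount_cents for r in records)  # combined sales across platforms
```

Once every platform's data lands in one shape, downstream products like lending or cash-flow analysis only have to be written once.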
For the last few years, there has been a rise of API companies, as well as a push for openness in the financial system, but both have largely been focused on consumers, Lidor points out.
“For example, Plaid brings up very rich data about consumers, but when you think about businesses, oftentimes that data is still locked up in all kinds of systems,” he told TechCrunch. “We’re here to provide some of the building blocks and the access to data from everything that has to do with sales and revenue. And we’re really excited about powering products that are meant to make the lives of small businesses, e-commerce sellers and creators much easier, and to get them access to financial products.”
In the case of Brex, Weav’s API allows the startup to essentially offer instant access to funds that otherwise would take a few days or a few weeks for businesses to access.
“Small businesses need access as quickly as possible to their revenue so that they can fund their operations,” Lidor said.
Brex co-CEO Henrique Dubugras said that Weav’s API gives the company the ability to offer real-time funding to more customers selling on more platforms, which saved the company “thousands of engineering hours” and accelerated its rollout timeline by months.
Clearly, the company liked what it saw, considering that its founders personally invested in Weav. Is Weav building the “Plaid for commerce”? Only time will tell.
Experts believe the data-lake market will hit a massive $31.5 billion in the next six years, a prediction that has led to much concern among large enterprises. Why? Well, an increase in data lakes equals an increase in public cloud consumption — which leads to a soaring amount of notifications, alerts and security events.
Around 56% of enterprise organizations handle more than 1,000 security alerts every day, and 70% of IT professionals have seen the volume of alerts double in the past five years, according to a 2020 Dark Reading report that cited research by Sumo Logic. In fact, many in the ONUG community see on the order of 1 million events per second. Yes, per second, which works out to tens of trillions of events per year.
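As a sanity check on that scale, one million events per second compounds to roughly 3.2 × 10^13 events annually:

```python
# Back-of-the-envelope check of the event volume cited above.
EVENTS_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # 31,536,000

events_per_year = EVENTS_PER_SECOND * SECONDS_PER_YEAR
print(f"{events_per_year:,} events per year")  # ~3.2e13: tens of trillions
```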
Now that we are operating in a digitally transformed world, that number only continues to rise, leaving many enterprise IT leaders scrambling to handle these events and asking themselves if there’s a better way.
Compounding matters is the lack of a unified framework for dealing with public cloud security. End users and cloud consumers are forced to deal with increased spend on security infrastructure such as SIEMs, SOAR, security data lakes, tools, maintenance and staff — if they can find them — to operate with an “adequate” security posture.
Public cloud isn’t going away, and neither is the increase in data and security concerns. But enterprise leaders shouldn’t have to continue scrambling to solve these problems. We live in a highly standardized world. Standard operating processes exist for the simplest of tasks, such as elementary school student drop-offs and checking out a company car. But why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?
The ONUG Collaborative had the same question. Security leaders from organizations such as FedEx, Raytheon Technologies, Fidelity, Cigna, Goldman Sachs and others came together to establish the Cloud Security Notification Framework. The goal is to create consistency in how cloud providers report security events, alerts and alarms, so end users receive improved visibility and governance of their data.
Here’s a closer look at the security challenges with public cloud and how CSNF aims to address the issues through a unified framework.
A few key challenges are sparking the increased number of security alerts in the public cloud:
The first two challenges go hand in hand. In March of last year, when companies were forced to shut down their offices and shift operations and employees to a remote environment, the wall between cyber threats and safety came crashing down. This wasn’t a huge issue for organizations already operating remotely, but for major enterprises the pain points quickly boiled to the surface.
Numerous leaders have shared with me how security was outweighed by speed. Keeping everything up and running was prioritized over governance. Each employee effectively held a piece of the company’s network edge in their home office. Without basic governance controls in place or training to teach employees how to spot phishing or other threats, the door was left wide open for attacks.
In 2020, the FBI reported its cyber division was receiving nearly 4,000 complaints per day about security incidents, a 400% increase from pre-pandemic figures.
Another security issue is the growing intelligence of cybercriminals. The Dark Reading report said 67% of IT leaders claim a core challenge is a constant change in the type of security threats that must be managed. Cybercriminals are smarter than ever. Phishing emails, entrance through IoT devices and various other avenues have been exploited to tap into an organization’s network. IT teams are constantly forced to adapt and spend valuable hours focused on deciphering what is a concern and what’s not.
Without a unified framework in place, the volume of incidents will spiral out of control.
CSNF will prove beneficial for cloud providers and IT consumers alike. Security platforms often face long integration timelines to pull in all the data from siloed sources, including asset inventory, vulnerability assessments, IDS products and past security notifications. These timelines can be expensive and inefficient.
But with a standardized framework like CSNF, the integration process for past notifications is pared down and contextual processes are improved for the entire ecosystem. That efficiency reduces spend and frees SecOps and DevSecOps teams to focus on more strategic tasks, such as security posture assessment, developing new products and improving existing solutions.
Here’s a closer look at the benefits a standardized approach can create for all parties:
Working together, all groups can effectively reduce friction from security alerts and create a controlled cloud environment for years to come.
CSNF is in the building phase. Cloud consumers have banded together to compile requirements and continue to provide guidance as a prototype is established. The cloud providers are now building CSNF's key component, the Decorator, which provides an open-source multicloud security reporting translation service.
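The CSNF schema itself isn't reproduced here, but purely as a sketch of what such a translation service does (all field names are hypothetical, not the actual standard), a Decorator-style normalizer maps each provider's native alert fields onto one shared shape:

```python
# Hypothetical sketch of a multicloud alert translator. Field names
# are illustrative only, not the CSNF schema or any provider's real API.
FIELD_MAPS = {
    "aws":   {"severity": "Severity", "resource": "ResourceId", "time": "CreatedAt"},
    "azure": {"severity": "severity", "resource": "resourceId", "time": "startTimeUtc"},
}

def normalize_alert(provider: str, payload: dict) -> dict:
    """Translate a provider-specific alert into the common shape."""
    mapping = FIELD_MAPS[provider]
    return {"provider": provider,
            **{common: payload[native] for common, native in mapping.items()}}

alert = normalize_alert(
    "aws",
    {"Severity": "HIGH", "ResourceId": "i-0abc", "CreatedAt": "2021-04-01T00:00:00Z"},
)
```

With one target shape, a SIEM or SOAR pipeline needs a single parser instead of one per cloud, which is the cost reduction the framework is aiming at.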
The pandemic created many changes in our world, including new security challenges in the public cloud. Reducing IT noise must be a priority to continue operating with solid governance and efficiency, as it enhances a sense of security, eliminates the need for increased resources and allows for more cloud consumption. ONUG is working to ensure that the industry stays a step ahead of security events in an era of rapid digital transformation.
Google today announced a sizable update to its Anthos multicloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.
Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.
Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called this project the “Google Cloud Services Platform,” which launched three years ago). Hybrid and multicloud, it’s fair to say, play a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. Recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.
Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.
He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example.
The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.
Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.
I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.
After an upward revision, UiPath priced its IPO last night at $56 per share, a few dollars above its raised target range. The above-range price meant that the unicorn put more capital into its books through its public offering.
For a company in a market as competitive as robotic process automation (RPA), the funds are welcome. In fact, RPA has been top of mind for startups and established companies alike over the last year or so. In that time frame, enterprise stalwarts like SAP, Microsoft, IBM and ServiceNow have been buying smaller RPA startups and building their own, all in an effort to muscle into an increasingly lucrative market.
In June 2019, Gartner reported that RPA was the fastest-growing area in enterprise software, and while growth has slowed since, the sector is still attracting attention. UiPath, which Gartner found was the market leader, has been riding that wave, and today’s capital influx should help the company maintain its market position.
It’s worth noting that when the company had its last private funding round in February, it brought home $750 million at an impressive valuation of $35 billion. But as TechCrunch noted over the course of its pivot to the public markets, that round valued the company above its final IPO price. As a result, this week’s $56-per-share public offer wound up being something of a modest down-round IPO to UiPath’s final private valuation.
Then, a broader set of public traders got hold of its stock and bid its shares higher. The former unicorn’s shares closed their first day’s trading at precisely $69, above the per-share price at which the company closed its final private round.
So despite a somewhat circuitous route, UiPath closed its first day as a public company worth more than it was in its Series F round — when it sold 12,043,202 shares at $62.27576 apiece, per SEC filings. More simply, UiPath closed today worth more per share than it was in February.
How you might value the company, whether you prefer a simple or fully-diluted share count, is somewhat immaterial at this juncture. UiPath had a good day.
While it’s hard to know what the company might do with the proceeds, chances are it will continue to try to expand its platform beyond pure RPA, which could become market-limited over time as companies look at other, more modern approaches to automation. By adding additional automation capabilities — organically or via acquisitions — the company can begin covering broader parts of its market.
TechCrunch spoke with UiPath CFO Ashim Gupta today, curious about the company’s choice of a traditional IPO, its general avoidance of adjusted metrics in its SEC filings, and the IPO market’s current temperature. The final question was on our minds, as some companies have pulled their public listings in the wake of a market described as “challenging”.
If you only stayed up to date with the Coinbase direct listing this week, you’re forgiven. It was, after all, one heck of a flotation.
But underneath the cryptocurrency exchange’s public debut, other IPO news that matters did happen this week. And the news adds up to a somewhat muddled picture of the current IPO market.
To cap off the week, let’s run through IPO news from UiPath, Coinbase, Grab, AppLovin and Zenvia. The aggregate dataset should help you form your own perspective about where today’s IPO markets really are in terms of warmth for the often-unprofitable unicorns of the world.
Recall that we’re in the midst of a slightly more turbulent IPO window than we saw during the last quarter. After a stretch in which seemingly every IPO priced above range and then charged higher on opening day, several companies pulled their offerings as the second quarter started. It was a surprise.
Since then we’ve seen Compass go public, but not at quite the level of performance it might have anticipated, and, then, this week, much has happened.
What follows is a mini-digest of IPO news from the week, tagged with our best read of just how bullish (or not) the happening really was:
When Dell announced it was spinning out VMware yesterday, the move itself wasn’t surprising; there had been public speculation for some time. But Dell could have gone a number of ways in this deal, despite its choice to spin VMware out as a separate company with a special dividend instead of an outright sale.
The dividend route, which involves a payment to shareholders between $11.5 billion and $12 billion, has the advantage of being tax-free (or at least that’s what Dell hopes as it petitions the IRS). For Dell, which owns 81% of VMware, the dividend translates to somewhere between $9.3 billion and $9.7 billion in cash, which the company plans to use to pay down a portion of the huge debt it still holds from its $58 billion EMC purchase in 2016.
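The cash figures quoted follow directly from Dell's 81% stake applied to the planned dividend:

```python
# Dell's share of the planned VMware dividend, per the figures above.
OWNERSHIP = 0.81
DIVIDEND_LOW, DIVIDEND_HIGH = 11.5e9, 12.0e9  # dollars

dell_low = OWNERSHIP * DIVIDEND_LOW    # ≈ $9.3 billion
dell_high = OWNERSHIP * DIVIDEND_HIGH  # ≈ $9.7 billion
```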
VMware was the crown jewel in that transaction, giving Dell an inroad to the cloud it had lacked prior to the deal. For context, VMware popularized the notion of the virtual machine, a concept that led to the development of cloud computing as we know it today. It has since expanded much more broadly beyond that, giving Dell a solid foothold in cloud native computing.
Dell hopes to have its cake and eat it too with this deal: It generates a large slug of cash to use for personal debt relief while securing a five-year commercial deal that should keep the two companies closely aligned. Dell CEO Michael Dell will remain chairman of the VMware board, which should help smooth the post-spinout relationship.
But could Dell have extracted more cash out of the deal?
Patrick Moorhead, principal analyst at Moor Insights and Strategies, says that beyond the cash transaction, the deal provides a way for the companies to continue working closely together with the least amount of disruption.
“In the end, this move is more about maximizing the Dell and VMware stock price [in a way that] doesn’t impact customers, ISVs or the channel. Wall Street wasn’t valuing the two companies together nearly as [strongly] as I believe it will as separate entities,” Moorhead said.
More than half a decade ago, my Battery Ventures partner Neeraj Agrawal penned a widely read post offering advice for enterprise-software companies hoping to reach $100 million in annual recurring revenue.
His playbook, dubbed “T2D3” — for “triple, triple, double, double, double,” referring to the stages at which a software company’s revenue should multiply — helped many high-growth startups index their growth. It also highlighted the broader explosion in industry value creation stemming from the transition of on-premise software to the cloud.
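Applied to an assumed starting ARR of $2 million (the starting figure is illustrative, not from the original post), the T2D3 multipliers trace a path past the $100 million mark in five steps:

```python
# The T2D3 trajectory from an illustrative $2M starting ARR.
multipliers = [3, 3, 2, 2, 2]  # "triple, triple, double, double, double"
arr = 2.0  # $M, an assumed starting point
trajectory = [arr]
for m in multipliers:
    arr *= m
    trajectory.append(arr)
# 2 -> 6 -> 18 -> 36 -> 72 -> 144 ($M): past $100M ARR by the final double
```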
Fast forward to today, and many of T2D3’s insights are still relevant. But now it’s time to update T2D3 to account for some of the tectonic changes shaping a broader universe of B2B tech — and pushing companies to grow at rates we’ve never seen before.
One of the biggest factors driving billion-dollar B2Bs is a simple but important shift in how organizations buy enterprise technology today.
I call this new paradigm “billion-dollar B2B.” It refers to the forces shaping a new class of cloud-first, enterprise-tech behemoths with the potential to reach $1 billion in ARR — and achieve market capitalizations in excess of $50 billion or even $100 billion.
In the past several years, we’ve seen a pioneering group of B2B standouts — Twilio, Shopify, Atlassian, Okta, Coupa*, MongoDB and Zscaler, for example — approach or exceed the $1 billion revenue mark and see their market capitalizations surge 10 times or more from their IPOs to the present day (as of March 31), according to CapIQ data.
More recently, iconic companies like data giant Snowflake and video-conferencing mainstay Zoom came out of the IPO gate at even higher valuations. Zoom, with 2020 revenue of just under $883 million, is now worth close to $100 billion, per CapIQ data.
Image Credits: Battery Ventures via FactSet. Note that market data is current as of April 3, 2021.
In the wings are other B2B super-unicorns like Databricks* and UiPath, which have each raised private financing rounds at valuations of more than $20 billion, per public reports, which is unprecedented in the software industry.
A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.
A person with knowledge of the incident told TechCrunch that the incident happened in January after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.
The company fixed the data spill, but has not yet alerted its customers.
Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.
TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names, email addresses, home addresses and order details. Each record also included the IP address of the device the customer used to place the order.
The data set also included the personal data and order details of company executives.
It’s not clear how the security lapse happened since storage buckets on Amazon’s cloud are private by default, or when the company learned of the exposure.
Companies are required to disclose data breaches and security lapses to state attorneys general, but no notices have been published in states where they are required by law, such as California. The data set contained records on more than 1,800 California residents, more than three times the number needed to trigger mandatory disclosure under the state’s data breach notification law.
It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.
In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.
“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.
1Password, the password management service that competes with the likes of LastPass and BitWarden, today announced a major push beyond the basics of password management and into the infrastructure secrets management space. To do so, the company has acquired secrets management service SecretHub and is now launching its new 1Password Secrets Automation service.
1Password did not disclose the price of the acquisition. According to CrunchBase, Netherlands-based SecretHub never raised any institutional funding ahead of today’s announcement.
For companies like 1Password, moving into the enterprise space, where corporate credentials, API tokens, keys and certificates must be managed for individual users and increasingly complex infrastructure services, seems like a natural move. And with the combination of 1Password and its new Secrets Automation service, businesses can use a single tool that covers everything from managing their employees’ passwords to handling infrastructure secrets. 1Password is currently in use by more than 80,000 businesses worldwide, and many of them are surely potential users of its Secrets Automation service, too.
“Companies need to protect their infrastructure secrets as much if not more than their employees’ passwords,” said Jeff Shiner, CEO of 1Password. “With 1Password and Secrets Automation, there is a single source of truth to secure, manage and orchestrate all of your business secrets. We are the first company to bring both human and machine secrets together in a significant and easy-to-use way.”
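1Password hasn't detailed the Secrets Automation API here, so the sketch below is a generic, hypothetical illustration of the underlying pattern: services fetch machine secrets from a managed store at runtime instead of hardcoding them in source or config.

```python
class SecretsClient:
    """Toy stand-in for a secrets-automation client. Real services
    (1Password Secrets Automation, HashiCorp Vault, etc.) expose
    authenticated APIs; this only illustrates the runtime-fetch pattern."""

    def __init__(self, store: dict):
        self._store = store  # in a real service, a remote encrypted vault

    def get(self, name: str) -> str:
        try:
            return self._store[name]
        except KeyError:
            raise KeyError(f"secret {name!r} not provisioned") from None

# Instead of hardcoding credentials, code asks the manager at runtime:
client = SecretsClient({"db/password": "s3cr3t"})
db_password = client.get("db/password")
```

The payoff is a single source of truth: rotate the secret in the store and every consumer picks up the new value without a code change.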
In addition to the acquisition and new service, 1Password also today announced a new partnership with GitHub. “We’re partnering with 1Password because their cross-platform solution will make life easier for developers and security teams alike,” said Dana Lawson, VP of partner engineering and development at GitHub, the largest and most advanced development platform in the world. “With the upcoming GitHub and 1Password Secrets Automation integration, teams will be able to fully automate all of their infrastructure secrets, with full peace of mind that they are safe and secure.”
China is pushing forward an internet society where economic and public activities increasingly take place online. In the process, troves of citizen and government data get transferred to cloud servers, raising concerns over information security. One startup called ThreatBook sees an opportunity in this revolution and pledges to protect corporations and bureaucracies against malicious cyberattacks.
Antivirus and security software has been around in China for several decades, but until recently, enterprises were procuring them simply to meet compliance requests, Xue Feng, founder and CEO of six-year-old ThreatBook, told TechCrunch in an interview.
Starting around 2014, internet accessibility began to expand rapidly in China, ushering in an explosion of data. Information previously stored in physical servers was moving to the cloud. Companies realized that a cyber attack could result in a substantial financial loss and started to pay serious attention to security solutions.
In the meantime, cyberspace is emerging as a battlefield where competition between states plays out. Malicious actors may target a country’s critical digital infrastructure or steal key research from a university database.
“The amount of cyberattacks between countries is reflective of their geopolitical relationships,” observed Xue, who oversaw information security at Amazon China before founding ThreatBook. Previously, he was the director of internet security at Microsoft in China.
“If two countries are allies, they are less likely to attack one another. China has a very special position in geopolitics. Besides its tensions with the other superpowers, cyberattacks from smaller, nearby countries are also common.”
Like other emerging SaaS companies, ThreatBook sells software and charges a subscription fee for annual services. More than 80% of its current customers are big corporations in finance, energy, the internet industry and manufacturing. Government contracts make up a smaller slice. With a 500 million yuan ($76 million) Series E round that closed in March, ThreatBook boosted its total capital raised to over 1 billion yuan from investors including Hillhouse Capital.
Xue declined to disclose the company’s revenues or valuation but said 95% of the firm’s customers have chosen to renew their annual subscriptions. He added that the company has met the “preliminary requirements” of the Shanghai Exchange’s STAR board, China’s equivalent to NASDAQ, and will go public when the conditions are ripe.
“It takes our peers 7-10 years to go public,” said Xue.
ThreatBook compares itself to Silicon Valley’s CrowdStrike, which filed to go public in 2019 and detects threats by monitoring a company’s “endpoints,” such as employee laptops and mobile devices that connect to the internal network from outside the corporate firewall.
ThreatBook similarly has a suite of software that goes onto the devices of a company’s employees, automatically detects threats and comes up with a list of solutions.
“It’s like installing a lot of security cameras inside a company,” said Xue. “But the thing that matters is what we tell customers after we capture issues.”
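ThreatBook's implementation is not public, but as a toy illustration of the "security camera" idea, an endpoint agent might match outbound connections against a threat-intelligence indicator feed (the indicators and event fields below are made up):

```python
# Toy sketch of indicator matching, the kind of check an endpoint agent
# performs; not ThreatBook's actual implementation. Indicators are fake.
KNOWN_BAD = {"203.0.113.7", "evil.example.com"}  # illustrative IoC feed

def flag_events(events):
    """Return the events whose destination matches a known indicator."""
    return [e for e in events if e["dest"] in KNOWN_BAD]

alerts = flag_events([
    {"host": "laptop-42", "dest": "203.0.113.7"},
    {"host": "laptop-42", "dest": "api.github.com"},
])
# one alert: the connection to 203.0.113.7
```

As Xue's comment suggests, the capture step is the easy part; the value is in what the vendor recommends once an event like this is flagged.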
SaaS providers in China are still in the phase of educating the market and lobbying enterprises to pay. Of the 3,000 companies that ThreatBook serves, only 300 are paying, so there is plentiful room for monetization. Willingness to spend also differs across sectors, with financial institutions happy to shell out several million yuan ($1 = 6.54 yuan) a year, while a tech startup may only want to pay a fraction of that.
Xue’s vision is to take ThreatBook global. The company had plans to expand overseas last year but was held back by the COVID-19 pandemic.
“We’ve had a handful of inquiries from companies in Southeast Asia and the Middle East. There may even be room for us in markets with mature [cybersecurity companies] like Europe and North America,” said Xue. “As long as we are able to offer differentiation, a customer may still consider us even if it has an existing security solution.”
Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.
The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the “cloud financial management” space to establish best practices and standards. As the term implies, “cloud financial management” is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that a number of successful startups do nothing else but help businesses optimize their cloud spend (and ideally lower it).
Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.
“FinOps best practices are essential for companies to monitor, analyze, and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, Vice President of Engineering and Product at Google Cloud. “More visibility, efficiency, and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”
Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open source standards for cloud financial management.”
“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, Executive Director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the 2nd of 3 dedicated Premier Member Technical Advisory Council seats.”