When Dell acquired EMC in 2016 for $67 billion, it created a complicated consortium of interconnected organizations. Some, like VMware and Pivotal, operate as largely independent companies: they have their own boards of directors, can make their own acquisitions and are publicly traded on the stock market. Yet they work closely within the Dell family, partnering where it makes sense. So when Pivotal’s stock price plunged recently, VMware stepped in, agreeing yesterday to buy the faltering company for $2.7 billion.
Pivotal went public last year and struggled at times, but in June the wheels started to come off after a poor quarterly earnings report. The company had what MarketWatch aptly called “a train wreck of a quarter.”
How bad was it? So bad that its stock price dropped 42% the day after it reported earnings. The quarter itself wasn’t terrible, with revenue up year over year, but the guidance was another story. The company cut its fiscal 2020 revenue guidance by $40 million to $50 million, and its guidance for the upcoming quarter also came in considerably below consensus Wall Street estimates.
The stock price plunged from a high of $21.44 on May 30 to a low of $8.30 on August 14, and the company’s market cap fell over the same period from $5.828 billion to $2.257 billion. That’s when VMware acknowledged it was considering buying the struggling company.
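For reference, the share price and market cap declines track each other almost exactly in percentage terms. A quick check using the figures above:

```python
# Quick check of the percentage declines implied by the figures above.
def pct_decline(start, end):
    """Percentage drop from start to end."""
    return (start - end) / start * 100

price_drop = pct_decline(21.44, 8.30)       # share price, May 30 -> Aug 14
mcap_drop = pct_decline(5.828e9, 2.257e9)   # market cap over the same span

print(f"Share price decline: {price_drop:.1f}%")  # roughly 61%
print(f"Market cap decline:  {mcap_drop:.1f}%")   # roughly 61%
```

Both figures work out to about a 61% decline, which makes sense given the share count barely changed over that window.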
Some eight months after it was reported that Ping Identity’s owner Vista Equity had hired bankers to explore a public listing, today Ping Identity took the plunge: the Colorado-based online ID management company has filed an S-1 form indicating that it plans to raise up to $100 million in an IPO on the Nasdaq under the ticker “PING.”
While the initial S-1 filing doesn’t have an indication of price range, Ping is said to be looking at a valuation of between $2 billion and $3 billion in this listing.
The company has been around since 2001, founded by Andre Durand (who is still the CEO), and it was acquired by Vista in 2016 for about $600 million — at a time when a clutch of enterprise companies that looked like strong IPO candidates were going the private equity route and staying private instead.
But more recently, there has been a surge in demand for better IT security linked to identity and authentication management, so it seems that Vista Equity is selling up. The PE firm is taking advantage of the fact that the market’s currently very strong for tech IPOs, but there is so much M&A in enterprise right now (just yesterday VMware acquired not one but two companies, Carbon Black for $2.1 billion and Pivotal for $2.7 billion) that I can’t help but wonder if something might move here too.
The S-1 reveals a number of details on the company’s financials, indicating that it’s currently unprofitable but on a steady growth curve. Ping had revenues of $112.9 million in the first six months of 2019, versus $99.5 million in the same period a year before. Its losses have been shrinking in recent years, with a net loss of $3.1 million in the first six months of this year versus $5.8 million a year before (notably, the company was profitable in 2017, with net income of $19 million; the swing appears to stem from acquisitions and investment in growth).
Its annual run rate, meanwhile, was $198 million for the first six months of the year, compared to $159.6 million in the same period a year ago.
The area of identity and access management has become a cornerstone of enterprise IT, with companies looking for efficient and secure ways to centralise how not just their employees, but their customers, their partners and various connected devices on their networks can be authenticated across their cloud and on-premise applications.
The demand for secure solutions covering all the different aspects of a company’s IT stack has grown rapidly over recent years, spurred not just by an increased move to centralised applications served through the cloud, but also by the drastic rise in breaches where malicious hackers have exploited vulnerabilities and loopholes in companies’ sign-on screens.
Ping has been one of the bigger companies building services in this area and tackling all of those use cases, competing with the likes of Okta, OneLogin, Auth0, Cisco and dozens more off-the-shelf and custom-built solutions.
The company offers its services on a SaaS basis, covering services like secure sign-on, multi-factor authentication, API access security, personalised and unified profile directories, data governance and AI-based security policies. It claims to be the pioneer of “Intelligent Identity,” using AI to help its system analyse user, device and network behavior to better identify potentially malicious activity.
Three years after closing a $9.3 billion deal to acquire NetSuite, several Oracle board members have written an extraordinary letter to the Delaware court, approving a shareholder lawsuit against company executives Larry Ellison and Safra Catz over the 2016 deal. Reuters broke this story.
According to Reuters’ Alison Frankel, three board members, including former U.S. Defense Secretary Leon Panetta, sent a letter on August 15 to Sam Glasscock III, Vice Chancellor of the Delaware Court of Chancery in Georgetown, Delaware, approving the suit as members of a special board entity known as the Special Litigation Committee.
The lawsuit is what is known in legal parlance as a derivative suit. “Since shareholders are generally allowed to file a lawsuit in the event that a corporation has refused to file one on its own behalf, many derivative suits are brought against a particular officer or director of the corporation for breach of contract or breach of fiduciary duty,” the legal site Justia explains.
The letter went on to say that there was an attempt to settle the suit, originally launched in 2017, through negotiation outside of court, but when that attempt failed, the directors wrote to the court stating that the suit should be allowed to proceed.
As Frankel wrote in her article, the lawsuit, which was originally filed by Firemen’s fund could be worth billions:
One of the lead lawyers for the Firemen’s fund, Joel Friedlander of Friedlander & Gorris, said at a hearing in June that shareholders believe the breach-of-duty claims against Oracle and NetSuite executives are worth billions of dollars. So in last week’s letter, Oracle’s board effectively unleashed plaintiffs’ lawyers to seek ten-figure damages against its own members.
It’s worth pointing out, as we reported at the time of the acquisition, that Larry Ellison was involved in setting up NetSuite in the late 1990s and was a major shareholder at the time of the deal.
Oracle was struggling to find its cloud footing in 2016, and the thinking was that by buying an established SaaS player like NetSuite, it could build out its cloud business much faster than by developing something similar internally. A June Synergy Research report on SaaS market share, while noting the market was fragmented, still showed Oracle far behind the pack in spite of that deal three years ago.
We reached out to Oracle regarding this story, but it declined to comment.
Once considered the most boring of topics, enterprise software is now getting infused with such energy that it is arguably the hottest space in tech.
It’s been a long time coming. And it is the developers, software engineers and veteran technologists with deep experience building at-scale technologies who are energizing enterprise software. They have learned to build resilient and secure applications with open-source components through continuous delivery practices that align technical requirements with customer needs. And now they are developing the application architectures and tools that let enterprises make the same transformation at scale.
“Enterprise had become a dirty word, but there’s a resurgence going on and Enterprise doesn’t just mean big and slow anymore,” said JD Trask, co-founder of Raygun enterprise monitoring software. “I view the modern enterprise as one that expects their software to be as good as consumer software. Fast. Easy to use. Delivers value.”
The shift to scale out computing and the rise of the container ecosystem, driven largely by startups, is disrupting the entire stack, notes Andrew Randall, vice president of business development at Kinvolk.
In advance of TechCrunch’s first enterprise-focused event, TC Sessions: Enterprise, The New Stack examined the commonalities between the numerous enterprise-focused companies who sponsor us. Their experiences help illustrate the forces at play behind the creation of the modern enterprise tech stack. In every case, the founders and CTOs recognize the need for speed and agility, with the ultimate goal of producing software that’s uniquely in line with customer needs.
We’ll explore these topics in more depth at The New Stack pancake breakfast and podcast recording at TC Sessions: Enterprise. Starting at 7:45 a.m. on Sept. 5, we’ll be serving breakfast and hosting a panel discussion on “The People and Technology You Need to Build a Modern Enterprise,” with Sid Sijbrandij, founder and CEO, GitLab, and Frederic Lardinois, enterprise writer and editor, TechCrunch, among others. Questions from the audience are encouraged and rewarded, with a raffle prize awarded at the end.
Traditional virtual machine infrastructure was originally designed to help manage server sprawl for systems-of-record software — not to scale out across a fabric of distributed nodes. The disruptors transforming the historical technology stack view the application, not the hardware, as the main focus of attention. Companies in The New Stack’s sponsor network provide examples of the shift toward software that they aim to inspire in their enterprise customers. Portworx provides persistent state for containers; NS1 offers a DNS platform that orchestrates the delivery of internet and enterprise applications; Lightbend combines the scalability and resilience of microservices architecture with the real-time value of streaming data.
“Application development and delivery have changed. Organizations across all industry verticals are looking to leverage new technologies, vendors and topologies in search of better performance, reliability and time to market,” said Kris Beevers, CEO of NS1. “For many, this means embracing the benefits of agile development in multicloud environments or building edge networks to drive maximum velocity.”
Enterprise software startups are delivering that value, while they embody the practices that help them deliver it.
Speed matters, but only if the end result aligns with customer needs. Faster time to market is often cited as the main driver behind digital transformation in the enterprise. But speed must also be matched by agility and the ability to adapt to customer needs. That means embracing continuous delivery, which Martin Fowler describes as the ability to put software into production at any time, with the workflows and the pipeline to support it.
Continuous delivery (CD) makes it possible to develop software that can adapt quickly, meet customer demands and provide a level of satisfaction with benefits that enhance the value of the business and the overall brand. CD has become a major category in cloud-native technologies, with companies such as CircleCI, CloudBees, Harness and Semaphore all finding their own ways to approach the problems enterprises face as they often struggle with the shift.
“The best-equipped enterprises are those [that] realize that the speed and quality of their software output are integral to their bottom line,” Rob Zuber, CTO of CircleCI, said.
Speed is also in large part why monitoring and observability have held their value and continue to be part of the larger dimension of at-scale application development, delivery and management. Better data collection and analysis, assisted by machine learning and artificial intelligence, allow companies to quickly troubleshoot and respond to customer needs with reduced downtime and tight DevOps feedback loops. Companies in our sponsor network that fit in this space include Raygun for error detection; Humio, which provides observability capabilities; InfluxData with its time-series data platform for monitoring; Epsagon, the monitoring platform for serverless architectures and Tricentis for software testing.
“Customer focus has always been a priority, but the ability to deliver an exceptional experience will now make or break a “modern enterprise,” said Wolfgang Platz, founder of Tricentis, which makes automated software testing tools. “It’s absolutely essential that you’re highly responsive to the user base, constantly engaging with them to add greater value. This close and constant collaboration has always been central to longevity, but now it’s a matter of survival.”
DevOps is a bit overplayed, but it still is the mainstay workflow for cloud-native technologies and critical to achieving engineering speed and agility in a decoupled, cloud-native architecture. However, DevOps is also undergoing its own transformation, buoyed by the increasing automation and transparency allowed through the rise of declarative infrastructure, microservices and serverless technologies. This is cloud-native DevOps. Not a tool or a new methodology, but an evolution of the longstanding practices that further align developers and operations teams — but now also expanding to include security teams (DevSecOps), business teams (BizDevOps) and networking (NetDevOps).
“We are in this constant feedback loop with our customers where, while helping them in their digital transformation journey, we learn a lot and we apply these learnings for our own digital transformation journey,” Francois Dechery, chief strategy officer and co-founder of CloudBees, said. “It includes finding the right balance between developer freedom and risk management. It requires the creation of what we call a continuous everything culture.”
Leveraging open-source components is also core in achieving speed for engineering. Open-source use allows engineering teams to focus on building code that creates or supports the core business value. Startups in this space include Tidelift and open-source security companies such as Capsule8. Organizations in our sponsor portfolio that play roles in the development of at-scale technologies include The Linux Foundation, the Cloud Native Computing Foundation and the Cloud Foundry Foundation.
“Modern enterprises … think critically about what they should be building themselves and what they should be sourcing from somewhere else,” said Chip Childers, CTO of Cloud Foundry Foundation. “Talented engineers are one of the most valuable assets a company can apply to being competitive, and ensuring they have the freedom to focus on differentiation is super important.”
You need great engineering talent, and you need to give those engineers the ability to build secure and reliable systems at scale, trusting them with direct access to hardware as a differentiator.
The bleeding edge can bleed too much for the liking of enterprise customers, said James Ford, an analyst and consultant.
“It’s tempting to live by mantras like ‘wow the customer,’ ‘never do what customers want (instead build innovative solutions that solve their need),’ ‘reduce to the max,’ … and many more,” said Bernd Greifeneder, CTO and co-founder of Dynatrace. “But at the end of the day, the point is that technology is here to help with smart answers … so it’s important to marry technical expertise with enterprise customer need, and vice versa.”
How the enterprise adopts new ways of working will affect how startups ultimately fare. The container hype has cooled a bit and technologists have more solid viewpoints about how to build out architecture.
One notable trend to watch: the role of cloud services built on projects such as Firecracker. AWS Lambda is built on Firecracker, the open-source virtualization technology originally developed at Amazon Web Services. Firecracker offers the speed and density that come with containers along with the hardware isolation and security capabilities of virtualization. Startups such as Weaveworks have built a platform on Firecracker, and the Kata Containers project, hosted by the OpenStack Foundation, also supports it.
“Firecracker makes it easier for the enterprise to have secure code,” Ford said, because it reduces the attack surface. “With its minimal footprint, the user has control. It means fewer features that are misconfigured, which is a major security vulnerability.”
Enterprise startups are hot. How they succeed will determine whether they can stay distinctive in the face of the ever-consuming cloud providers and the at-scale startups that inevitably launch their own services. The answer may lie in the middle, with purpose-built architectures that use open-source components such as Firecracker to provide the capabilities of containers along with the hardware isolation that comes with virtualization.
Hope to see you at TC Sessions: Enterprise. Get there early. We’ll be serving pancakes to start the day. As we like to say, “Come have a short stack with The New Stack!”
Remediant, a startup that helps companies secure privileged access in a modern context, announced a $15 million Series A today led by Dell Technologies Capital and ForgePoint Capital.
Remediant’s co-founders, Paul Lanzi and Tim Keeler, worked in biotech for years and saw a problem first-hand with the way companies secured privileged access. It was granted carte blanche to certain individuals in the organization, and the founders believed that limiting that access would make the space more secure and less vulnerable to hackers.
Lanzi says they started the company with two core concepts. “The first concept is the ability to assess or detect all of the places where privileged accounts exist and what systems they have access to. The second concept is to strip away all of the privileged access from all of those accounts and grant it back on a just-in-time basis,” Lanzi explained.
If you’re thinking that could get in the way of people who need access to do their jobs, the founders, as former IT admins, considered that. Remediant is based on a Zero Trust model, where you have to prove you have the right to access the privileged area. But it does provide a reasonable baseline amount of time for users who need it, within the confines of continuously enforced access.
“Continuous enforcement is part of what we do, so by default we grant you four hours of access when you need that access, and then after that four hours, even if you forget to come back and end your session, we will automatically revoke that access. In that way all of the systems that are protected by SecureONE (the company’s flagship product) are held in this Zero Trust state where no one has access to them on a day-to-day basis,” Lanzi said.
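The just-in-time model Lanzi describes can be sketched in a few lines. This is a toy illustration of the idea, not Remediant’s actual implementation: no account has standing access, every grant carries an expiry, and expiry is enforced even if the user never ends the session.

```python
import time

class JustInTimeGrants:
    """Toy sketch of just-in-time privileged access: by default no account
    has standing access; a grant lasts a fixed window and is revoked
    automatically even if the user never ends the session."""

    def __init__(self, window_seconds=4 * 60 * 60):  # 4-hour default, per the article
        self.window = window_seconds
        self._grants = {}  # (user, system) -> expiry timestamp

    def grant(self, user, system, now=None):
        now = time.time() if now is None else now
        self._grants[(user, system)] = now + self.window

    def has_access(self, user, system, now=None):
        now = time.time() if now is None else now
        expiry = self._grants.get((user, system))
        if expiry is None or now > expiry:
            # Lazily revoke: an expired grant is removed on the next check.
            self._grants.pop((user, system), None)
            return False
        return True

grants = JustInTimeGrants()
grants.grant("alice", "db-prod", now=0)
print(grants.has_access("alice", "db-prod", now=3600))      # True: inside the window
print(grants.has_access("alice", "db-prod", now=5 * 3600))  # False: auto-revoked
```

The key property is that revocation requires no action from anyone: access simply stops being valid once the window closes.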
Remediant SecureONE Dashboard. Screenshot: Remediant
The company has been bootstrapped until now, and has actually been profitable, something unusual for a startup at this stage of development, but Lanzi says they decided to take the investment in order to shift gears and concentrate on growth and product expansion.
Deepak Jeevankumar, managing director at investor Dell Technologies Capital, says it’s not easy for security startups to rise above the noise, but he saw something in Remediant’s founders. “Tim and Paul came from the practitioner’s viewpoint. They knew the actual problems that people face in terms of privileged access. So they had a very strong empathy for the customer’s problem, because they lived through it,” Jeevankumar told TechCrunch.
He added that the privileged access market hasn’t really been updated in two decades. “It’s a market ripe for disruption. They are combining the just-in-time philosophy with the Zero Trust philosophy, and are bringing that to the crown jewel of administrative access,” he said.
The company’s tools are installed on the customer’s infrastructure, either on-prem or in the cloud. It doesn’t have a pure cloud product at the moment, but it plans a SaaS version down the road to help small and medium-sized businesses solve the privileged access problem.
Lanzi says they are also looking to expand the product line in other ways with this investment. “The basic philosophies that underpin our technology are broadly applicable. We want to start applying our technology in those other areas as well. So as we think toward a future that looks more like cloud and more like DevOps, we want to be able to add more of those features to our products,” he said.
NASA and Hewlett Packard Enterprise (HPE) have teamed up to build a new supercomputer, which will serve NASA’s Ames Research Center in California and develop models and simulations of the landing process for Artemis Moon missions.
The new supercomputer is called “Aitken,” named after American astronomer Robert Grant Aitken, and it can run simulations at up to 3.69 petaFLOPS of theoretical peak performance. Aitken is custom-designed by HPE and NASA to work with the Ames modular data center, a project NASA undertook starting in 2017 to massively reduce the amount of water and energy used in cooling its supercomputing hardware.
Aitken employs second-generation Intel Xeon processors and Mellanox InfiniBand high-speed networking, and has 221 TB of memory on board. It’s the result of four years of collaboration between NASA and HPE, and it will model different methods of entry, descent and landing for Moon-destined Artemis spacecraft, running simulations to determine possible outcomes and help identify the best, safest approach.
This isn’t the only collaboration between HPE and NASA: The enterprise computer maker built for the agency a new kind of supercomputer able to withstand the rigors of space, and sent it up to the ISS in 2017 for preparatory testing ahead of potential use on longer missions, including Mars. The two partners then opened that supercomputer for use in third-party experiments last year.
HPE also announced earlier this year that it is buying supercomputer maker Cray for $1.3 billion. Cray is another long-time partner of NASA’s supercomputing efforts, dating back to the space agency’s creation of a dedicated computational modeling division and the establishment of its Central Computing Facility at Ames Research Center.
Splunk, the publicly traded data processing and analytics company, today announced that it has agreed to acquire SignalFx for a total price of about $1.05 billion. Approximately 60% of this will be in cash and 40% in Splunk common stock. The companies expect the acquisition to close in the second half of Splunk’s fiscal 2020.
SignalFx, which emerged from stealth in 2015, provides real-time cloud monitoring solutions, predictive analytics and more. Upon close, Splunk argues, this acquisition will allow it to become a leader “in observability and APM for organizations at every stage of their cloud journey, from cloud-native apps to homegrown on-premises applications.”
Indeed, the acquisition will likely make Splunk a far stronger player in the cloud space as it expands its support for cloud-native applications and the modern infrastructures and architectures those rely on.
Ahead of the acquisition, SignalFx had raised a total of $178.5 million, according to Crunchbase, including a recent Series E round. Investors include General Catalyst, Tiger Global Management, Andreessen Horowitz and CRV. Its customers include the likes of AthenaHealth, Change.org, Kayak, NBCUniversal and Yelp.
“Data fuels the modern business, and the acquisition of SignalFx squarely puts Splunk in position as a leader in monitoring and observability at massive scale,” said Doug Merritt, president and CEO, Splunk, in today’s announcement. “SignalFx will support our continued commitment to giving customers one platform that can monitor the entire enterprise application lifecycle. We are also incredibly impressed by the SignalFx team and leadership, whose expertise and professionalism are a strong addition to the Splunk family.”
In these days when endorsements from influential personalities online can make or break a product, a startup built to help companies harness all the long-tail firepower they can muster to get their name out there in a good way has raised funding to expand deeper into feedback and other customer-experience territory. Reputation.com, which works with big enterprises in areas like automotive and healthcare to improve their visibility online and give businesses more accurate reports on how their brands are perceived by customers and others, has raised $30 million in equity financing. CEO Joe Fuca said the company will use the money to continue expanding its tech platform to source more feedback and to future-proof it for further global expansion.
The funding, led by Ascension Ventures with participation from new backers Akkadian Ventures, Industry Ventures and River City Ventures and returning investors Kleiner Perkins, August Capital, Bessemer Venture Partners, Heritage Group and Icon Ventures, is the second round Reputation.com has raised since its pivot away from services aimed at individuals. Fuca said the company’s valuation tripled with this round; while he wouldn’t go into details, from what I understand from sources (supported by data in PitchBook), it had been around $120-130 million in its last round, which would value it now at between $360 million and $390 million.
Part of the reason the company’s valuation has tripled is its growth. The company doesn’t disclose many customer names (for possibly obvious reasons) but said that three of the top five automotive OEMs, as well as over 10,000 auto dealerships in the U.S., use it, with those numbers now also growing in Europe. Among healthcare providers, it now has 250 customers, including three of the top five, and in the world of property management, more than 100 companies are using Reputation.com. Other verticals using the company include financial services, hospitality and retail services.
The company competes with firms that provide services like SEO and other online profile management, and it sees its big challenge as convincing businesses that there is more to a strong profile than an NPS score (providers of which are also competitors). So, in addition to the metrics usually used to compile that figure (typically based on customer feedback surveys), Reputation.com also uses unstructured data, for example sentiment analysis from social media, and applies algorithms to all of it to calculate a Reputation Score.
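To make the idea concrete, here is a deliberately simplified sketch of how structured survey data and unstructured sentiment might be blended into a single score. The weights, normalization and 0-1000 scale are all invented for illustration; Reputation.com has not published its actual formula.

```python
def reputation_score(survey_scores, sentiment_scores, w_survey=0.6, w_sentiment=0.4):
    """Hypothetical illustration of blending structured survey feedback
    (0-10 ratings) with unstructured sentiment (-1..1) into a 0-1000 score.
    The weights and formula are invented, not Reputation.com's algorithm."""
    if not survey_scores or not sentiment_scores:
        raise ValueError("need both structured and unstructured signals")
    survey_avg = sum(survey_scores) / len(survey_scores)
    sentiment_avg = sum(sentiment_scores) / len(sentiment_scores)
    # Normalize both signals to 0..1 before applying the weights.
    blended = w_survey * (survey_avg / 10) + w_sentiment * ((sentiment_avg + 1) / 2)
    return round(blended * 1000)

# Strong surveys plus mildly positive social sentiment:
print(reputation_score([9, 8, 10], [0.4, 0.6]))  # 840
```

The point of such a composite is that a business with glowing surveys but souring social chatter scores lower than its NPS alone would suggest, which is exactly the pitch the company makes.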
Reputation.com has actually been around since 2006, with its original concept being managing individuals’ online reputations, not exactly in the Klout or PR-management sense, but with the (now very prescient-sounding) intention of giving people a way to better control their personal information online. Founded by Michael Fertik under the original name ReputationDefender, it was a pioneer in what came to be called personal information management.
The company proposed an idea of a “vault” for your information, which could still be used and appropriated by so-called data brokers (which help feed the wider ad-tech and marketing tech machines that underpin a large part of the internet economy), but would be done with user consent and compensation.
The idea was hard to scale, however. “I think it was an addressable-market issue,” said Fuca, who took over as CEO last year as the company was reorienting itself toward enterprise services (it sold off the consumer/individual business at the same time to a PE firm), with Fertik taking the role of executive chairman, among other projects. “Individuals seeking reputation defending is only a certain market size.”
Not so in the world of enterprise, the area the startup (and I think you can call Reputation.com a startup, given its pivot and restructure and venture backing) has been focusing on exclusively for the better part of a year.
The company today integrates closely with Google — which is not only a major platform for disseminating information in the form of SEO management, but a data source as a repository of user reviews — but despite the fact that Google holds so many cards in the stack, Fuca (who had previously been an exec at DocuSign before coming to Reputation.com) said he doesn’t see it as a potential threat or competitor.
A recent survey from the company about reputation management in the automotive sector underscores just how big a role Google plays.
“We don’t worry about Google as a competitor,” Fuca said. “It is super attracted to working with partners like us because we drive domain activity, and they love it when people like us explain to customers how to optimise on Google. For Google, it’s almost like we are an optimization partner, and so it helps their entire ecosystem, and so I don’t see them being a competitor or wanting to be.”
Nevertheless, the fact that the bulk of Reputation.com’s data sources are essentially secondary, that is, publicly available information that is already online and collected by others, will drive some of the company’s next stage of development. The plan is to add more of its own primary-source data gathering, in the form of customer surveys and feedback forms. That will also open the door to more questions of how the company will handle privacy and personal data longer term.
“Ascension Ventures is excited to deepen its partnership with Reputation.com as it enters its next critical stage of growth,” said John Kuelper, managing director at Ascension Ventures, in a statement. “We’ve watched Reputation.com’s industry-leading reputation management offering grow into an even more expansive CX platform. We’re seeing some of the world’s largest brands and service providers achieve terrific results by partnering with Reputation.com to analyze and take action on customer feedback — wherever it originates — at scale and in real time. We’re excited to make this additional investment in Reputation.com as it continues to grow and expand its market leadership.”
Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today, and it is a doozy. The “Wafer Scale Engine” packs 1.2 trillion transistors (the most ever), measures 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).
Cerebras’ Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems)
It’s made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry’s big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees. You can read more about the chip from Tiernan Ray at Fortune and read the white paper from Cerebras itself.
Superlatives aside though, the technical challenges that Cerebras had to overcome to reach this milestone are, I think, the more interesting story here. I sat down with founder and CEO Andrew Feldman this afternoon to discuss what his 173 engineers have been building quietly just down the street these past few years with $112 million in venture capital funding from Benchmark and others.
First, a quick background on how the chips that power your phones and computers get made. Fabs like TSMC take standard-sized silicon wafers and use light to etch transistors into the silicon, dividing each wafer into individual chips. Wafers are circles and chips are squares, so there is some basic geometry involved in subdividing that circle into a clean array of individual chips.
One big challenge in this lithography process is that errors can creep into the manufacturing process, requiring extensive testing to verify quality and forcing fabs to throw away poorly performing chips. The smaller and more compact the chip, the less likely any individual chip will be inoperative, and the higher the yield for the fab. Higher yield equals higher profits.
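The area-versus-yield tradeoff can be sketched with the classic first-order Poisson yield model — an illustrative textbook approximation, not Cerebras’ or TSMC’s actual yield math — in which the probability that a die survives falls off exponentially with its area:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Probability that a die has zero fatal defects, assuming defects land
    randomly and independently across the wafer (first-order Poisson model)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical defect density of 0.1 defects per square centimeter:
small_die = poisson_yield(0.1, 1.0)     # a conventional 1 cm^2 chip mostly survives
wafer_die = poisson_yield(0.1, 462.25)  # a 46,225 mm^2 wafer-scale die essentially never does
```

At any realistic defect density, a conventional-sized die usually escapes defects while a naive whole-wafer die almost never does — which is why whole-wafer designs need some form of defect tolerance.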
Cerebras throws out the idea of etching a bunch of individual chips onto a single wafer in favor of using the whole wafer itself as one gigantic chip. That allows all of those individual cores to connect with one another directly — vastly speeding up the critical feedback loops used in deep learning algorithms — but comes at the cost of huge manufacturing and design challenges to create and manage these chips.
Cerebras’ technical architecture and design was led by co-founder Sean Lie. Feldman and Lie worked together on a previous startup called SeaMicro, which sold to AMD in 2012 for $334 million. (Via Cerebras Systems)
The first challenge the team ran into, according to Feldman, was handling communication across the “scribe lines.” While Cerebras’ chip encompasses a full wafer, today’s lithography equipment still has to act as if there are individual chips being etched into the silicon wafer. So the company had to invent new techniques to allow each of those individual chips to communicate with each other across the whole wafer. Working with TSMC, they not only invented new channels for communication, but also had to write new software to handle chips with a trillion-plus transistors.
The second challenge was yield. With a chip covering an entire silicon wafer, a single imperfection in the etching of that wafer could render the entire chip inoperative. This has been the stumbling block for decades on whole-wafer technology: due to the laws of physics, it is essentially impossible to etch a trillion transistors with perfect accuracy repeatedly.
Cerebras approached the problem using redundancy by adding extra cores throughout the chip that would be used as backup in the event that an error appeared in that core’s neighborhood on the wafer. “You have to hold only 1%, 1.5% of these guys aside,” Feldman explained to me. Leaving extra cores allows the chip to essentially self-heal, routing around the lithography error and making a whole wafer silicon chip viable.
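The spare-core accounting can be illustrated with a toy sketch — purely hypothetical, since Cerebras’ actual routing fabric is proprietary — in which logical cores are mapped onto whatever physical cores passed testing, with the roughly 1-1.5% spare budget absorbing the defects:

```python
def build_core_map(total_cores: int, defective: set, spare_fraction: float = 0.015) -> dict:
    """Map logical core IDs onto working physical cores, routing around defects.

    Holding `spare_fraction` of the cores aside as spares means that many
    defects can appear anywhere on the wafer without losing logical capacity.
    """
    spares = int(total_cores * spare_fraction)
    logical_count = total_cores - spares
    working = (c for c in range(total_cores) if c not in defective)
    mapping = dict(zip(range(logical_count), working))
    if len(mapping) < logical_count:
        raise RuntimeError("more defects than spares; wafer is not salvageable")
    return mapping

# Three hypothetical lithography defects on a 400,000-core wafer:
core_map = build_core_map(400_000, defective={7, 1_234, 99_999})
```

Every defective physical core simply disappears from the mapping, and the logical core count stays constant as long as defects don’t exceed the spare budget.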
Those first two challenges — communicating across the scribe lines between chips and handling yield — have flummoxed chip designers studying whole-wafer chips for decades. But they were known problems, and Feldman said that they were actually easier to solve than expected by re-approaching them using modern tools.
He likens the challenge though to climbing Mount Everest. “It’s like the first set of guys failed to climb Mount Everest, they said, ‘Shit, that first part is really hard.’ And then the next set came along and said ‘That shit was nothing. That last hundred yards, that’s a problem.’”
And indeed, the toughest challenges for Cerebras, according to Feldman, were the next three, since no other chip designer had gotten past the scribe-line communication and yield challenges to actually find out what happened next.
The third challenge Cerebras confronted was handling thermal expansion. Chips get extremely hot in operation, but different materials expand at different rates. That means the connectors tethering a chip to its motherboard also need to thermally expand at precisely the same rate lest cracks develop between the two.
“How do you get a connector that can withstand [that]? Nobody had ever done that before, [and so] we had to invent a material. So we have PhDs in material science, [and] we had to invent a material that could absorb some of that difference,” Feldman said.
Once a chip is manufactured, it needs to be tested and packaged for shipment to original equipment manufacturers (OEMs) who add the chips into the products used by end customers (whether data centers or consumer laptops). There is a challenge though: absolutely nothing on the market is designed to handle a whole-wafer chip.
Cerebras designed its own testing and packaging system to handle its chip (Via Cerebras Systems)
“How on earth do you package it? Well, the answer is you invent a lot of shit. That is the truth. Nobody had a printed circuit board this size. Nobody had connectors. Nobody had a cold plate. Nobody had tools. Nobody had tools to align them. Nobody had tools to handle them. Nobody had any software to test,” Feldman explained. “And so we have designed this whole manufacturing flow, because nobody has ever done it.” Cerebras’ technology is much more than just the chip it sells — it also includes all of the associated machinery required to actually manufacture and package those chips.
Finally, all that processing power in one chip requires immense power and cooling. Cerebras’ chip uses 15 kilowatts of power to operate — a prodigious amount of power for an individual chip, although relatively comparable to a modern-sized AI cluster. All that power also needs to be cooled, and Cerebras had to design a new way to deliver both for such a large chip.
It essentially approached the problem by turning the chip on its side, in what Feldman called “using the Z-dimension.” The idea was that rather than trying to move power and cooling horizontally across the chip as is traditional, power and cooling are delivered vertically at all points across the chip, ensuring even and consistent access to both.
And so, those were the next three challenges — thermal expansion, packaging, and power/cooling — that the company has worked around the clock to solve these past few years.
Cerebras has a demo chip (I saw one, and yes, it is roughly the size of my head), and it has started to deliver prototypes to customers according to reports. The big challenge though as with all new chips is scaling production to meet customer demand.
For Cerebras, the situation is a bit unusual. Since it places so much computing power on one wafer, customers don’t necessarily need to buy dozens or hundreds of chips and stitch them together to create a compute cluster. Instead, they may only need a handful of Cerebras chips for their deep-learning needs. The company’s next major phase is to reach scale and ensure a steady delivery of its chips, which it packages as a whole system “appliance” that also includes its proprietary cooling technology.
Expect to hear more details of Cerebras technology in the coming months, particularly as the fight over the future of deep learning processing workflows continues to heat up.
Popular enterprise news and research site The New Stack is coming to TechCrunch Sessions: Enterprise on September 5 for a special Pancake & Podcast session with live Q&A, featuring, you guessed it, delicious pancakes and awesome panelists!
Here’s the “short stack” of what’s going to happen:
You can only take part in this fun pancake-breakfast podcast if you register for a ticket to TC Sessions: Enterprise. Use the code TNS30 to get 30% off the conference registration price!
Here’s the longer version of what’s going to happen:
At 8:15 a.m., The New Stack founder and publisher Alex Williams takes the stage as the moderator and host of the panel discussion. Our topic: “The People and Technology You Need to Build a Modern Enterprise.” We’ll start with intros of our panelists and then dive into the topic with Sid Sijbrandij, founder and CEO at GitLab, and Frederic Lardinois, enterprise reporter and editor at TechCrunch, as our initial panelists. More panelists to come!
Then it’s time for questions. Questions we could see getting asked (hint, hint): Who’s on your team? What makes a great technical team for the enterprise startup? What are the observations a journalist has about how the enterprise is changing? What about when the time comes for AI? Who will I need on my team?
And just before 9 a.m., we’ll pick a ticket out of the hat and announce our raffle winner. It’s the perfect way to start the day.
On a side note, the pancake breakfast discussion will be published as a podcast on The New Stack Analysts.
But there’s only one way to get a prize and network with fellow attendees, and that’s by registering for TC Sessions: Enterprise and joining us for a short stack with The New Stack. Tickets are now $349, but you can save 30% with code TNS30.
OKRs, or Objectives and Key Results, are a popular planning method in Silicon Valley. Like most of those methods that make you fill in some form once every quarter, I’m pretty sure employees find them rather annoying and a waste of their time. Ally wants to change that and make the process more useful. The company today announced that it has raised an $8 million Series A round led by Access Partners, with participation from Vulcan Capital, Founders Co-op and Lee Fixel. The company, which launched in 2018, previously raised a $3 million seed round.
Ally founder and CEO Vetri Vellore tells me that he learned his management lessons and the value of OKR at his last startup, Chronus. After years of managing large teams at enterprises like Microsoft, he found himself challenged to manage a small team at a startup. “I went and looked for new models of running a business execution. And OKRs were one of those things I stumbled upon. And it worked phenomenally well for us,” Vellore said. That’s where the idea of Ally was born, which Vellore pursued after selling his last startup.
Most companies that adopt this methodology, though, tend to work with spreadsheets and Google Docs. Over time, that simply doesn’t work, especially as companies get larger. Ally, then, is meant to replace these other tools. The service is currently in use at “hundreds” of companies in more than 70 countries, Vellore tells me.
One of its early adopters was Remitly. “We began by using shared documents to align around OKRs at Remitly. When it came time to roll out OKRs to everyone in the company, Ally was by far the best tool we evaluated. OKRs deployed using Ally have helped our teams align around the right goals and have ultimately driven growth,” said Josh Hug, COO of Remitly.
Vellore tells me that he has seen teams go from annual or bi-annual OKRs to more frequently updated goals, too, which is something that’s easier to do when you have a more accessible tool for it. Nobody wants to use yet another tool, though, so Ally features deep integrations into Slack, with other integrations in the works (something Ally will use this new funding for).
Since adopting OKRs isn’t always easy for companies that previously used other methodologies (or nothing at all), Ally also offers training and consulting services with online and on-site coaching.
Pricing for Ally starts at $7 per month per user for a basic plan, but the company also offers a flat $29 per month plan for teams with up to 10 users, as well as an enterprise plan, which includes some more advanced features and single sign-on integrations.
Microsoft announced this morning that it is acquiring jClarity, the company behind an open-source tool designed to tune the performance of Java applications. It will be doing that work on Azure from now on. In addition, the company has been offering a flavor of Java called AdoptOpenJDK, which it bills as a free alternative to Oracle Java. The companies did not disclose the terms of the deal.
As Microsoft pointed out in a blog post announcing the acquisition, they are seeing increasing use of large-scale Java installations on Azure, both internally with platforms like Minecraft and externally with large customers, including Daimler and Adobe.
The company believes that by adding the jClarity team and its toolset, it can help service these Java customers better. “The team, formed by Java champions and data scientists with proven expertise in data driven Java Virtual Machine (JVM) optimizations, will help teams at Microsoft to leverage advancements in the Java platform,” the company wrote in the blog.
Microsoft has actually been part of the AdoptOpenJDK project, along with a Who’s Who of other enterprise companies, including Amazon, IBM, Pivotal, Red Hat and SAP.
Co-founder and CEO Martijn Verburg, writing in a company blog post announcing the deal, unsurprisingly spoke in glowing terms about the company he was about to become a part of. “Microsoft leads the world in backing developers and their communities, and after speaking to their engineering and programme leadership, it was a no brainer to enter formal discussions. With the passion and deep expertise of Microsoft’s people, we’ll be able to support the Java ecosystem better than ever before,” he wrote.
Verburg also took the time to thank the employees, customers and community that have supported the open-source project on top of which his company was built. Verburg’s new title at Microsoft will be Principal Engineering Group Manager (Java) at Microsoft.
It is unclear how the community will react to another flavor of Java being absorbed by another large vendor, or how the other big vendors involved in the project will feel about it, but regardless, jClarity is part of Microsoft now.
As businesses use an increasing variety of marketing software solutions, the goal of collecting all of that data is to improve customer experience. Simon Data announced a $30 million Series C round today to help.
The round was led by Polaris Partners. Previous investors .406 Ventures and F-Prime Capital also participated. Today’s investment brings the total raised to $59 million, according to the company.
Jason Davis, co-founder and CEO, says his company is trying to pull together a lot of complex data from a variety of sources, while driving actions to improve customer experience. “It’s about taking the data, and then building complex triggers that target the right customer at the right time,” Davis told TechCrunch. He added, “This can be in the context of any sort of customer transaction, or any sort of interaction with the business.”
Companies tend to use a variety of marketing tools, and Simon Data takes on the job of understanding the data and activities going on in each one. Then based on certain actions — such as, say, an abandoned shopping cart — it delivers a consistent message to the customer, regardless of the source of the data that triggered the action.
They see this ability to pull together data as a customer data platform (CDP). In fact, part of its job is to aggregate data and use it as the basis of other activities. In this case, it involves activating actions you define based on what you know about the customer at any given moment in the process.
As the company collects this data, it also sees an opportunity to use machine learning to create more automated and complex types of interactions. “There are a tremendous number of super complex problems we have to solve. Those include core platform or infrastructure, and we also have a tremendous opportunity in front of us on the predictive and data science side as well,” Davis said. He said that is one of the areas where they will put today’s money to work.
The company, which launched in 2014, is based in NYC. It currently has 87 employees in total, and that number is expected to grow with today’s announcement. Customers include Equinox, Venmo and WeWork. The company’s most recent funding round was a $20 million raise in July 2018.
Being the CTO for one of the three major hypercloud providers may seem like enough of a job for most people, but Mark Russinovich, the CTO of Microsoft Azure, has a few other talents in his back pocket. Russinovich, who will join us for a fireside chat at our TechCrunch Sessions: Enterprise event in San Francisco on September 5 (p.s. early-bird sale ends Friday), is also an accomplished novelist who has published four novels, all of which center around tech and cybersecurity.
At our event, though, we won’t focus on his literary accomplishments (except for maybe his books about Windows Server) as much as on the trends he’s seeing in enterprise cloud adoption. Microsoft, maybe more so than its competitors, has made enterprise customers and their needs the focus of its cloud initiatives from the outset. Today, as the majority of enterprises are looking to move at least some of their legacy workloads into the cloud, they are often stumped by the sheer complexity of that undertaking.
In our fireside chat, we’ll talk about what Microsoft is doing to reduce this complexity and how enterprises can maximize their current investments into the cloud, both for running new cloud-native applications and for bringing legacy applications into the future. We’ll also talk about new technologies that can make the move to the cloud more attractive to enterprises, including the current buzz around edge computing, IoT, AI and more.
Before joining Microsoft, Russinovich, who has a Ph.D. in computer engineering from Carnegie Mellon, was the co-founder and chief architect of Winternals Software, which Microsoft acquired in 2006. During his time at Winternals, Russinovich discovered the infamous Sony rootkit. Over his 13 years at Microsoft, he moved from Technical Fellow up to the CTO position for Azure, which continues to grow at a rapid clip as it looks to challenge AWS’s leadership in total cloud revenue.
If you’re an early-stage startup, we only have three demo table packages left! Each demo package comes with four tickets and a great location for your company to get in front of attendees. Book your demo package today before we sell out!
Cloudflare, which made its debut on TechCrunch’s Battlefield stage back in 2010, has put a placeholder value of $100 million on the offering, but it will likely be worth billions when it finally trades on the market.
Cloudflare is one of a clutch of businesses whose job it is to make web sites run better, faster and with little to no downtime.
Recently the company has been at the center of political debates around some of the customers and company it keeps, including social media networks like 8Chan and racist media companies like the Daily Stormer.
Indeed, the company went so far as to cite 8Chan as a risk factor in its public offering documents.
As far as money goes, Cloudflare is — like other early-stage technology companies — losing money. But it’s not losing that much money, and its growth is impressive.
As the company notes in its filing with the Securities and Exchange Commission:
We have experienced significant growth, with our revenue increasing from $84.8 million in 2016 to $134.9 million in 2017 and to $192.7 million in 2018, increases of 59% and 43%, respectively. As we continue to invest in our business, we have incurred net losses of $17.3 million, $10.7 million, and $87.2 million for 2016, 2017, and 2018, respectively. For the six months ended June 30, 2018 and 2019, our revenue increased from $87.1 million to $129.2 million, an increase of 48%, and we incurred net losses of $32.5 million and $36.8 million, respectively.
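The percentages in the filing check out against the dollar figures quoted above; a quick verification of the arithmetic:

```python
def growth_pct(prev_millions: float, curr_millions: float) -> int:
    """Period-over-period revenue growth, rounded to the whole percent the filing reports."""
    return round((curr_millions - prev_millions) / prev_millions * 100)

assert growth_pct(84.8, 134.9) == 59   # FY2016 -> FY2017
assert growth_pct(134.9, 192.7) == 43  # FY2017 -> FY2018
assert growth_pct(87.1, 129.2) == 48   # H1 2018 -> H1 2019
```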
Cloudflare sits at the intersection of government policy and private company operations and its potential risk factors include a discussion about what that could mean for its business.
The company isn’t the first network infrastructure service provider to hit the market. That distinction belongs to Fastly, whose shares have not performed as well as investors would have liked.
Cloudflare has raised roughly $332 million to date from investors, including Franklin Templeton Investments, Fidelity, Union Square Ventures, New Enterprise Associates, Pelion Venture Partners and Venrock. Business Insider reported that the company’s last investment gave Cloudflare a valuation of $3.2 billion.
The company will trade on the New York Stock Exchange under the ticker symbol “NET.” Underwriters on the company’s public offering include Goldman Sachs, Morgan Stanley, JP Morgan, Jefferies, Wells Fargo Securities and RBC Capital Markets.
If you have ever worked at any sizable company, the word “IT” probably doesn’t conjure up many warm feelings. If you’re working for an old, traditional enterprise company, you probably don’t expect anything else. If you’re working for a modern tech company, though, chances are your expectations are a bit higher. And once you’re at the scale of a company like Facebook, a lot of the third-party services that work for smaller companies simply don’t work anymore.
To discuss how Facebook thinks about its IT strategy and why it now builds most of its IT tools in-house, I sat down with the company’s CIO, Atish Banerjea, at its Menlo Park headquarters.
Before joining Facebook in 2016 to head up what it now calls its “Enterprise Engineering” organization, Banerjea was the CIO or CTO at companies like NBCUniversal, Dex One and Pearson.
“If you think about Facebook 10 years ago, we were very much a traditional IT shop at that point,” he told me. “We were responsible for just core IT services, responsible for compliance and responsible for change management. But basically, if you think about the trajectory of the company, we were probably about 2,000 employees around the end of 2010. But at the end of last year, we were close to 37,000 employees.”
Traditionally, IT organizations rely on third-party tools and software, but as Facebook grew to this current size, many third-party solutions simply weren’t able to scale with it. At that point, the team decided to take matters into its own hands and go from being a traditional IT organization to one that could build tools in-house. Today, the company is pretty much self-sufficient when it comes to running its IT operations, but getting to this point took a while.
“We had to pretty much reinvent ourselves into a true engineering product organization and went to a full ‘build’ mindset,” said Banerjea. That’s not something every organization is obviously able to do, but, as Banerjea joked, one of the reasons why this works at Facebook “is because we can — we have that benefit of the talent pool that is here at Facebook.”
The company then took this talent and basically replicated the kind of team it has on the consumer side to build out its IT tools, with engineers, designers, product managers, content strategists and researchers. “We also made the decision at that point that we will hold the same bar and we will hold the same standards so that the products we create internally will be as world-class as the products we’re rolling out externally.”
One of the tools that wasn’t up to Facebook’s scaling challenges was video conferencing. The company was using a third-party tool for that, but that just wasn’t working anymore. In 2018, Facebook was consuming about 20 million conference minutes per month. In 2019, the company is now at 40 million per month.
Besides the obvious scaling challenge, Facebook is also doing this to be able to offer its employees custom software that fits their workflows. It’s one thing to adapt existing third-party tools, after all, and another to build custom tools to support a company’s business processes.
Banerjea told me that creating this new structure was a relatively easy sell inside the company. Every transformation comes with its own challenges, though. For Facebook’s Enterprise Engineering team, that included having to recruit new skill sets into the organization. The first few months of this process were painful, Banerjea admitted, as the company had to up-level the skills of many existing employees and shed a significant number of contractors. “There are certain areas where we really felt that we had to have Facebook DNA in order to make sure that we were actually building things the right way,” he explained.
Facebook’s structure creates an additional challenge for the team. When you’re joining Facebook as a new employee, you have plenty of teams to choose from, after all, and if you have the choice of working on Instagram or WhatsApp or the core Facebook app — all of which touch millions of people — working on internal tools with fewer than 40,000 users doesn’t sound all that exciting.
“When young kids come straight from college into Facebook, they don’t know any better. So they think this is how the world is,” Banerjea said. “But when we have experienced people come in who have worked at other companies, the first thing I hear is ‘oh my goodness, we’ve never seen internal tools of this caliber before.’ The way we recruit, the way we do performance management, the way we do learning and development — every facet of how that employee works has been touched in terms of their life cycle here.”
Facebook first started building these internal tools around 2012, though it wasn’t until Banerjea joined in 2016 that it rebranded the organization and set up today’s structure. He also noted that some of those original tools were good, but not up to the caliber employees would expect from the company.
“The really big change that we went through was up-leveling our building skills to really become at the same caliber as if we were to build those products for an external customer. We want to have the same experience for people internally.”
The company went as far as replacing and rebuilding the commercial Enterprise Resource Planning (ERP) system it had been using for years. If there’s one thing that big companies rely on, it’s their ERP systems, given they often handle everything from finance and HR to supply chain management and manufacturing. That’s basically what all of their backend tools rely on (and what companies like SAP, Oracle and others charge a lot of money for). “In that 2016/2017 time frame, we realized that that was not a very good strategy,” Banerjea said. In Facebook’s case, the old ERP handled the inventory management for its data centers, among many other things. When that old system went down, the company couldn’t ship parts to its data centers.
“So what we started doing was we started peeling off all the business logic from our backend ERP and we started rewriting it ourselves on our own platform,” he explained. “Today, for our ERP, the backend is just the database, but all the business logic, all of the functionality is actually all custom written by us on our own platform. So we’ve completely rewritten our ERP, so to speak.”
In practice, all of this means that ideally, Facebook’s employees face far less friction when they join the company, for example, or when they need to replace a broken laptop, get a new phone to test features or simply order a new screen for their desk.
One classic use case is onboarding, where new employees get their company laptop, mobile phones and access to all of their systems, for example. At Facebook, that’s also the start of a six-week bootcamp that gets new engineers up to speed with how things work at Facebook. Back in 2016, when new classes tended to have fewer than 200 new employees, onboarding was still mostly a manual task. Today, with far more incoming employees, the Enterprise Engineering team has automated most of that — and that includes managing the supply chain that ensures the laptops and phones for these new employees are actually available.
But the team also built the backend that powers the company’s more traditional IT help desks, where employees can walk up and get their issues fixed (and passwords reset).
To talk more about how Facebook handles the logistics of that, I sat down with Koshambi Shah, who heads up the company’s Enterprise Supply Chain organization, which pretty much handles every piece of hardware and software the company delivers and deploys to its employees around the world (and that global nature of the company brings its own challenges and additional complexity). The team, which has fewer than 30 people, is made up of employees with experience in manufacturing, retail and consumer supply chains.
Typically, enterprises offer a minimal set of choices when it comes to the laptops and phones they issue to their employees, and the operating systems that can run on them tend to be limited. Facebook’s engineers have to be able to test new features on a wide range of devices and operating systems. There are, after all, still users on the iPhone 4s or BlackBerry that the company wants to support. To do this, Shah’s organization actually makes thousands of SKUs available to employees and is able to deliver 98% of them within three days or less. It’s not just sending a laptop via FedEx, though. “We do the budgeting, the financial planning, the forecasting, the supply/demand balancing,” Shah said. “We do the asset management. We make sure the asset — what is needed, when it’s needed, where it’s needed — is there consistently.”
In many large companies, every asset request is second-guessed. Facebook, on the other hand, places a lot of trust in its employees, it seems. There’s a self-service portal, the Enterprise Store, that allows employees to easily request phones, laptops, chargers (which get lost a lot) and other accessories as needed, without having to wait for approval (though if you request a laptop every week, somebody will surely want to have a word with you). Everything is obviously tracked in detail, but the overall experience is closer to shopping at an online retailer than using an enterprise asset management system. The Enterprise Store will tell you where a device is available, for example, so you can pick it up yourself (but you can always have it delivered to your desk, too, because this is, after all, a Silicon Valley company).
For accessories, Facebook also offers self-service vending machines, and employees can walk up to the help desk.
The company also recently introduced an Amazon Locker-style setup that allows employees to check out devices as needed. At these smart lockers, employees simply have to scan their badge, choose a device and, once the appropriate door has opened, pick up the phone, tablet, laptop or VR devices they were looking for and move on. Once they are done with it, they can come back and check the device back in. No questions asked. “We trust that people make the right decision for the good of the company,” Shah said. For laptops and other accessories, the company does show the employee the price of those items, though, so it’s clear how much a certain request costs the company. “We empower you with the data for you to make the best decision for your company.”
Speaking of cost, Shah told me the Supply Chain organization tracks a number of metrics. One of those is obviously cost. “We do give back about 4% year-over-year, that’s our commitment back to the businesses in terms of the efficiencies we build for every user we support. So we measure ourselves in terms of cost per supported user. And we give back 4% on an annualized basis in the efficiencies.”
Unsurprisingly, the company has by now gathered enough data about employee requests (Shah said the team fulfills about half a million transactions per year) that it can use machine learning to understand trends and be proactive about replacing devices, for example.
Facebook’s Enterprise Engineering group doesn’t just support internal customers, though. It also runs the company’s internal and external events, including the likes of F8, the company’s annual developer conference. To do this, the company built out conference rooms that can seat thousands of people, with all of the logistics that go with that.
The company also showed me one of its newest meeting rooms where there are dozens of microphones and speakers hanging from the ceiling that make it easier for everybody in the room to participate in a meeting and be heard by everybody else. That’s part of what the organization’s “New Builds” team is responsible for, and something that’s possible because the company also takes a very hands-on approach to building and managing its offices.
Facebook also runs a number of small studios in its Menlo Park and New York offices, where both employees and the occasional external VIP can host Facebook Live videos.
Indeed, live video seems to be one of the cornerstones of how Facebook employees collaborate and stay connected with colleagues who work from home. Typically, you’d just use the camera on your laptop or maybe a webcam connected to your desktop for that. But because Facebook produces its own camera system in the consumer-oriented Portal, Banerjea’s team decided to use that instead.
“What we have done is we have actually re-engineered the Portal,” he told me. “We have connected with all of our video conferencing systems in the rooms. So if I have a Portal at home, I can dial into my video conferencing platform and have a conference call just like I’m sitting in any other conference room here in Facebook. And all that software, all the engineering on the portal, that has been done by our teams — some in partnership with our production teams, but a lot of it has been done with Enterprise Engineering.”
Unsurprisingly, there are also groups that manage some of the core infrastructure and security for the company’s internal tools and networks. All of those tools run in the same data centers as Facebook’s consumer-facing applications, though they are obviously sandboxed and isolated from them.
It’s one thing to build all of these tools for internal use, but now, the company is also starting to think about how it can bring some of these tools it built for internal use to some of its external customers. You may not think of Facebook as an enterprise company, but with its Workplace collaboration tool, it has an enterprise service that it sells externally, too. Last year, for the first time, Workplace added a new feature that was incubated inside of Enterprise Engineering. That feature was a version of Facebook’s public Safety Check that the Enterprise Engineering team had originally adapted to the company’s own internal use.
“Many of these things that we are building for Facebook, because we are now very close partners with our Workplace team — they are in the enterprise software business and we are the enterprise software group for Facebook — and many [features] we are building for Facebook are of interest to Workplace customers.”
As Workplace hit the market, Banerjea ended up talking to the CIOs of potential users, including the likes of Delta Air Lines, about how Facebook itself used Workplace internally. But as companies started to adopt Workplace, they realized that they needed integrations with existing third-party services like ERP platforms and Salesforce. Those companies then asked Facebook if it could build those integrations or work with partners to make them available. But at the same time, those customers got exposed to some of the tools that Facebook itself was building internally.
“Safety Check was the first one,” Banerjea said. “We are actually working on three more products this year.” He wouldn’t say what these are, of course, but there is clearly a pipeline of tools that Facebook has built for internal use that it is now looking to commercialize. That’s pretty unusual for any IT organization, which, after all, tends to only focus on internal customers. I don’t expect Facebook to pivot to an enterprise software company anytime soon, but initiatives like this are clearly important to the company and, in some ways, to the morale of the team.
This creates a bit of friction, too, though, given that the Enterprise Engineering group’s mission is to build internal tools for Facebook. “We are now figuring out the deployment model,” Banerjea said. Who, for example, is going to support the external tools the team built? Is it the Enterprise Engineering group or the Workplace team?
Chances are then, that Facebook will bring some of the tools it built for internal use to more enterprises in the long run. That definitely puts a different spin on the idea of the consumerization of enterprise tech. Clearly, not every company operates at the scale of Facebook and needs to build its own tools — and even some companies that could benefit from it don’t have the resources to do so. For Facebook, though, that move seems to have paid off and the tools I saw while talking to the team definitely looked more user-friendly than any off-the-shelf enterprise tools I’ve seen at other large companies.
It’s not exactly on par with Amazon, which reported cloud revenue of $8.381 billion last quarter, more than double Alibaba’s yearly run rate, but it’s been a steady rise for the company, which really began taking the cloud seriously as a side business in 2015.
At that time, Alibaba Cloud’s president Simon Hu boasted to Reuters that his company would overtake Amazon in four years. It is not even close to doing that, but it has done well to get to more than a billion a quarter in just four years.
In fact, in its most recent data for the Asia-Pacific region, Synergy Research, a firm that closely tracks the public cloud market, found that Amazon was still number one overall in the region. Alibaba was first in China, but fourth in the region outside of China, with the market’s Big 3 — Amazon, Microsoft and Google — coming in ahead of it. These numbers were based on Q1 data before today’s numbers were known, but they provide a sense of where the market is in the region.
Synergy’s John Dinsdale says the company’s growth has been impressive, outpacing the market growth rate overall. “Alibaba’s share of the worldwide cloud infrastructure services market was 5% in Q2 — up by almost a percentage point from Q2 of last year, which is a big deal in terms of absolute growth, especially in a market that is growing so rapidly,” Dinsdale told TechCrunch.
He added, “The great majority of its revenue does indeed come from China (and Hong Kong), but it is also making inroads in a range of other APAC country markets — Indonesia, Malaysia, Singapore, India, Australia, Japan and South Korea. While numbers are relatively small, it has also got a foothold in EMEA and some operations in the U.S.”
The company had a busy quarter, adding more than 300 new products and features in the period ending June 30th (and reported today). Those included changes and updates to core cloud offerings, security, data intelligence and AI applications, according to the company.
While the cloud business still isn’t a serious threat to the industry’s Big Three, especially outside its core Asia-Pacific market, it’s still growing steadily and accounted for almost 7% of Alibaba’s total of $16.74 billion in revenue for the quarter — and that’s not bad at all.
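The share and totals above are the article's; as a rough back-of-the-envelope check (my arithmetic, not an official per-segment disclosure), the math works out like this:

```python
# Rough sanity check on the figures above. "Almost 7%" is the article's
# approximation, so the derived numbers are estimates, not disclosures.
total_revenue_b = 16.74   # Alibaba's total quarterly revenue, in $B
cloud_share = 0.07        # cloud's approximate share of the total

cloud_revenue_b = total_revenue_b * cloud_share
print(f"~${cloud_revenue_b:.2f}B in quarterly cloud revenue")

# Annualized, that run rate is still well under half of Amazon's
# $8.381B in cloud revenue from a single quarter.
print(f"~${cloud_revenue_b * 4:.1f}B yearly run rate")
```

That puts quarterly cloud revenue a bit above a billion dollars, consistent with the "more than a billion a quarter" figure earlier in the piece.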
Incorta, a startup founded by former Oracle executives who want to change the way we process large amounts of data, announced a $30 million Series C today led by Sorenson Capital.
Other investors participating in the round included GV (formerly Google Ventures), Kleiner Perkins, M12 (formerly Microsoft Ventures), Telstra Ventures and Ron Wohl. Today’s investment brings the total raised to $75 million, according to the company.
Incorta CEO and co-founder Osama Elkady says he and his co-founders were compelled to start Incorta because they saw so many companies spending big bucks on data projects that were doomed to fail. “The reason that drove me and three other guys to leave Oracle and start Incorta is because we found out with all the investment that companies were making around data warehousing and implementing advanced projects, very few of these projects succeeded,” Elkady told TechCrunch.
A typical data project involves ETL (extract, transform, load): a process that takes data out of one database, transforms it so it’s compatible with the target database and loads it into that target.
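That classic pipeline can be sketched in a few lines. The record layout and field names below are invented for illustration; they aren't taken from Incorta or any of its customers.

```python
# Minimal sketch of the classic ETL pattern described above.
# Field names ("name", "amount") are hypothetical.

def extract(source_rows):
    """Pull raw records out of the source database, dropping dead rows."""
    return [row for row in source_rows if row is not None]

def transform(rows):
    """Reshape each record so it fits the target warehouse's schema."""
    return [
        {"customer": row["name"].strip().title(), "total": float(row["amount"])}
        for row in rows
    ]

def load(rows, warehouse):
    """Append the transformed records to the target database."""
    warehouse.extend(rows)
    return warehouse

warehouse = []
source = [{"name": "  acme corp ", "amount": "1250.50"}, None]
load(transform(extract(source)), warehouse)
print(warehouse)  # [{'customer': 'Acme Corp', 'total': 1250.5}]
```

Each stage adds latency, which is why batch ETL jobs are often measured in hours; Incorta's pitch is that skipping the intermediate reshaping step gets analysts to the data far sooner.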
It takes time to do all of that, and Incorta is trying to give access to the data much faster by stripping out this step. Elkady says that this allows customers to make use of the data much more quickly, claiming they are reducing the process from one that took hours to one that takes just seconds. That kind of performance enhancement is garnering attention.
Rob Rueckert, managing director for lead investor Sorenson Capital, sees a company that’s innovating in a mature space. “Incorta is poised to upend the data warehousing market with innovative technology that will end 30 years of archaic and slow data warehouse infrastructure,” he said in a statement.
The company says revenue is growing by leaps and bounds, reporting 284% year over year growth (although they did not share specific numbers). Customers include Starbucks, Shutterfly and Broadcom.
The startup, which launched in 2013, currently has 250 employees, with developers in Egypt and main operations in San Mateo, California. It also recently added offices in Chicago, Dubai and Bangalore.
VMware today confirmed that it is in talks to acquire software development platform Pivotal Software, the service best known for commercializing the open-source Cloud Foundry platform. The proposed transaction would see VMware acquire all outstanding Pivotal Class A stock for $15 per share, a significant markup over Pivotal’s current share price (which unsurprisingly shot up right after the announcement).
Pivotal’s shares have struggled since the company’s IPO in April 2018. The company was originally spun out of EMC Corporation (now DellEMC) and VMware in 2012 to focus on Cloud Foundry, an open-source software development platform that is currently in use by the majority of Fortune 500 companies. A lot of these enterprises are working with Pivotal to support their Cloud Foundry efforts. Dell itself continues to own the majority of VMware and Pivotal, and VMware also owns an interest in Pivotal already and sells Pivotal’s services to its customers, as well. It’s a bit of an ouroboros of a transaction.
Pivotal Cloud Foundry was always the company’s main product, but it also offered additional consulting services on top of that. Despite improving its execution since going public, Pivotal still lost $31.7 million in its last financial quarter as its stock price traded at just over half of the IPO price. Indeed, the $15 per share VMware is offering is identical to Pivotal’s IPO price.
An acquisition by VMware would bring Pivotal’s journey full circle, though this is surely not the journey the Pivotal team expected. VMware is a Cloud Foundry Foundation platinum member, together with Pivotal, DellEMC, IBM, SAP and Suse, so I wouldn’t expect any major changes in VMware’s support of the overall open-source ecosystem behind Pivotal’s core platform.
It remains to be seen whether the acquisition will indeed happen, though. In a press release, VMware acknowledged the discussion between the two companies but noted that “there can be no assurance that any such agreement regarding the potential transaction will occur, and VMware does not intend to communicate further on this matter unless and until a definitive agreement is reached.” That’s the kind of sentence lawyers like to write. I would be quite surprised if this deal didn’t happen, though.
Buying Pivotal would also make sense in the grand scheme of VMware’s recent acquisitions. Earlier this year, the company acquired Bitnami, and last year it acquired Heptio, the startup founded by two of the three co-founders of the Kubernetes project, which now forms the basis of many new enterprise cloud deployments and, most recently, Pivotal Cloud Foundry.
Shout out to all the savvy enterprise software startuppers. Here’s a quick, two-part money-saving reminder. Part one: TC Sessions: Enterprise 2019 is right around the corner on September 5, and you have only two days left to buy an early-bird ticket and save yourself $100. Part two: for every Session ticket you buy, you get one free Expo-only pass to TechCrunch Disrupt SF 2019.
Save money and increase your ROI by completing one simple task: buy your early-bird ticket today.
About 1,000 members of enterprise software’s powerhouse community will join us for a full day dedicated to exploring the current and future state of enterprise software. It’s certainly tech’s 800-pound gorilla — a $500 billion industry. Some of the biggest names and brightest minds will be on hand to discuss critical issues all players face — from early-stage startups to multinational conglomerates.
The day’s agenda features panel discussions, main-stage talks, break-out sessions and speaker Q&As on hot topics including intelligent marketing automation, the cloud, data security, AI and quantum computing, just to name a few. You’ll hear from people like SAP CEO Bill McDermott; Aaron Levie, Box co-founder; Jim Clarke, director of Quantum Hardware at Intel and many, many more.
Customer experience is always a hot topic, so be sure to catch this main-stage panel discussion with Amit Ahuja (Adobe), Julie Larson-Green (Qualtrics) and Peter Reinhardt (Segment):
The Trials and Tribulations of Experience Management: As companies gather more data about their customers and employees, it should theoretically improve their experience, but myriad challenges face companies as they try to pull together information from a variety of vendors across disparate systems, both in the cloud and on prem. How do you pull together a coherent picture of your customers, while respecting their privacy and overcoming the technical challenges?
TC Sessions: Enterprise 2019 takes place in San Francisco on September 5. Take advantage of this two-part money-saving opportunity. Buy your early-bird ticket by August 16 at 11:59 p.m. (PT) to save $100. And score a free Expo-only pass to TechCrunch Disrupt SF 2019 for every ticket you buy. We can’t wait to see you in September!
Interested in sponsoring TC Sessions: Enterprise? Fill out this form and a member of our sales team will contact you.