Today, Amazon Web Services is a mainstay in the cloud infrastructure services market, a $60 billion juggernaut of a business. But in 2008, it was still new, working to keep its head above water and handle growing demand for its cloud servers. In fact, 15 years ago last week, the company launched Amazon EC2 in beta. From that point forward, AWS offered startups unlimited compute power, a primary selling point at the time.
EC2 was one of the first real attempts to sell elastic computing at scale — that is, server resources that would scale up as you needed them and go away when you didn’t. As Jeff Bezos said in an early sales presentation to startups back in 2008, “you want to be prepared for lightning to strike, […] because if you’re not that will really generate a big regret. If lightning strikes, and you weren’t ready for it, that’s kind of hard to live with. At the same time you don’t want to prepare your physical infrastructure, to kind of hubris levels either in case that lightning doesn’t strike. So, [AWS] kind of helps with that tough situation.”
An early test of that value proposition occurred when one of its startup customers, Animoto, scaled from 25,000 to 250,000 users over a four-day period in 2008, shortly after the company launched its Facebook app at South by Southwest.
At the time, Animoto was an app aimed at consumers that allowed users to upload photos and turn them into a video with a backing music track. While that product may sound tame today, it was state of the art back in those days, and it used up a fair amount of computing resources to build each video. It was an early representation of not only Web 2.0 user-generated content, but also the marriage of mobile computing with the cloud, something we take for granted today.
For Animoto, launched in 2006, choosing AWS was a risky proposition, but the company found trying to run its own infrastructure was even more of a gamble because of the dynamic nature of the demand for its service. Spinning up its own servers would have involved huge capital expenditures. Animoto initially went that route before turning its attention to AWS, because it was building prior to attracting initial funding, explained Brad Jefferson, the company's co-founder and CEO.
“We started building our own servers, thinking that we had to prove out the concept with something. And as we started to do that and got more traction from a proof-of-concept perspective and started to let certain people use the product, we took a step back, and were like, well it’s easy to prepare for failure, but what we need to prepare for success,” Jefferson told me.
Going with AWS may seem like an easy decision knowing what we know today, but in 2007 the company was really putting its fate in the hands of a mostly unproven concept.
“It’s pretty interesting just to see how far AWS has gone and EC2 has come, but back then it really was a gamble. I mean we were talking to an e-commerce company [about running our infrastructure]. And they’re trying to convince us that they’re going to have these servers and it’s going to be fully dynamic and so it was pretty [risky]. Now in hindsight, it seems obvious but it was a risk for a company like us to bet on them back then,” Jefferson told me.
Animoto had to not only trust that AWS could do what it claimed, but also had to spend six months rearchitecting its software to run on Amazon’s cloud. But as Jefferson crunched the numbers, the choice made sense. At the time, Animoto’s business model was free for a 30-second video, $5 for a longer clip or $30 for a year. As he tried to model the level of resources his company would need to make that model work, it got really difficult, so he and his co-founders decided to bet on AWS and hope it worked when and if a surge of usage arrived.
That test came the following year at South by Southwest when the company launched a Facebook app, which led to a surge in demand, in turn pushing the limits of AWS’s capabilities at the time. A couple of weeks after the startup launched its new app, interest exploded and Amazon was left scrambling to find the appropriate resources to keep Animoto up and running.
Dave Brown, who today is Amazon’s VP of EC2 and was an engineer on the team back in 2008, said that “every [Animoto] video would initiate, utilize and terminate a separate EC2 instance. For the prior month they had been using between 50 and 100 instances [per day]. On Tuesday their usage peaked at around 400, Wednesday it was 900, and then 3,400 instances as of Friday morning.” Animoto was able to keep up with the surge of demand, and AWS was able to provide the necessary resources to do so. Its usage eventually peaked at 5,000 instances before it settled back down, proving in the process that elastic computing could actually work.
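Brown's description implies a simple capacity model: with one short-lived instance per video, the fleet AWS had to supply at any moment equals the number of renders in flight. A toy sketch of that calculation (this is an illustration of the pattern, not Animoto's actual code, and the job timings below are invented):

```python
import heapq

def peak_concurrent(jobs):
    """Given (start, duration) pairs, return the maximum number of
    simultaneously running instances: the fleet size an elastic
    provider must be able to supply at the height of the spike."""
    ends = []  # min-heap of end times for currently running jobs
    peak = 0
    for start, duration in sorted(jobs):
        while ends and ends[0] <= start:
            heapq.heappop(ends)              # instance terminated
        heapq.heappush(ends, start + duration)  # instance launched
        peak = max(peak, len(ends))
    return peak

# Hypothetical render jobs: (start minute, render minutes).
jobs = [(0, 5), (1, 5), (2, 5), (10, 5), (11, 5)]
print(peak_concurrent(jobs))  # → 3: three renders overlap at minute 2
```

With a fixed server fleet you would have to provision for that peak in advance; with the instance-per-job pattern, capacity simply tracks the curve of demand.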
At that point though, Jefferson said his company wasn’t merely trusting EC2’s marketing. It was on the phone regularly with AWS executives making sure their service wouldn’t collapse under this increasing demand. “And the biggest thing was, can you get us more servers, we need more servers. To their credit, I don’t know how they did it — if they took away processing power from their own website or others — but they were able to get us where we needed to be. And then we were able to get through that spike and then sort of things naturally calmed down,” he said.
The story of keeping Animoto online became a main selling point for the company, and Amazon was actually the first company to invest in the startup besides friends and family. It raised a total of $30 million along the way, with its last funding coming in 2011. Today, the company is more of a B2B operation, helping marketing departments easily create videos.
While Jefferson didn’t discuss specifics concerning costs, he pointed out that the price of trying to maintain servers that would sit dormant much of the time was not a tenable approach for his company. Cloud computing turned out to be the perfect model and Jefferson says that his company is still an AWS customer to this day.
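Jefferson's cost argument reduces to back-of-the-envelope arithmetic: a fixed fleet must be sized for the peak day and billed around the clock, while elastic capacity is billed only for hours actually used. The rates and demand figures below are invented for illustration and are not Animoto's actual costs:

```python
# Assumed numbers for illustration only.
HOURLY_RATE = 0.10   # hypothetical price per server-hour
HOURS_PER_DAY = 24

def fixed_cost(daily_demand):
    """Own servers: pay for a peak-sized fleet every hour of every day."""
    peak = max(daily_demand)
    return peak * HOURS_PER_DAY * len(daily_demand) * HOURLY_RATE

def elastic_cost(daily_demand, hours_used_per_server=2):
    """Cloud: pay only for the server-hours each day's jobs consume."""
    return sum(d * hours_used_per_server * HOURLY_RATE for d in daily_demand)

# A spiky week: mostly quiet, one enormous day.
week = [50, 60, 55, 3400, 400, 90, 70]
print(round(fixed_cost(week)))
print(round(elastic_cost(week)))
```

The spikier the demand curve, the more those dormant peak-sized servers cost relative to paying per use, which is exactly the trade-off Jefferson describes.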
While the goal of cloud computing has always been to provide as much computing as you need on demand whenever you need it, this particular set of circumstances put that notion to the test in a big way.
Today the idea of having trouble generating 3,400 instances seems quaint, especially when you consider that Amazon processes 60 million instances every day now, but back then it was a huge challenge and helped show startups that the idea of elastic computing was more than theory.
Hello friends, and welcome back to Week in Review.
Last week, we dove into the truly bizarre machinations of the NFT market. This week, we’re talking about something that’s a little bit more impactful on the current state of the web — Apple’s NeuralHash kerfuffle.
In the past month, Apple did something it generally has done an exceptional job avoiding — the company made what seemed to be an entirely unforced error.
In early August — seemingly out of nowhere** — the company announced that by the end of the year it would roll out a technology called NeuralHash that actively scanned the libraries of all iCloud Photos users, seeking out image hashes that matched known images of child sexual abuse material (CSAM). For obvious reasons, users could not opt out of the on-device scanning.
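NeuralHash itself is a neural-network-based perceptual hash; as a much simpler stand-in to illustrate the matching idea, the sketch below uses a classic "average hash": reduce an image to a tiny grayscale grid, record which pixels are brighter than the mean, and compare the resulting bit string against a set of known hashes by Hamming distance. Everything here (images, threshold, hash set) is invented for illustration:

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values; returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical database of known-image hashes.
known_hashes = {average_hash([[10, 200], [30, 220]])}

def matches(pixels, threshold=1):
    """Flag an image whose hash is within `threshold` bits of a known hash."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= threshold for k in known_hashes)

print(matches([[12, 198], [28, 221]]))  # near-duplicate image → True
print(matches([[200, 10], [220, 30]]))  # unrelated (inverted) image → False
```

Perceptual hashes are deliberately tolerant of small changes, which is also why researchers probing NeuralHash went looking for unrelated images that collide.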
This announcement was not coordinated with other major consumer tech giants; Apple pushed forward on it alone.
Researchers and advocacy groups had almost universally negative feedback for the effort, raising concerns that it could create new abuse channels for actors like governments to detect on-device information that they regarded as objectionable. As my colleague Zach noted in a recent story, “The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.”
(The announcement also reportedly generated some controversy inside of Apple.)
The issue — of course — wasn’t that Apple was looking for ways to prevent the proliferation of CSAM while making as few device security concessions as possible. The issue was that Apple was unilaterally making a massive choice that would affect billions of customers (while likely pushing competitors toward similar solutions), and was doing so without external public input on possible ramifications or necessary safeguards.
Long story short, over the past month researchers discovered Apple’s NeuralHash wasn’t as airtight as hoped, and the company announced Friday that it was delaying the rollout “to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
Having spent several years in the tech media, I will say that the only reason to release news on a Friday morning ahead of a long weekend is to ensure that the announcement is seen by as few people as possible, and it’s clear why Apple would want that. It’s a major embarrassment, and as with any delayed rollout like this, it’s a sign that the internal teams weren’t adequately prepared and lacked the ideological diversity to gauge the scope of the issue they were tackling. This isn’t really a dig at the team building the feature so much as a dig at Apple for trying to solve a problem like this inside the Apple Park vacuum while adhering to its annual iOS release schedule.
Image Credits: Bryce Durbin / TechCrunch
Apple is increasingly looking to make privacy a key selling point for the iOS ecosystem, and as a result of this productization, has pushed development of privacy-centric features towards the same secrecy its surface-level design changes command. In June, Apple announced iCloud+ and raised some eyebrows when they shared that certain new privacy-centric features would only be available to iPhone users who paid for additional subscription services.
You obviously can’t tap public opinion for every product update, but perhaps wide-ranging and trail-blazing security and privacy features should be treated a bit differently than the average product update. Apple’s lack of engagement with research and advocacy groups on NeuralHash was pretty egregious and certainly raises some questions about whether the company fully respects how the choices they make for iOS affect the broader internet.
Delaying the feature’s rollout is a good thing, but let’s all hope they take that time to reflect more broadly as well.
** Though the announcement was a surprise to many, Apple’s development of this feature wasn’t coming completely out of nowhere. Those at the top of Apple likely felt that the winds of global tech regulation might be shifting towards outright bans of some methods of encryption in some of its biggest markets.
Back in October of 2020, then United States AG Bill Barr joined representatives from the UK, New Zealand, Australia, Canada, India and Japan in signing a letter raising major concerns about how implementations of encryption tech posed “significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children.” The letter effectively called on tech industry companies to get creative in how they tackled this problem.
Here are the TechCrunch news stories that especially caught my eye this week:
LinkedIn kills Stories
You may be shocked to hear that LinkedIn even had a Stories-like product on their platform, but if you did already know that they were testing Stories, you likely won’t be so surprised to hear that the test didn’t pan out too well. The company announced this week that they’ll be suspending the feature at the end of the month. RIP.
FAA grounds Virgin Galactic over questions about Branson flight
While all appeared to go swimmingly for Richard Branson’s trip to space last month, the FAA has some questions regarding why the flight seemed to unexpectedly veer so far off the cleared route. The FAA is preventing the company from further launches until they find out what the deal is.
Apple buys a classical music streaming service
While Spotify makes news every month or two for spending a massive amount acquiring a popular podcast, Apple seems to have eyes on a different market for Apple Music, announcing this week that they’re bringing the classical music streaming service Primephonic onto the Apple Music team.
TikTok parent company buys a VR startup
It isn’t a huge secret that ByteDance and Facebook have been trying to copy each other’s success at times, but many probably weren’t expecting TikTok’s parent company to wander into the virtual reality game. The Chinese company bought the startup Pico which makes consumer VR headsets for China and enterprise VR products for North American customers.
Twitter tests an anti-abuse ‘Safety Mode’
The same features that make Twitter an incredibly cool product for some users can also make the experience awful for others, a realization that Twitter has seemingly been very slow to make. Their latest solution is more individual user controls, which Twitter is testing out with a new “safety mode” which pairs algorithmic intelligence with new user inputs.
Some of my favorite reads from our Extra Crunch subscription service this week:
Our favorite startups from YC’s Demo Day, Part 1
“Y Combinator kicked off its fourth-ever virtual Demo Day today, revealing the first half of its nearly 400-company batch. The presentation, YC’s biggest yet, offers a snapshot into where innovation is heading, from not-so-simple seaweed to a Clearco for creators….”
“…Yesterday, the TechCrunch team covered the first half of this batch, as well as the startups with one-minute pitches that stood out to us. We even podcasted about it! Today, we’re doing it all over again. Here’s our full list of all startups that presented on the record today, and below, you’ll find our votes for the best Y Combinator pitches of Day Two. The ones that, as people who sift through a few hundred pitches a day, made us go ‘oh wait, what’s this?’”
All the reasons why you should launch a credit card
“… if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition and that actual customer usage will probably depend on how sticky your service is and how valuable the rewards are that you offer to your most active users….”
TechnologyOne, an Australian enterprise SaaS company, has agreed to acquire UK-based higher education software provider Scientia for £12 million ($16.6 million) in cash.
TechnologyOne claims that 75% of higher education institutions in Australia use its software, while Scientia claims 50% market share in the UK.
The acquisition includes an initial payment of £6 million, with further payments to follow.
Adrian Di Marco, TechnologyOne founder and Executive Chairman said: “This is our company’s first international acquisition and it demonstrates our deep commitment to serving the higher education sector and the UK market. The unique IP and market-leading functionality of Scientia’s product supports our vision of delivering enterprise software that is incredibly easy to use.”
Commenting, Michelle Gillespie, Registrar and Director of Student Administration and Library Services at Swinburne University of Technology said: “The one thing that students care most about is their timetable. Being able to fully integrate a schedule into the full student experience is very important, and an exciting step for those universities – like Swinburne – that use TechnologyOne’s student management system.”
If the past 18 months are any indication, the nature of the workplace is changing. And while Box and Zoom already have integrations together, it makes sense for them to continue to work more closely.
Their newest collaboration is the Box app for Zoom, a new type of in-product integration that allows users to bring apps into a Zoom meeting to provide the full Box experience.
While in Zoom, users can securely and directly access Box to browse, preview and share files from Zoom — even if they are not taking part in an active meeting. This new feature follows a Zoom integration Box launched last year with its “Recommended Apps” section that enables access to Zoom from Box so that workflows aren’t disrupted.
The companies’ chief product officers, Diego Dugatkin with Box and Oded Gal with Zoom, discussed with TechCrunch why seamless partnerships like these are a solution for the changing workplace.
With digitization happening everywhere, an integration of “best-in-breed” products for collaboration is essential, Dugatkin said. Not only that, people don’t want to be moving from app to app, instead wanting to stay in one environment.
“It’s access to content while never having to leave the Zoom platform,” he added.
It’s also access to content and contacts in different situations. When everyone was in an office, meeting at a moment’s notice internally was not a challenge. Now, more people are understanding the value of flexibility, and both Gal and Dugatkin expect that spending some time at home and some time in the office will not change anytime soon.
As a result, across the spectrum of a company, there is an increasing need for allowing and even empowering people to work from anywhere, Dugatkin said. That then leads to a conversation about sharing documents in a secure way for companies, which this collaboration enables.
The new Box and Zoom integration enables meeting in a hybrid workplace: chat, video, audio, computers or mobile devices, and also being able to access content from all of those methods, Gal said.
“Companies need to be dynamic as people make the decision of how they want to work,” he added. “The digital world is providing that flexibility.”
This long-term partnership is just scratching the surface of the continuous improvement the companies have planned, Dugatkin said.
Dugatkin and Gal expect to continue offering seamless integration before, during and after meetings: utilizing Box’s cloud storage, while also offering the ability for offline communication between people so that they can keep the workflow going.
“As Diego said about digitization, we are seeing continuous collaboration enhanced with the communication aspect of meetings day in and day out,” Gal added. “Being able to connect between asynchronous and synchronous with Zoom is addressing the future of work and how it is shaping where we go in the future.”
First, some housekeeping: Thanks to our new corporate parents, TechCrunch has the day off tomorrow, so consider this the last chapter of The Exchange for this week. (The newsletter will go out Saturday as always.) Also, Alex is off next week. Anna is taking on next week’s newsletter and may have a column or two on deck as well.
But before we slow down for a few days, let’s chat about the most recent Y Combinator Demo Day in thematic detail.
If you caught the last few Equity episodes, some of this will be familiar, but we wanted to put a flag in the ground for later reference as we cover startups for the rest of the year.
The Exchange explores startups, markets and money.
What follows is a roundup of trends among Y Combinator startups and how they squared with our expectations.
In a group of nearly 400 startups, you might think it’d be hard to find a category that felt overrepresented, but we’ve managed.
To start, we were surprised by the sheer number of startups in the cohort that were pursuing software models that incorporated no-code and low-code techniques. We expected some, surely, but not the nearly 20 that we compiled this morning.
Startups in the YC batch are building no-code and low-code tools to help developers build faster internal workflows (Tantl), build branded real estate portals (Noloco), sync data between other no-code tools (Whalesync), automate HR (Zazos), and more. Also in the mix were BrightReps, Beau, Alchemy, Hyperseed, Enso, HitPay, Whaly, Muse, Abstra, Lago, Inai and Breadcrumbs.io.
At least 18 companies in the group name-dropped no- and low-code in their pitches. They are taking on a host of industries, from finance and real estate to sales and HR. In short, no- and low-code tools are cropping up in what feels like every sector. It appears that the startup world has decided that helping non-developers build their own tools, workflows and apps is a trend here to stay.
Pixalate raised $18.1 million in growth capital for its fraud protection, privacy and compliance analytics platform that monitors connected television and mobile advertising.
Western Technology Investment and Javelin Venture Partners led the latest funding round, which brings Pixalate’s total funding to $22.7 million to date. This includes a $4.6 million Series A round raised back in 2014, Jalal Nasir, founder and CEO of Pixalate, told TechCrunch.
The company, with offices in Palo Alto and London, analyzes over 5 million apps across five app stores and more than 2 billion IP addresses across 300 million connected television devices to detect and report fraudulent advertising activity for its customers. In fact, there are over 40 types of invalid traffic, Nasir said.
Nasir grew up going to livestock shows with his grandfather and learned how to spot defects in animals, and he has carried that kind of insight to Pixalate, which can detect the difference between real and fake users of content and if fraudulent ads are being stacked or hidden behind real advertising that zaps smartphone batteries or siphons internet usage and even ad revenue.
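One of the invalid-traffic types mentioned above, ad stacking, layers multiple ads in the same slot so that only the top one is visible while all of them record billable impressions. A heavily simplified detector, invented here for illustration and not Pixalate's actual method, flags ad rectangles that cover each other almost completely:

```python
def overlap_ratio(a, b):
    """a, b: (x, y, width, height). Fraction of a's area covered by b."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    w = min(ax2, bx2) - max(a[0], b[0])
    h = min(ay2, by2) - max(a[1], b[1])
    if w <= 0 or h <= 0:
        return 0.0  # rectangles don't intersect
    return (w * h) / (a[2] * a[3])

def stacked(ads, threshold=0.9):
    """Return index pairs of ads that cover each other almost entirely."""
    pairs = []
    for i in range(len(ads)):
        for j in range(i + 1, len(ads)):
            if (overlap_ratio(ads[i], ads[j]) >= threshold and
                    overlap_ratio(ads[j], ads[i]) >= threshold):
                pairs.append((i, j))
    return pairs

# Two identical 300x250 units on top of each other, one legitimate unit.
ads = [(0, 0, 300, 250), (0, 0, 300, 250), (400, 0, 300, 250)]
print(stacked(ads))  # → [(0, 1)]: the stacked pair
```

Real detection works from far messier signals (rendering data, traffic patterns, device behavior), but the geometric intuition is the same: impressions from ads a user could never have seen are invalid.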
Digital advertising is big business. Nasir cited Association of National Advertisers research estimating that $200 billion will be spent globally on digital advertising this year, up from $10 billion a year prior to 2010. Meanwhile, ad fraud is estimated to cost the industry $35 billion, he added.
“Advertisers are paying a premium to be in front of the right audience, based on consumption data,” Nasir said. “Unfortunately, that data may not be authorized by the user or it is being transmitted without their consent.”
While many of Pixalate’s competitors focus on first-party risks, the company is taking a third-party approach, mainly due to people spending so much time on their devices. Some of the insights the company has found include that 16% of Apple’s apps don’t have privacy policies in place, while that number is 22% in Google’s app store. More crime and more government regulations around privacy mean that advertisers are demanding more answers, he said.
The new funding will go toward adding more privacy and data features to its product, doubling the sales and customer teams and expanding its office in London, while also opening a new office in Singapore.
The company has grown revenue 1,200% since 2014 and is gathering over 2 terabytes of data per month. In addition to the five app stores Pixalate already monitors, Nasir intends to add some China-based stores like Tencent and Baidu.
Noah Doyle, managing director at Javelin Venture Partners, is also monitoring the digital advertising ecosystem and said with networks growing, every linkage point exposes a place in an app where bad actors can come in, which was inaccessible in the past, and advertisers need a way to protect that.
“Jalal and Amin (Bandeali) have insight from where the fraud could take place and created a unique way to solve this large problem,” Doyle added. “We were impressed by their insight and vision to create an analytical approach to capturing every data point in a series of transactions — more data than other players in the industry — for comprehensive visibility to help advertisers and marketers maintain quality in their advertising.”
Even without staffing shortages, local merchants have difficulty answering calls while all hands are busy, and Goodcall wants to alleviate some of that burden from America’s 30 million small businesses.
Goodcall’s free cloud-based conversational platform leverages artificial intelligence to manage incoming phone calls and boost customer service for businesses of all sizes. Former Google executive Bob Summers, who had been working on Area 120 (Google’s internal incubator for experimental projects), left the company back in January to start Goodcall after recognizing the call problem, noting that 60% of the calls that come into merchants go unanswered.
“It’s frustrating for you and for the person calling,” Summers told TechCrunch. “Every missed call is a lost opportunity.”
Goodcall announced its launch Wednesday with $4 million in seed funding led by strategic investors Neo, Foothill Ventures, Merus Capital, Xoogler Ventures, Verissimo Ventures and VSC Ventures, as well as angel investors including Harry Hurst, founder and co-CEO of Pipe.com, and Zillow co-founder Spencer Rascoff.
Goodcall mobile agent. Image Credits: Goodcall
Restaurants, shops and merchants can set up on Goodcall in a matter of minutes and even establish a local phone number to free up an owner’s mobile number from becoming the business’ main line. The service is initially deployed in English and the company has plans to operate in Spanish, French and Hindi by 2022.
Merchants can choose from six different assistant voices and monitor the call logs and what the calls were about. Goodcall can also capture consumer sentiment, Summers said.
The company offers three tiers, starting with a freemium service for solopreneurs and business owners that includes up to 500 minutes of Goodcall service per month on a single phone line. The Pro level adds up to five locations and five staff members for $19 per month, while the Premium level provides unlimited locations and staff for $49 per month.
During the company’s beta period, Goodcall was processing several thousand calls per month. The new funding will be used to continue to offer the free service, hire engineers and continue product development.
In addition to the funding round, Goodcall is unveiling a partnership with Yelp to tap into its database of local businesses so that owners and managers can easily deploy Goodcall. Yelp data shows that more than 500,000 businesses opened during the pandemic. Goodcall pulls in a merchant’s open hours, location, Wi-Fi availability and even COVID policy from Yelp.
“We are partnering with Yelp, which has the best data on small businesses, and other large distribution channels to get our product to market,” Summers said. “We are bringing technology into an industry that hasn’t innovated since the 1980s and democratizing conversational AI for small businesses that are the main driver of job creation, and we want to help them grow.”
The digital transformation currently sweeping society has likely reached your favorite local restaurant.
Since 2013, Boston-based Toast has offered bars and eateries a software platform that lets them manage orders, payments and deliveries.
Over the last year, its customers have processed more than $38 billion in gross payment volume, so Alex Wilhelm analyzed the company’s S-1 for The Exchange with great interest.
“Toast was last valued at just under $5 billion when it last raised, per Crunchbase data,” he writes. “And folks are saying that it could be worth $20 billion in its debut. Does that square with the numbers?”
Full Extra Crunch articles are only available to members.
Use discount code ECFriday to save 20% off a one- or two-year subscription.
Airbnb, DoorDash and Coinbase each debuted at past Y Combinator Demo Days; as of this writing, they employ a combined 10,000 people.
Today and tomorrow, TechCrunch reporters will cover the proceedings at YC’s Summer 2021 Demo Day. In addition to writing up founder pitches, they’ll also rank their favorites.
Even remotely, I can feel a palpable sense of excitement radiating from our team — anything can happen at YC Demo Day, so sign up for Extra Crunch to follow the action.
Thanks very much for reading; I hope you have an excellent week.
Senior Editor, TechCrunch
Image Credits: Ron Miller/TechCrunch
In August 2006, AWS activated its EC2 cloud-based virtual computer, a milestone in the cloud infrastructure giant’s development.
“You really can’t overstate what Amazon was able to accomplish,” writes enterprise reporter Ron Miller.
In the 15 years since, EC2 has enabled clients of any size to test and run their own applications on AWS’ virtual machines.
To learn more about a fundamental technological shift that “would help fuel a whole generation of startups,” Ron interviewed EC2 VP Dave Brown, who built and led the Amazon EC2 Frontend team.
Image Credits: Jasmin Merdan / Getty Images
Most managers agree that OKRs foster transparency and accountability, but running a team effectively has different challenges when workers are attending all-hands meetings from their kitchen tables.
Instead of just discussing key metrics before board meetings or performance reviews, make them part of the day-to-day culture, recommends Jeremy Epstein, Gtmhub’s CMO.
“Strengthen your team by creating authentic workplace transparency using numbers as a universal language and providing meaning behind your team’s work.”
Image Credits: Andrii Yalanskyi / Getty Images
Many founders must overcome a few emotional hurdles before they’re comfortable pitching a potential investor face-to-face.
To alleviate that pressure, Unicorn Capital founder Evan Fisher recommends that entrepreneurs use pre-pitch meetings to build and strengthen relationships before asking for a check:
“This is the ‘we actually aren’t looking for money; we just want to be friends for now’ pitch that gets you on an investor’s radar so that when it’s time to raise your next round, they’ll be far more likely to answer the phone because they actually know who you are.”
Pre-pitches are good for more than curing the jitters: These conversations help founders get a better sense of how VCs think and sometimes lead to serendipitous outcomes.
“Investors are opportunists by necessity,” says Fisher, “so if they like the cut of your business’s jib, you never know — the FOMO might start kicking hard.”
Image Credits: MirageC / Getty Images
FischerJordan’s Deeba Goyal and Archita Bhandari break down the pandemic’s impact on alternative lenders, specifically what they had to do to survive the crisis, taking a look at smaller lenders including Credibly, Kabbage, Kapitus and BlueVine.
“Only those who were able to find a way through the complexities of their existing capital sources were able to maintain their performance, and the rest were left to perish or find new funding avenues,” they write.
Image Credits: Nigel Sussman
Customer engagement software company Freshworks’ S-1 filing depicts a company that’s experiencing accelerating revenue growth, “a great sign for the health of its business,” reports Alex Wilhelm in this morning’s The Exchange.
“Most companies see their growth rates decline as they scale, as larger denominators make growth in percentage terms more difficult.”
Studying the company’s SEC filing, he found that “Freshworks isn’t a company where we need to cut it lots of slack, as we might with an adjusted EBITDA number. It is going public ready for Big Kid metrics.”
Fifteen years ago this week, on August 25, 2006, AWS turned on the very first beta instance of EC2, its cloud-based virtual computers. Today cloud computing, and more specifically infrastructure as a service, is a staple of how businesses use computing, but at that moment it wasn’t a well-known or widely understood concept.
The EC in EC2 stands for Elastic Compute, and that name was chosen deliberately. The idea was to provide as much compute power as you needed to do a job, then shut it down when you no longer needed it — making it flexible like an elastic band. The launch of EC2 in beta was preceded by the beta release of S3 storage six months earlier, and both services marked the starting point in AWS’ cloud infrastructure journey.
You really can’t overstate what Amazon was able to accomplish with these moves. It was able to anticipate an entirely different way of computing and create a market and a substantial side business in the process. It took vision to recognize what was coming and the courage to forge ahead and invest the resources necessary to make it happen, something that every business could learn from.
The AWS origin story is complex, but it was about bringing the IT power of the Amazon business to others. Amazon at the time was not the business it is today, but it was still rather substantial, and it still had to deal with massive traffic fluctuations, such as on Black Friday, when its website would be flooded with visitors for a short but sustained period. While the goal of an e-commerce site, and indeed every business, is attracting as many customers as possible, keeping the site up under such stress takes some doing, and Amazon was learning how to do that well.
Those lessons and a desire to bring the company’s internal development processes under control would eventually lead to what we know today as Amazon Web Services, and that side business would help fuel a whole generation of startups. We spoke to Dave Brown, who is VP of EC2 today, and who helped build the first versions of the tech, to find out how this technological shift went down.
The genesis of the idea behind AWS started in the 2000 timeframe when the company began looking at creating a set of services to simplify how they produced software internally. Eventually, they developed a set of foundational services — compute, storage and database — that every developer could tap into.
But the idea of selling that set of services really began to take shape at an executive offsite at Jeff Bezos’ house in 2003. A 2016 TechCrunch article on the origins of AWS described how that started to come together:
As the team worked, Jassy recalled, they realized they had also become quite good at running infrastructure services like compute, storage and database (due to those previously articulated internal requirements). What’s more, they had become highly skilled at running reliable, scalable, cost-effective data centers out of need. As a low-margin business like Amazon, they had to be as lean and efficient as possible.
They realized that those skills and abilities could translate into a side business that would eventually become AWS. It would take a while to put these initial ideas into action, but by December 2004, the company had opened an engineering office in South Africa to begin building what would become EC2. As Brown explains it, the company was looking to expand outside of Seattle at the time, and Chris Pinkham, who was a director in those days, hailed from South Africa and wanted to return home.
Linux is set for a big release this Sunday, August 29, setting the stage for enterprise and cloud applications for months to come. The 5.14 kernel update will include security and performance improvements.
Security is always a particular area of interest for both enterprise and cloud users, and to that end, Linux 5.14 will help with several new capabilities. Mike McGrath, vice president of Linux Engineering at Red Hat, told TechCrunch that the kernel update includes a feature known as core scheduling, which is intended to help mitigate processor-level vulnerabilities like Spectre and Meltdown, which first surfaced in 2018. One of the ways Linux users have had to mitigate those vulnerabilities is by disabling hyper-threading on CPUs, and therefore taking a performance hit.
“More specifically, the feature helps to split trusted and untrusted tasks so that they don’t share a core, limiting the overall threat surface while keeping cloud-scale performance relatively unchanged,” McGrath explained.
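Core scheduling is exposed to userspace through a new prctl(2) interface in 5.14. As a rough sketch only (this is not Red Hat's code; the constant values are taken from the 5.14 uapi headers, and the call simply fails on older kernels), a task can place itself in its own core-scheduling group like this:

```python
import ctypes

# Constants from the Linux 5.14 uapi headers (linux/prctl.h). This is an
# illustrative sketch of the new interface, not a vendor implementation.
PR_SCHED_CORE = 62
PR_SCHED_CORE_CREATE = 1   # create a new core-scheduling cookie
PIDTYPE_PID = 0            # the cookie applies to a single task

def enable_core_scheduling(pid=0):
    """Tag a task (0 = the calling task) with a core-scheduling cookie.

    Tasks with different cookies are never co-scheduled on sibling
    hyper-threads of one physical core, so untrusted code cannot snoop
    on trusted code through Spectre/Meltdown-style side channels.
    Returns True on success, False where the kernel lacks the feature.
    """
    try:
        libc = ctypes.CDLL(None, use_errno=True)
        ret = libc.prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE,
                         ctypes.c_ulong(pid), PIDTYPE_PID, 0)
        return ret == 0
    except (OSError, AttributeError):  # non-Linux system or very old libc
        return False

print("core scheduling supported:", enable_core_scheduling())
```

The upshot is that hyper-threading can stay enabled, with the kernel guaranteeing that only mutually trusting tasks ever share a core.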
Another area of security innovation in Linux 5.14 is a feature, in development for over a year and a half, that protects system memory better than before. Attacks against Linux and other operating systems often target memory as a primary attack surface to exploit. With the new kernel, a capability known as memfd_secret() enables an application running on a Linux system to create a memory range that is inaccessible to anyone else, including the kernel.
“This means cryptographic keys, sensitive data and other secrets can be stored there to limit exposure to other users or system activities,” McGrath said.
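For a sense of how an application might call the new capability, here is a minimal, hedged sketch. The syscall number (447 on x86_64) is an assumption from the 5.14 sources, the syscall has no libc wrapper at this point, and on 5.14 the feature may also require the secretmem.enable=1 boot parameter, so the sketch degrades gracefully if it is unavailable:

```python
import ctypes
import os

# memfd_secret() landed in Linux 5.14 and must be invoked via syscall(2);
# 447 is the x86_64 syscall number. These details are assumptions for
# illustration, and the code falls back cleanly where unsupported.
SYS_MEMFD_SECRET = 447

def memfd_secret():
    """Return a secret-memory fd, or None if the kernel lacks the syscall."""
    try:
        libc = ctypes.CDLL(None, use_errno=True)
        fd = libc.syscall(SYS_MEMFD_SECRET, 0)
    except (OSError, AttributeError):
        return None
    return fd if fd >= 0 else None

fd = memfd_secret()
if fd is None:
    print("memfd_secret: not available here")
else:
    # Pages mapped from this fd are removed from the kernel's direct map,
    # so even the kernel cannot read them -- a home for cryptographic keys.
    print("memfd_secret: created fd", fd)
    os.close(fd)
```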
At the heart of the open source Linux operating system that powers much of the cloud and enterprise application delivery is what is known as the Linux kernel. The kernel is the component that provides the core functionality for system operations.
The Linux 5.14 kernel release has gone through seven release candidates over the last two months and benefits from the contributions of 1,650 different developers. Those who contribute to Linux kernel development include individual contributors as well as large vendors like Intel, AMD, IBM, Oracle and Samsung. One of the largest contributors to any given Linux kernel release is IBM’s Red Hat business unit. IBM acquired Red Hat for $34 billion in a deal that closed in 2019.
“As with pretty much every kernel release, we see some very innovative capabilities in 5.14,” McGrath said.
While Linux 5.14 will be out soon, it often takes time until it is adopted inside of enterprise releases. McGrath said that Linux 5.14 will first appear in Red Hat’s Fedora community Linux distribution and will be a part of the future Red Hat Enterprise Linux 9 release. Gerald Pfeifer, CTO for enterprise Linux vendor SUSE, told TechCrunch that his company’s openSUSE Tumbleweed community release will likely include the Linux 5.14 kernel within ‘days’ of the official release. On the enterprise side, he noted that SUSE Linux Enterprise 15 SP4, due next spring, is scheduled to come with Kernel 5.14.
The new Linux update follows a major milestone for the open source operating system, as it was 30 years ago this past Wednesday that creator Linus Torvalds (pictured above) first publicly announced the effort. Over that time Linux has gone from being a hobbyist effort to powering the infrastructure of the internet.
McGrath commented that Linux is already the backbone for the modern cloud and Red Hat is also excited about how Linux will be the backbone for edge computing – not just within telecommunications, but broadly across all industries, from manufacturing and healthcare to entertainment and service providers, in the years to come.
The longevity and continued importance of Linux for the next 30 years is assured in Pfeifer’s view. He noted that over the decades Linux and open source have opened up unprecedented potential for innovation, coupled with openness and independence.
“Will Linux, the kernel, still be the leader in 30 years? I don’t know. Will it be relevant? Absolutely,” he said. “Many of the approaches we have created and developed will still be pillars of technological progress 30 years from now. Of that I am certain.”
Since 2017, Microsoft has offered its Office suite to Chromebook users via the Google Play store, but that is set to come to an end in a few short weeks.
As of Sept. 18, Microsoft is discontinuing support for Office, which includes Word, Excel, PowerPoint, OneNote and Outlook, on Chromebook. Microsoft is not, however, abandoning the popular devices altogether. Rather than a downloaded app, Microsoft is encouraging users to go to the web instead.
“In an effort to provide the most optimized experience for Chromebook customers, Microsoft apps (Office and Outlook) will be transitioned to web experiences (Office.com and Outlook.com) on September 18, 2021,” Microsoft wrote in a statement emailed to TechCrunch.
Microsoft’s statement also noted that “this transition brings Chromebook customers access to additional and premium features.”
The Microsoft web experience will serve to transition its base of Chromebook users to the Microsoft 365 service, which provides more Office templates and generally more functionality than what the app-based approach provides. The web approach is also more optimized for larger screens than the app.
In terms of how Microsoft wants Chromebook users to get access to Office and Outlook, the plan is for customers to “…sign in with their personal Microsoft Account or account associated with their Microsoft 365 subscription,” according to the statement. Microsoft has also provided online documentation to show users how to run Office on a Chromebook.
Chromebooks run on Google’s Chrome OS, which is a Linux-based operating system. Chromebooks also enable Android apps to run, as Android is also Linux based, with apps downloaded from Google Play. It’s important to note that while support for Chromebooks is going away, Microsoft is not abandoning other Android-based mobile devices, such as tablets and smartphones.
For those Chromebook users who have already downloaded the Microsoft Office apps, the apps will continue to function after September 18, though they will not receive any support or future updates.
DevOps teams are constantly trying to improve so they can deliver software more quickly and reliably, but they often lack the insights needed to actually make that progress.
Atlassian is now offering users of its Jira Software Cloud platform a series of new capabilities that provide data-driven insights into the development process. Jira is a popular issue and project tracking technology and has included features that help developers and their teams to understand where they are in their workflow.
The new insights go a step beyond what Jira has traditionally provided to its users, with specific insights into different aspects of an agile software development approach. The goal with the new insights is to help organizations better understand what they’re doing right and where development teams can improve, which ultimately results in improved overall efficiency.
“Data is everywhere, but at the same time the insights and the understanding of the actions that you can take are kind of nowhere,” Megan Cook, head of product for Jira Software, told TechCrunch. “It’s hard to work smarter in that sense and that’s the big problem that we’re really looking at tackling.”
Cook explained that development teams need access to metrics on their own progress, so they can make smarter data-driven decisions based on what’s happening in real time. She noted that one of the big shifts that Atlassian is now doing with Jira Cloud is bringing data from all the different development tracking tools together into one place where those teams can make decisions.
One example of the insights that Jira Cloud now provides to users is related to sprint commitments. In the agile software development approach, software is developed in what are known as “sprints” as developers race to complete a certain task. With the sprint commitment insight capability, the idea is to help teams understand what amount of work they can handle, based on past performance. The business goal is to help better understand if a team is over- or under-committing to a given sprint.
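As a toy illustration of the idea (not Atlassian's actual model, which is unpublished), a commitment check against past velocity might look like:

```python
def commitment_insight(history, planned_points):
    """Compare a sprint plan against the team's recent completed work.

    history: story points actually completed in recent sprints.
    planned_points: what the team wants to commit to next sprint.
    Returns the average velocity and a rough verdict. The 20% band is an
    invented heuristic for illustration only.
    """
    velocity = sum(history) / len(history)
    if planned_points > velocity * 1.2:
        verdict = "over-committing"
    elif planned_points < velocity * 0.8:
        verdict = "under-committing"
    else:
        verdict = "on target"
    return {"avg_velocity": velocity, "verdict": verdict}

# A team averaging ~31 points per sprint plans 45 -- well above its history.
print(commitment_insight([30, 34, 28, 32], 45))
```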
Another example is providing an issue type breakdown. Cook explained that the way each team can categorize issues can be very personalized. The categories can include different types of projects, such as whether a project is dealing with fixing bugs and technical debt, or if it’s an innovation or growth product, or just an incremental feature update. With the issue type breakdown insight there is a visualization to help teams better understand what types of issues and projects they are working on in a more intuitive approach than before. Cook explained that users could have identified the different issues before via a search functionality, but she emphasized the new insights approach is far easier.
In the coming weeks, Cook said that the company will be adding a few additional insights, including the sprint burndown insight. In the agile software development approach, the burndown is about figuring out what’s left to finish in a sprint. The sprint burndown insight will provide a visual indicator of how much work is left to be done as well as how likely it is that the work will be completed within an allocated amount of time.
Atlassian’s approach to enabling developer teams to work more efficiently is one of the primary values that the company has been building for years, and it has resulted in strong growth overall. Atlassian reported fourth-quarter fiscal 2021 revenue of $560 million, up 30% year over year, on the strength of its developer collaboration and management tools.
Cloud security startup Monad, which offers a platform for extracting and connecting data from various security tools, has launched from stealth with $17 million in Series A funding led by Index Ventures.
Monad was founded on the belief that enterprise cybersecurity is a growing data management challenge, as organizations try to understand and interpret the masses of information siloed within disconnected logs and databases. Once an organization has extracted data from its security tools, Monad’s Security Data Platform enables it to centralize that data within a data warehouse of choice, then normalize and enrich the data so that security teams have the insights they need to secure their systems and data effectively.
“Security is fundamentally a big data problem,” said Christian Almenar, CEO and co-founder of Monad. “Customers are often unable to access their security data in the streamlined manner that DevOps and cloud engineering teams need to build their apps quickly while also addressing their most pressing security and compliance challenges. We founded Monad to solve this security data challenge and liberate customers’ security data from siloed tools to make it accessible via any data warehouse of choice.”
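A minimal sketch of what "normalize and enrich" can mean in practice, with invented tool names and schema fields (Monad's real connectors and schema are not public):

```python
# Map alerts from two hypothetical security scanners onto one common
# schema so they can be queried together in a warehouse. All field names
# here are invented for illustration.
def normalize(event, source):
    """Translate a tool-specific alert into the shared schema."""
    if source == "scanner_a":
        return {"host": event["hostname"], "severity": event["sev"].lower(),
                "rule": event["check_id"], "source": source}
    if source == "scanner_b":
        return {"host": event["asset"], "severity": event["risk_level"],
                "rule": event["finding"], "source": source}
    raise ValueError(f"unknown source: {source}")

events = [
    ({"hostname": "web-1", "sev": "HIGH", "check_id": "CVE-2021-1234"}, "scanner_a"),
    ({"asset": "db-7", "risk_level": "high", "finding": "open-port-5432"}, "scanner_b"),
]
rows = [normalize(e, s) for e, s in events]

# Once in one schema, cross-tool questions become simple queries,
# e.g. "all high-severity findings by host".
high = [r["host"] for r in rows if r["severity"] == "high"]
print(high)  # -> ['web-1', 'db-7']
```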
The startup’s Series A funding round, which was also backed by Sequoia Capital, brings its total amount of investment raised to $19 million and comes 12 months after its Sequoia-led seed round. The funds will enable Monad to scale its development efforts for its security data cloud platform, the startup said.
Monad was founded in May 2020 by security veterans Christian Almenar and Jacolon Walker. Almenar previously co-founded serverless security startup Intrinsic which was acquired by VMware in 2019, while Walker served as CISO and security engineer at OpenDoor, Collective Health, and Palantir.
Connecting to all the services and microservices that a modern cloud native enterprise application requires can be a complicated task. It’s an area that startup Solo.io is trying to disrupt with the new release of its Gloo Mesh Enterprise platform.
Based in Cambridge, Massachusetts, Solo has focused since its founding on a concept known as a service mesh. A service mesh provides an optimized, automated way to connect different components together, often inside of a Kubernetes cloud native environment.
Idit Levine, founder and CEO at Solo, explained to TechCrunch that she knew from the outset when she started the company in 2017 that it might take a few years until the market understood the concept of the service mesh and why it is needed. That’s why her company also built out an API gateway technology that helps developers connect APIs, which can be different data sources or services.
Until this week, the API and service mesh components of Solo’s Gloo Mesh Enterprise offering were separate technologies, with different configurations and control planes. That is now changing with the integration of both API and service mesh capabilities into a unified service. The integrated capabilities should make it easier to set up and configure all manner of services in the cloud that are running on Kubernetes.
Solo’s service mesh, known as Gloo Mesh, is based on the open source Istio project, which was created by Google. The API product is called Gloo Edge, which uses the open source Envoy project, originally created by ride sharing company Lyft. Levine explained that her team has now used Istio’s plugin architecture to connect with Envoy in an optimized approach.
Levine noted that many users start off with an API gateway and then extend to using the service mesh. With the new Gloo Mesh Enterprise update, she expects customer adoption to accelerate further as Solo will be able to differentiate against rivals in both the service mesh and API management markets.
While the service mesh space, which includes rivals such as Tetrate, is still emerging, API gateways are a more mature technology. There are a number of established vendors in the API management space, including Kong, which has raised $71 million in funding. Back in 2016, Google acquired API vendor Apigee for $625 million and has been expanding the technology in the years since, including with the Apigee X platform announced in February of this year.
With the integration of Gloo Edge for API management into Gloo Mesh Enterprise, Solo isn’t quite covering all the bases for API technology, yet. Gloo Edge supports REST based APIs, which are by far the most common today, though it doesn’t support the emerging GraphQL API standard, which is becoming increasingly popular. Levine told us to ‘stay tuned’ for a future GraphQL announcement for Solo and its platform.
Solo has raised a total of $36.5 million across two rounds, with an $11 million Series A in 2018 and a $23 million Series B announced in October 2020. The company’s investors include Redpoint and True Ventures.
Just days after Elastic announced the acquisition of build.security, the company is making yet another security acquisition. As part of its second-quarter earnings announcement this afternoon, Elastic disclosed that it is acquiring Vancouver, Canada-based security vendor CMD. Financial terms of the deal are not being publicly disclosed.
CMD‘s technology provides runtime security for cloud infrastructure, helping organizations gain better visibility into processes that are running. The startup was founded in 2016 and has raised $21.6 million in funding to date. The company’s last round was a $15 million Series B that was announced in 2019, led by GV.
Elastic CEO and co-founder Shay Banon told TechCrunch that his company will be welcoming the employees of CMD into his company, but did not disclose precisely how many would be coming over. CMD CEO and co-founder Santosh Krishan and his fellow co-founder Jake King will both be taking executive roles within Elastic.
Both build.security and CMD are set to become part of Elastic’s security organization. The two technologies will be integrated into the Elastic Stack platform that provides visibility into what an organization is running, as well as security insights to help limit risk. Elastic has been steadily growing its security capabilities in recent years, acquiring Endgame Security in 2019 for $234 million.
Banon explained that, as organizations increasingly move to the cloud and make use of Kubernetes, they are looking for more layers of introspection and protection for Linux. That’s where CMD’s technology comes in. CMD’s security service is built with an open source technology known as eBPF. With eBPF, it’s possible to hook into a Linux operating system for visibility and security control. Work is currently ongoing to extend eBPF for Windows workloads, as well.
CMD isn’t the only startup that has been building on eBPF. Isovalent, which announced a $29 million Series A round led by Andreessen Horowitz and Google in November 2020, is also active in the space. The Linux Foundation also recently announced the creation of an eBPF Foundation, with the participation of Facebook, Google, Microsoft, Netflix and Isovalent.
Fundamentally, Banon sees a clear alignment between what CMD was building and what Elastic aims to deliver for its users.
“We have a saying at Elastic – while you observe, why not protect?” Banon said. “With CMD if you look at everything that they do, they also have this deep passion and belief that it starts with observability.”
It will take time for Elastic to integrate the CMD technology into the Elastic Stack, though it won’t be too long. Banon noted that one of the benefits of acquiring a startup is that it’s often easier to integrate than a larger, more established vendor.
“With all of these acquisitions that we make we spend time integrating them into a single product line,” Banon said.
That means Elastic needs to take the technology that other companies have built and fold it into its stack and that sometimes can take time, Banon explained. He noted that it took two years to integrate the Endgame technology after that acquisition.
“Typically that lends itself to us joining forces with smaller companies with really innovative technology that can be more easily taken and integrated into our stack,” Banon said.
At a time when remote work, cybersecurity attacks and increased privacy and compliance requirements threaten a company’s data, more companies are collecting and storing their observability data, but are being locked in with vendors or have difficulty accessing the data.
Enter Cribl. The San Francisco-based company is developing an “open ecosystem of data” for enterprises that utilizes unified data pipelines, called “observability pipelines,” to parse and route any type of data that flows through a corporate IT system. Users can then choose their own analytics tools and storage destinations like Splunk, Datadog and Exabeam, but without becoming dependent on a vendor.
The company announced Wednesday a $200 million round of Series C funding to value Cribl at $1.5 billion, according to a source close to the company. Greylock and Redpoint Ventures co-led the round and were joined by new investor IVP, existing investors Sequoia and CRV and strategic investment from Citi Ventures and CrowdStrike. The new capital infusion gives Cribl a total of $254 million in funding since the company was started in 2017, Cribl co-founder and CEO Clint Sharp told TechCrunch.
Sharp did not discuss the valuation; however, he believes that the round is “validation that the observability pipeline category is legit.” Data is growing at a compound annual growth rate of 25%, and organizations are collecting five times more data today than they did 10 years ago, he explained.
“Ultimately, they want to ask and answer questions, especially for IT and security people,” Sharp added. “When Zoom sends data on who started a phone call, that might be data I need to know so I know who is on the call from a security perspective and who they are communicating with. Also, who is sending files to whom and what machines are communicating together in case there is a malicious actor. We can also find out who is having a bad experience with the system and what resources they can access to try and troubleshoot the problem.”
Cribl also enables users to choose how they want to store their data, which is different from competitors that often lock companies into using only their products. Instead, customers can buy the best products from different categories and they will all talk to each other through Cribl, Sharp said.
Though Cribl is developing a pipeline for data, Sharp sees it more as an “observability lake,” as more companies have differing data storage needs. He explains that the lake is where all of the data that doesn’t need to go into an existing storage solution ends up. The pipelines will send the data to specific tools, and what doesn’t fit will go back into the lake so companies can return to it later, keeping the data longer and more cost-effectively.
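The routing-plus-lake idea can be sketched in a few lines; the rules and destinations below are invented for illustration and are not Cribl's API:

```python
# A toy "observability pipeline": events go to the first matching
# destination, and everything that doesn't fit lands in a catch-all lake
# for later reprocessing. Destinations here are plain lists standing in
# for tools like a SIEM or a metrics store.
def route(event, routes, lake):
    """Send an event to the first matching destination, else to the lake."""
    for predicate, destination in routes:
        if predicate(event):
            destination.append(event)
            return
    lake.append(event)

siem, metrics, lake = [], [], []
routes = [
    (lambda e: e.get("type") == "auth", siem),    # security events -> SIEM
    (lambda e: e.get("type") == "cpu", metrics),  # perf data -> metrics store
]

for event in [{"type": "auth", "user": "alice"},
              {"type": "cpu", "load": 0.7},
              {"type": "debug", "msg": "verbose trace"}]:
    route(event, routes, lake)

print(len(siem), len(metrics), len(lake))  # -> 1 1 1
```

The key design point is that the analytics tools only see the data they need, while nothing is thrown away.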
Cribl said it is seven times more efficient at processing event data and boasts a customer list that includes Whole Foods, Vodafone, FINRA, Fannie Mae and Cox Automotive.
Sharp went after additional funding after seeing huge traction in Cribl’s existing customer base, saying that “when you see that kind of traction, you want to keep doubling down.” His aim is to have a presence in every North American city and in Europe, to continue launching new products and to grow the engineering team.
Up next, the company is focusing on go-to-market and engineering growth. Its headcount is 150 currently, and Sharp expects to grow that to 250 by the end of the year.
Over the last fiscal year, Cribl grew its revenue 293%, and Sharp expects that same trajectory for this year. The company is now at a growth stage, and with the new investment, he believes Cribl is the “future leader in observability.”
“This is a great investment for us, and every dollar, we believe, is going to create an outsized return as we are the only commercial company in this space,” he added.
Scott Raney, managing director at Redpoint Ventures, said his firm is a big enterprise investor in software, particularly in companies that help organizations leverage data to protect themselves, a sweet spot that Cribl falls into.
He feels Sharp, who came from Splunk, is leading a team that has accomplished a lot, has a vision and a handle on the business, and knows the market well. Where Splunk captures machine data and uses its systems to extract it, Cribl is doing something similar in directing the data where it needs to go, while also enabling companies to use multiple vendors and build apps that sit on top of its infrastructure.
“Cribl is adding opportunity by enriching the data flowing through, and the benefits are going to be meaningful in cost reduction,” Raney said. “The attitude out there is to put data in cheaper places, and afford more flexibility to extract data. Step one is to make that transition, and step two is how to drive the data sitting there. Cribl is doing something that will go from being a big business to a legacy company 30 years from now.”
ForgeRock filed its form S-1 with the Securities and Exchange Commission (SEC) this morning as the identity management provider takes the next step toward its IPO.
The company did not provide initial pricing for its shares, which will trade on the New York Stock Exchange under the symbol FORG. The IPO is being led by Morgan Stanley and J.P. Morgan Chase & Co., with the company being valued as high as $4 billion, according to Bloomberg, which is a significant uplift over the $730 million post-money value that PitchBook had for the company after its last round in 2020.
With the ever-increasing volume of cybersecurity attacks against organizations of all sizes, the need to secure and manage user identities is of growing importance. Based in San Francisco, ForgeRock has raised $233 million in funding across multiple rounds. The company’s last round was a $93.5 million Series E announced in April 2020, which was led by Riverwood Capital alongside Accenture Ventures. At that time, CEO Fran Rosch told TechCrunch that the round would be the last before an IPO, which was also what former CEO Mike Ellis told us after the startup’s $88 million Series D in September 2017.
While the timing of its IPO might have been unclear over the last few years, the company has been on a positive trajectory for growth. In its S-1, ForgeRock reported that as of June 30, its annual recurring revenue (ARR) was $155 million, representing 30% year-over-year growth.
While revenue is growing, losses are narrowing: the company reported a $20 million net loss, down from $36 million a year ago. There is certainly a whole lot of room to grow, as the company estimates the total global addressable market for identity services to be worth $71 billion.
Among the many competitors that ForgeRock faces is Okta, which went public in 2017 and has been growing in the years since. In March, Okta acquired cloud identity startup Auth0 for $6.5 billion in a deal that raised a few eyebrows. Another competitor is Ping Identity, which went public in 2019 and is also growing, reporting on August 4 that its ARR hit $279.6 million in its quarter ended June 30, for a 19% year-over-year gain. There have also been a few big exits in the space over the years, including Duo Security, which was acquired by Cisco for $2.35 billion in 2018.
“ForgeRock has a good access management tool and they continue to be a strong player in customer identity and access management (CIAM),” commented Michael Kelley, senior research director at Gartner.
Kelley noted that in 2020, ForgeRock converted most of its core access management services to a SaaS delivery model, which helped the company catch up with the rest of the market that already offered access management as SaaS. Also last year, the company expanded into identity governance, introducing a brand-new identity governance and administration (IGA) product.
“I think one of the more interesting products that ForgeRock offers is ForgeRock Trees, which is a no-code/low-code orchestration tool for building complex authentication and authorization journeys for customers, which is particularly helpful in the CIAM market,” Kelley added.
ForgeRock was founded in 2010, but its roots go back even further to an open-source single sign-on project known as OpenSSO that was created by Sun Microsystems in 2005. When Oracle acquired Sun Microsystems in early 2010, a number of its open-source efforts were left to languish, which is what led a number of former Sun employees to start ForgeRock.
Over the last decade, ForgeRock has expanded significantly beyond just providing a single sign-on to providing an identity platform that can handle consumer, enterprise and IoT use-cases. The company’s platform today handles identity and access management as well as identity governance.
The ability to scale is a key selling point that ForgeRock makes in the S-1, noting that its platform can handle over 60,000 user-based access transactions per second per customer.
“As of June 30, 2021, we had four customers with 100 million or more licensed identities,” the company stated in the S-1. “Our ability to serve mission-critical needs in complex environments for large customers enables us to grow our base of large customers and expand within each of them.”
Microsoft is moving into the next phase of its plan to bring Xbox Cloud Gaming to as many devices as possible, and it’s one of the most important steps yet. Starting this holiday season, Xbox Game Pass Ultimate subscribers will have access to cloud gaming on Xbox Series X/S and Xbox One consoles.
The company, which made the announcement during its Gamescom showcase, said you’ll be able to fire up more than 100 games without having to download them first. At some point in the future, Xbox One owners will be able to play some Series X/S games through the cloud, such as Microsoft Flight Simulator. You’ll know a title is cloud gaming-compatible if you see a cloud icon next to it in the Game Pass library. Microsoft is targeting 1080p gameplay at 60 frames per second.
Xbox Cloud Gaming is already available on phones, tablets and PC. Microsoft is also working on Xbox game streaming sticks as well as a smart TV cloud gaming app. This summer, the company started transitioning cloud gaming onto beefier Xbox Series X hardware after launching the service on Xbox One S-based blade servers.
Editor’s note: This post originally appeared on Engadget.
Less than a year after raising its $6 million seed funding round, Tel Aviv and Sunnyvale-based startup Build.security is being acquired by Elastic. Financial terms of the deal are not being publicly disclosed at this time. The deal is expected to close in Elastic’s Q2 FY22, ending Oct. 31, 2021.
In an email to TechCrunch, Ash Kulkarni, chief product officer at Elastic, said that once the acquisition closes, the build.security technical team will continue as a unit in the Elastic Security organization. Kulkarni added that the acquisition will also become the foundation for a growing Elastic presence in Israel, with Amit Kanfer, co-founder and CEO of build.security set to become the site lead for the region.
Build.security is focused on security policy management for applications. A core element of the company’s technology approach is the Open Policy Agent (OPA) open source project, which is hosted by the Cloud Native Computing Foundation (CNCF), also home to Kubernetes. OPA was originally started by startup Styra, which itself has raised $40 million in funding to help build out policy management and authorization technology. OPA includes the Rego query language, which is used to express security and authorization policies as structured configuration.
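The core idea behind OPA and Rego is policy as code: an authorization decision is computed from a structured input document rather than hard-coded into the application. As a rough illustration only — this is a toy Python sketch of that decision model, not OPA’s actual API, and the input shape here is made up — a typical “allow” rule might look like:

```python
# Toy policy-as-code sketch: "allow if the user has the admin role,
# or if the user owns the resource being accessed."
# In OPA this rule would be written declaratively in Rego and evaluated
# against a JSON input document; the request shape below is hypothetical.

def allow(request: dict) -> bool:
    """Return True if the request satisfies the policy."""
    user = request.get("user", {})
    resource = request.get("resource", {})
    return (
        "admin" in user.get("roles", [])
        or user.get("id") == resource.get("owner")
    )

print(allow({"user": {"id": "alice", "roles": ["admin"]}}))   # True
print(allow({"user": {"id": "bob", "roles": []},
             "resource": {"owner": "carol"}}))                # False
```

Decoupling the rule from the application this way is what lets a policy engine enforce the same logic across many services and environments.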
“We see policy as a fundamental cornerstone of security,” Kulkarni said. “OPA and Rego provide an open, standards-based way to define, manage, and enforce policies everywhere.”
Kulkarni noted that security policy technology is complementary to Elastic’s efforts in security and observability. He added that Elastic sees potential for using OPA, and the technology that build.security has built on top of OPA, to power deployment-time and, in the future, build-time security for cloud-native environments.
YL Ventures partner John Brennan, who helped lead build.security’s seed round, sees the acquisition as a good fit for both companies, as both build developer-focused solutions based on open source technologies.
“This move by a market leader like Elastic validates the need for transformation in the authorization space,” Brennan said. “This partnership will accelerate build.security’s shift left vision of efficiently embedding access protection from the start, rather than trying to bolt it on after the fact or, worse, ignoring it completely.”
Elastic is known for its Elastic Stack, which provides Elasticsearch search capability, Logstash log monitoring and Kibana data visualization. In recent years the company has expanded into the security space, acquiring Endgame Security in 2019 for $234 million. On Aug. 3, Elastic announced its Limitless XDR capabilities, which bring together endpoint security with security information and event management (SIEM).
With its new acquisition, Kulkarni said the goal is to go even deeper into security, moving toward cloud security enforcement. He explained that after the acquisition closes and as the technology is integrated, users will be able to leverage the Elastic Stack to visualize and manage compliance policies and policy decisions at scale. An initial use case for the build.security technology will be developing a Kubernetes security and compliance product based on OPA.