
The time Animoto almost brought AWS to its knees

By Ron Miller

Today, Amazon Web Services is a mainstay in the cloud infrastructure services market, a $60 billion juggernaut of a business. But in 2008, it was still new, working to keep its head above water and handle growing demand for its cloud servers. In fact, 15 years ago last week, the company launched Amazon EC2 in beta. From that point forward, AWS offered startups unlimited compute power, a primary selling point at the time.

EC2 was one of the first real attempts to sell elastic computing at scale — that is, server resources that would scale up as you needed them and go away when you didn’t. As Jeff Bezos said in an early sales presentation to startups back in 2008, “you want to be prepared for lightning to strike, […] because if you’re not that will really generate a big regret. If lightning strikes, and you weren’t ready for it, that’s kind of hard to live with. At the same time you don’t want to prepare your physical infrastructure, to kind of hubris levels either in case that lightning doesn’t strike. So, [AWS] kind of helps with that tough situation.”

An early test of that value proposition occurred in 2008, when one of its startup customers, Animoto, scaled from 25,000 to 250,000 users in a four-day period shortly after launching its Facebook app at South by Southwest.

At the time, Animoto was an app aimed at consumers that allowed users to upload photos and turn them into a video with a backing music track. While that product may sound tame today, it was state of the art back in those days, and it used up a fair amount of computing resources to build each video. It was an early representation of not only Web 2.0 user-generated content, but also the marriage of mobile computing with the cloud, something we take for granted today.

For Animoto, launched in 2006, choosing AWS was a risky proposition, but the company found trying to run its own infrastructure was even more of a gamble because of the dynamic nature of the demand for its service. Spinning up its own servers would have involved huge capital expenditures. Animoto initially went that route, since it was building prior to attracting initial funding, before turning its attention to AWS, explained Brad Jefferson, the company's co-founder and CEO.

“We started building our own servers, thinking that we had to prove out the concept with something. And as we started to do that and got more traction from a proof-of-concept perspective and started to let certain people use the product, we took a step back and were like, well, it’s easy to prepare for failure, but we need to prepare for success,” Jefferson told me.

Going with AWS may seem like an easy decision knowing what we know today, but in 2007 the company was really putting its fate in the hands of a mostly unproven concept.

“It’s pretty interesting just to see how far AWS has gone and EC2 has come, but back then it really was a gamble. I mean we were talking to an e-commerce company [about running our infrastructure]. And they’re trying to convince us that they’re going to have these servers and it’s going to be fully dynamic and so it was pretty [risky]. Now in hindsight, it seems obvious but it was a risk for a company like us to bet on them back then,” Jefferson told me.

Animoto not only had to trust that AWS could do what it claimed, but also had to spend six months rearchitecting its software to run on Amazon’s cloud. But as Jefferson crunched the numbers, the choice made sense. At the time, Animoto’s business model was free for a 30-second video, $5 for a longer clip or $30 for a year. As he tried to model the level of resources his company would need to make that model work, it got really difficult, so he and his co-founders decided to bet on AWS and hope it worked when and if a surge of usage arrived.

That test came the following year at South by Southwest when the company launched a Facebook app, which led to a surge in demand, in turn pushing the limits of AWS’s capabilities at the time. A couple of weeks after the startup launched its new app, interest exploded and Amazon was left scrambling to find the appropriate resources to keep Animoto up and running.

Dave Brown, who today is Amazon’s VP of EC2 and was an engineer on the team back in 2008, said that “every [Animoto] video would initiate, utilize and terminate a separate EC2 instance. For the prior month they had been using between 50 and 100 instances [per day]. On Tuesday their usage peaked at around 400, Wednesday it was 900, and then 3,400 instances as of Friday morning.” Animoto was able to keep up with the surge of demand, and AWS was able to provide the necessary resources to do so. Its usage eventually peaked at 5,000 instances before it settled back down, proving in the process that elastic computing could actually work.
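The pattern Brown describes, one short-lived instance per render job, maps neatly onto modern tooling. Here is a minimal sketch of that lifecycle using boto3, the AWS SDK for Python; the AMI ID, instance type and tag are placeholder assumptions, and in 2008 Animoto would have been making raw EC2 API calls rather than using this library.

```python
# Sketch of the one-instance-per-job pattern Brown describes: initiate,
# utilize, terminate. The AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2")

def render_video(job_id: str) -> None:
    # Initiate: launch a dedicated instance for this one render job.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical render-worker image
        InstanceType="c5.large",           # placeholder size
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "job", "Value": job_id}],
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]

    # Utilize: wait for boot; the instance would pull the job and render it.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # Terminate: give the capacity back the moment the job is done.
    ec2.terminate_instances(InstanceIds=[instance_id])
```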

At that point, though, Jefferson said his company wasn’t merely trusting EC2’s marketing; it was on the phone regularly with AWS executives making sure the service wouldn’t collapse under the increasing demand. “And the biggest thing was, can you get us more servers, we need more servers. To their credit, I don’t know how they did it — if they took away processing power from their own website or others — but they were able to get us where we needed to be. And then we were able to get through that spike and then sort of things naturally calmed down,” he said.

The story of keeping Animoto online became a key selling point for AWS, and Amazon was actually the first company to invest in the startup besides friends and family. Animoto raised a total of $30 million along the way, with its last funding coming in 2011. Today, the company is more of a B2B operation, helping marketing departments easily create videos.

While Jefferson didn’t discuss specifics concerning costs, he pointed out that the price of trying to maintain servers that would sit dormant much of the time was not a tenable approach for his company. Cloud computing turned out to be the perfect model and Jefferson says that his company is still an AWS customer to this day.

While the goal of cloud computing has always been to provide as much computing as you need on demand whenever you need it, this particular set of circumstances put that notion to the test in a big way.

Today, the idea of having trouble provisioning 3,400 instances seems quaint, especially when you consider that Amazon now launches some 60 million instances every day, but back then it was a huge challenge, and it helped show startups that the idea of elastic computing was more than theory.

All the reasons why you should launch a credit or debit card

By Ryan Lawler

Over the past two or three years, we’ve seen an explosion of new debit and credit card products come to market, from consumer and B2B fintech startups as well as from companies that we might not traditionally think of as players in the financial services industry.

On the consumer side, that means companies like Venmo or PayPal offering debit cards as a new way for users to spend funds in their accounts. In the B2B space, the availability of corporate card issuing by startups like Brex and Ramp has ushered in new expense and spend management options. And then there is the growth of branded credit and debit cards among brands and sports teams.

But if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition and that actual customer usage will probably depend on how sticky your service is and how valuable the rewards are that you offer to your most active users.

To learn more, TechCrunch spoke with executives from Marqeta, Expensify, Synctera and Cardless about the pros and cons of launching a card product. So without further ado, here are all the reasons you should think about doing so, and one big reason why you might not want to.

Because it’s (relatively) easy

Probably the biggest reason we’ve seen so many new fintech and non-fintech companies rush to offer debit and credit cards to customers is simply that it’s easier than ever for them to do so. The launch and success of businesses like Marqeta has made card issuance via API developer-friendly, lowering the barrier to entry significantly over the last half-decade.

“The reason why this is happening is because the ‘fintech 1.0 infrastructure’ has succeeded,” Salman Syed, Marqeta’s SVP and GM of North America, said. “When you’ve got companies like [ours] out there, it’s just gotten a lot easier to be able to put a card product out.”

While noting that there have been good options for card issuance and payment processing for at least the last five or six years, Expensify Chief Operating Officer Anu Muralidharan said that a proliferation of technical resources for other pieces of fintech infrastructure has made the process of greenlighting a card offering much easier over the years.

What It'll Take to Get Power Back in New Orleans After Ida

By Lily Hay Newman
It could take weeks to get the lights back on in parts of Louisiana, but the playbook for how to do it is clear.

How Amazon EC2 grew from a notion into a foundational element of cloud computing

By Ron Miller

Fifteen years ago this week, on August 25, 2006, AWS turned on the very first beta instance of EC2, its cloud-based virtual computers. Today cloud computing, and more specifically infrastructure as a service, is a staple of how businesses use computing, but at that moment it wasn’t a well-known or widely understood concept.

The EC in EC2 stands for Elastic Compute, and that name was chosen deliberately. The idea was to provide as much compute power as you needed to do a job, then shut it down when you no longer needed it — making it flexible like an elastic band. The launch of EC2 in beta was preceded by the beta release of S3 storage six months earlier, and both services marked the starting point in AWS’ cloud infrastructure journey.

You really can’t overstate what Amazon was able to accomplish with these moves. It was able to anticipate an entirely different way of computing and create a market and a substantial side business in the process. It took vision to recognize what was coming and the courage to forge ahead and invest the resources necessary to make it happen, something that every business could learn from.

The AWS origin story is complex, but it was about bringing the IT power of the Amazon business to others. Amazon at the time was not the business it is today, but it was still rather substantial and had to deal with massive fluctuations in traffic, such as on Black Friday, when its website would be flooded with visitors for a short but sustained period. While the goal of an e-commerce site, and indeed every business, is attracting as many customers as possible, keeping the site up under such stress takes some doing, and Amazon was learning how to do that well.

Those lessons and a desire to bring the company’s internal development processes under control would eventually lead to what we know today as Amazon Web Services, and that side business would help fuel a whole generation of startups. We spoke to Dave Brown, who is VP of EC2 today, and who helped build the first versions of the tech, to find out how this technological shift went down.

Sometimes you get a great notion

The genesis of the idea behind AWS started around 2000, when the company began looking at creating a set of services to simplify how it produced software internally. Eventually, it developed a set of foundational services — compute, storage and database — that every developer could tap into.

But the idea of selling that set of services really began to take shape at an executive offsite at Jeff Bezos’ house in 2003. A 2016 TechCrunch article on the origins of AWS described how that started to come together:

As the team worked, Jassy recalled, they realized they had also become quite good at running infrastructure services like compute, storage and database (due to those previously articulated internal requirements). What’s more, they had become highly skilled at running reliable, scalable, cost-effective data centers out of need. As a low-margin business like Amazon, they had to be as lean and efficient as possible.

They realized that those skills and abilities could translate into a side business that would eventually become AWS. It would take a while to put these initial ideas into action, but by December 2004, the company had opened an engineering office in South Africa to begin building what would become EC2. As Brown explains it, Amazon was looking to expand outside of Seattle at the time, and Chris Pinkham, who was a director in those days, hailed from South Africa and wanted to return home.

Monad emerges from stealth with $17M to solve the cybersecurity big data problem

By Carly Page

Cloud security startup Monad, which offers a platform for extracting and connecting data from various security tools, has launched from stealth with $17 million in Series A funding led by Index Ventures. 

Monad was founded on the belief that enterprise cybersecurity is a growing data management challenge, as organizations try to understand and interpret the masses of information siloed within disconnected logs and databases. Once an organization has extracted data from its security tools, Monad’s Security Data Platform enables it to centralize that data within a data warehouse of choice, and to normalize and enrich the data so that security teams have the insights they need to secure their systems and data effectively.

“Security is fundamentally a big data problem,” said Christian Almenar, CEO and co-founder of Monad. “Customers are often unable to access their security data in the streamlined manner that DevOps and cloud engineering teams need to build their apps quickly while also addressing their most pressing security and compliance challenges. We founded Monad to solve this security data challenge and liberate customers’ security data from siloed tools to make it accessible via any data warehouse of choice.”

The startup’s Series A funding round, which was also backed by Sequoia Capital, brings its total amount of investment raised to $19 million and comes 12 months after its Sequoia-led seed round. The funds will enable Monad to scale its development efforts for its security data cloud platform, the startup said.

Monad was founded in May 2020 by security veterans Christian Almenar and Jacolon Walker. Almenar previously co-founded serverless security startup Intrinsic, which was acquired by VMware in 2019, while Walker served as CISO and security engineer at OpenDoor, Collective Health and Palantir.

Elastic acquisition spree continues as it acquires security startup CMD

By Sean Michael Kerner

Just days after Elastic announced the acquisition of build.security, the company is making yet another security acquisition. As part of its second-quarter earnings announcement this afternoon, Elastic disclosed that it is acquiring Vancouver, Canada-based security vendor CMD. Financial terms of the deal are not being publicly disclosed.

CMD‘s technology provides runtime security for cloud infrastructure, helping organizations gain better visibility into processes that are running. The startup was founded in 2016 and has raised $21.6 million in funding to date. The company’s last round was a $15 million Series B that was announced in 2019, led by GV. 

Elastic CEO and co-founder Shay Banon told TechCrunch that his company will be welcoming the employees of CMD into his company, but did not disclose precisely how many would be coming over. CMD CEO and co-founder Santosh Krishan and his fellow co-founder Jake King will both be taking executive roles within Elastic.

Both build.security and CMD are set to become part of Elastic’s security organization. The two technologies will be integrated into the Elastic Stack platform that provides visibility into what an organization is running, as well as security insights to help limit risk. Elastic has been steadily growing its security capabilities in recent years, acquiring Endgame Security in 2019 for $234 million.

Banon explained that, as organizations increasingly move to the cloud and make use of Kubernetes, they are looking for more layers of introspection and protection for Linux. That’s where CMD’s technology comes in. CMD’s security service is built with an open source technology known as eBPF. With eBPF, it’s possible to hook into a Linux operating system for visibility and security control. Work is currently ongoing to extend eBPF for Windows workloads, as well.
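For a flavor of what that hooking looks like in practice, here is a minimal sketch using the bcc Python bindings for eBPF; it merely logs process executions, a toy stand-in for the deeper process visibility a product like CMD provides, not CMD's actual implementation. It requires Linux, root privileges and bcc installed.

```python
# Minimal eBPF sketch with bcc: attach a kprobe to the execve syscall
# and log the name of every process that starts on the host.
from bcc import BPF

program = r"""
#include <linux/sched.h>

int trace_exec(struct pt_regs *ctx) {
    char comm[TASK_COMM_LEN];
    bpf_get_current_comm(&comm, sizeof(comm));
    bpf_trace_printk("exec by %s\n", comm);
    return 0;
}
"""

b = BPF(text=program)
# Resolve the kernel's name for the execve entry point and hook it.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

print("Tracing execve calls... Ctrl-C to stop")
b.trace_print()  # stream the kernel trace pipe to stdout
```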

CMD isn’t the only startup building on eBPF. Isovalent, which announced a $29 million Series A round led by Andreessen Horowitz and Google in November 2020, is also active in the space. The Linux Foundation also recently announced the creation of an eBPF Foundation, with the participation of Facebook, Google, Microsoft, Netflix and Isovalent.

Fundamentally, Banon sees a clear alignment between what CMD was building and what Elastic aims to deliver for its users.

“We have a saying at Elastic – while you observe, why not protect?” Banon said. “With CMD, if you look at everything that they do, they also have this deep passion and belief that it starts with observability.”

It will take time for Elastic to integrate the CMD technology into the Elastic Stack, though it won’t be too long. Banon noted that one of the benefits of acquiring a startup is that it’s often easier to integrate than a larger, more established vendor.

“With all of these acquisitions that we make we spend time integrating them into a single product line,” Banon said.

That means Elastic needs to take the technology that other companies have built and fold it into its stack, and that can sometimes take time, Banon explained. He noted that it took two years to integrate the Endgame technology after that acquisition.

“Typically that lends itself to us joining forces with smaller companies with really innovative technology that can be more easily taken and integrated into our stack,” Banon said.

Insight Partners leads $30M round into Metabase, developing enterprise business intelligence tools

By Christine Hall

Open-source business intelligence company Metabase announced Thursday a $30 million Series B round led by Insight Partners.

Existing investors Expa and NEA joined in on the round, which gives the San Francisco-based company a total of $42.5 million in funding since it was founded in 2015. Metabase previously raised $8 million in Series A funding back in 2019, led by NEA.

Metabase was developed within venture studio Expa and spun out as an easy way for people to interact with data sets, co-founder and CEO Sameer Al-Sakran told TechCrunch.

“When someone wants access to data, they may not know what to measure or how to use it, all they know is they have the data,” Al-Sakran said. “We provide a self-service access layer where they can ask a question, Metabase scans the data and they can use the results to build models, create a dashboard and even slice the data in ways they choose without having an analyst build out the database.”

He notes that not much has changed in the business intelligence realm since Tableau came out more than 15 years ago, and that computers can do more for the end user, particularly in understanding what the user is going to do. Increasingly, open source is the way software and information want to be consumed, especially for the person who just wants to pull the data themselves, he added.

George Mathew, managing director of Insight Partners, believes we are seeing the third generation of business intelligence tools emerging, following centralized enterprise architectures like SAP, then self-service tools like Tableau and Looker, and now companies like Metabase that can get users to discovery and insights quickly.

“The third generation is here and they are leading the charge to insights and value,” Mathew added. “In addition, the world has moved to the cloud, and BI tools need to move there, too. This generation of open source is a better and greater example of all three of those.”

To date, Metabase has been downloaded 98 million times and used by more than 30,000 companies across 200 countries. The company pursued another round of funding after building out a commercial offering, Metabase Enterprise, that is doing well, Al-Sakran said.

The new funding round enables the company to build out a sales team and continue product development on both Metabase Enterprise and Metabase Cloud. Because Metabase is often someone’s first business intelligence tool, Al-Sakran is also doubling down on resources to help educate customers on how to ask questions and learn from their data.

“Open source has changed from floppy disks to projects on the cloud, and we think end users have the right to see what they are running,” Al-Sakran said. “We are continuing to create new features and improve performance and overall experience in efforts to create the BI system of the future.”


There could be more to the Salesforce+ video streaming service than meets the eye

By Ron Miller

When Salesforce announced its new business video streaming service called Salesforce+ this week, everyone had a reaction. While not all of it was positive, some company watchers also wondered if there was more to this announcement than meets the eye.

If you look closely, the new initiative suggests that Salesforce wants to take a bite out of LinkedIn and other SaaS content platforms and publishers. The video streaming service could be a launch point for a broader content platform, where its partners are producing their own content and using Salesforce+ infrastructure to help them advertise to and cultivate their own customers.

The company has, after all, done exactly this sort of thing with its online marketplaces and industry events to great success. Salesforce generated almost $6 billion in revenue in its most recent quarterly earnings report. That mostly comes from selling its sales, marketing and service software, not any kind of content production, but it has lots of experience putting on Dreamforce, its massive annual customer event, as well as smaller events around the world throughout the year.


On its face, Salesforce+ is a giant, ambitious and quite expensive content marketing play. The company reportedly has hired a large professional staff to produce and manage the content, and has built a broadcasting and production studio designed to produce quality shows in-house. It believes that by launching with content from Dreamforce, its highly successful customer conference attended by tens of thousands of people every year pre-pandemic, it can prime the viewing pump and build audience momentum, perhaps even using celebrities, as it often does at its events, to drive viewership. It is less clear about the long-term business goals.

Disaster recovery can be an effective way to ease into the cloud

By Ram Iyer
Jeff Ton Contributor
Jeff Ton is the founder of Ton Enterprises and strategic IT adviser to InterVision, a leading strategic service provider and premier consulting partner in the Amazon Web Services (AWS) Partner Network (APN).

Operating in the cloud is soon going to be a reality for many businesses whether they like it or not. Points of contention with this shift often arise from unfamiliarity and discomfort with cloud operations. However, cloud migrations don’t have to be a full lift and shift.

Instead, leaders unfamiliar with the cloud should start by moving their disaster recovery program to the cloud, which helps them gain familiarity and understanding before a full migration of production workloads.

What is DRaaS?

Disaster recovery as a service (DRaaS) is cloud-based disaster recovery delivered as a service to organizations in a self-service, partially managed or fully managed service model. The agility of DR in the cloud affords businesses a geographically diverse location to fail over operations and run as close to normal as possible following a disruptive event. DRaaS emphasizes speed of recovery so that this failover is as seamless as possible. Plus, technology teams can offload some of the more burdensome aspects of maintaining and testing their disaster recovery.

When it comes to disaster recovery testing, allow for extra time to let your IT staff learn the ins and outs of the cloud environment.

DRaaS is a perfect candidate for a first step into the cloud for five main reasons:

  • Using DRaaS helps leaders get accustomed to the ins and outs of the cloud before conducting a full production shift.
  • Testing cycles of the DRaaS solution allow IT teams to see firsthand how their applications will operate in a cloud environment, enabling them to identify the applications that will need a full or partial refactor before migrating to the cloud.
  • With DRaaS, technology leaders can demonstrate an early win in the cloud without risking full production.
  • DRaaS success helps gain full buy-in from stakeholders, board members and executives.
  • The replication tools that DRaaS uses are sometimes the same tools used to migrate workloads for production environments — this helps the technology team practice its cloud migration strategy.

Steps to start your DRaaS journey to the cloud

Define your strategy

Do your research to determine if DRaaS is right for you given your long-term organizational goals. You don’t want to start down a path to one cloud environment if that cloud isn’t aligned with your company’s objectives, both for the short and long term. Having cross-functional conversations among business units and with company executives will assist in defining and iterating your strategy.

VCs are betting big on Kubernetes: Here are 5 reasons why

By Ram Iyer
Ben Ofiri Contributor
Ben Ofiri is the co-founder and CEO of the Kubernetes troubleshooting platform Komodor. He previously worked at Google, where he served as product lead for the company’s flagship conversational AI project, Google Duplex.

I worked at Google for six years. Internally, you have no choice — you must use Kubernetes if you are deploying microservices and containers (it’s actually not called Kubernetes inside of Google; it’s called Borg). But what was once solely an internal project at Google has since been open-sourced and has become one of the most talked about technologies in software development and operations.

For good reason. One person with a laptop can now accomplish what used to take a large team of engineers. At times, Kubernetes can feel like a superpower, but with all of the benefits of scalability and agility comes immense complexity. The truth is, very few software developers truly understand how Kubernetes works under the hood.

I like to use the analogy of a watch. From the user’s perspective, it’s very straightforward until it breaks. To actually fix a broken watch requires expertise most people simply do not have — and I promise you, Kubernetes is much more complex than your watch.

How are most teams solving this problem? The truth is, many of them aren’t. They often adopt Kubernetes as part of their digital transformation only to find out it’s much more complex than they expected. Then they have to hire more engineers and experts to manage it, which in a way defeats its purpose.

Where you see containers, you see Kubernetes to help with orchestration. According to Datadog’s most recent report about container adoption, nearly 90% of all containers are orchestrated.

All of this means there is a great opportunity for DevOps startups to come in and address the different pain points within the Kubernetes ecosystem. This technology isn’t going anywhere, so any platform or tooling that helps make it more secure, simple to use and easy to troubleshoot will be well appreciated by the software development community.

In that sense, there’s never been a better time for VCs to invest in this ecosystem. It’s my belief that Kubernetes is becoming the new Linux: 96.4% of the top million web servers’ operating systems are Linux. Similarly, Kubernetes is trending to become the de facto operating system for modern, cloud-native applications. It is already the most popular open-source project within the Cloud Native Computing Foundation (CNCF), with 91% of respondents using it — a steady increase from 78% in 2019 and 58% in 2018.

While the technology is proven and adoption is skyrocketing, there are still some fundamental challenges that will undoubtedly be solved by third-party solutions. Let’s go deeper and look at five reasons why we’ll see a surge of startups in this space.


Containers are the go-to method for building modern apps

Docker revolutionized how developers build and ship applications. Container technology has made it easier to move applications and workloads between clouds. It also provides as much resource isolation as a traditional hypervisor, but with considerable opportunities to improve agility, efficiency and speed.
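As a tiny illustration of that portability, here is a sketch using the Docker SDK for Python (docker-py); it assumes a local Docker daemon and a public base image, and the same few lines run unchanged on a laptop, an on-prem VM or a cloud instance.

```python
# Sketch: run a throwaway, isolated container with the Docker SDK for Python.
# Assumes the Docker daemon is running locally and can pull public images.
import docker

client = docker.from_env()

# The same image runs identically wherever Docker runs, which is what makes
# moving workloads between environments and clouds straightforward.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from an isolated container')"],
    remove=True,  # delete the container once it exits
)
print(output.decode().strip())
```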

Cloud infrastructure market kept growing in Q2, reaching $42B

By Ron Miller

It’s often said in baseball that a prospect has a high ceiling, reflecting the tremendous potential of a young player with plenty of room to get better. The same could be said for the cloud infrastructure market, which just keeps growing with little sign of slowing down any time soon. The market hit $42 billion in total revenue with all major vendors reporting, up $2 billion from Q1.

Synergy Research reports that revenue grew at a speedy 39% clip, the fourth consecutive quarter in which the growth rate has increased. AWS led the way per usual, but Microsoft continued growing at a rapid pace and Google also kept the momentum going.

AWS continues to defy market logic, actually increasing its growth rate by five percentage points over the previous quarter to 37%, an amazing feat for a company with the market maturity of AWS. That accounted for $14.81 billion in revenue for Amazon’s cloud division, putting it close to a $60 billion run rate, good for a market-leading 33% share. While that share has remained fairly steady for a number of years, the revenue continues to grow as the market pie grows ever larger.

Microsoft grew even faster at 51%, and while Microsoft’s cloud infrastructure data isn’t always easy to nail down, its 20% market share, according to Synergy Research, puts it at $8.4 billion in revenue, up from $7.8 billion last quarter, as it continues to push upward.

Google, too, continued its slow and steady progress under the leadership of Thomas Kurian, leading the Big 3 in growth with a 54% increase in cloud revenue in Q2. Its $4.2 billion in revenue, up from $3.5 billion last quarter, was good for 10% market share, the first time Google Cloud has reached double digits in Synergy’s quarterly tracking data.

Synergy Research cloud infrastructure market share chart.

Image Credits: Synergy Research

After the Big 3, Alibaba held steady at 6%, as in Q1 (though it only reports this week), with IBM falling a point from Q1 to 4% as Big Blue continues to struggle in pure infrastructure while it transitions to more of a hybrid cloud management player.

John Dinsdale, chief analyst at Synergy, says that the big three are spending big to help fuel this growth. “Amazon, Microsoft and Google in aggregate are typically investing over $25 billion in capex per quarter, much of which is going towards building and equipping their fleet of over 340 hyperscale data centers,” he said in a statement.

Meanwhile, Canalys reported similar numbers, though it saw the overall market slightly higher at $47 billion. Its market share figures broke down to Amazon with 31%, Microsoft with 22% and Google with 8% of that total.

Canalys analyst Blake Murray says that part of the reason companies are shifting workloads to the cloud is to help achieve environmental sustainability goals, as the cloud vendors are working toward using more renewable energy to run their massive data centers.

“The best practices and technology utilized by these companies will filter to the rest of the industry, while customers will increasingly use cloud services to relieve some of their environmental responsibilities and meet sustainability goals,” Murray said in a statement.

Regardless of whether companies are moving to the cloud to get out of the data center business or because they hope to piggyback on the sustainability efforts of the big 3, companies are continuing a steady march to the cloud. With some estimates of worldwide cloud usage at around 25%, the potential for continued growth remains strong, especially with many markets still untapped outside the U.S.

That bodes well for the big three and for other smaller operators who can find a way to tap into slices of market share that add up to big revenue. “There remains a wealth of opportunity for smaller, more focused cloud providers, but it can be hard to look away from the eye-popping numbers coming out of the big three,” Dinsdale said.

In fact, it’s hard to see the ceiling for these companies any time in the foreseeable future.

Tech leaders can be the secret weapon for supercharging ESG goals

By Ram Iyer
Jeff Sternberg Contributor
Jeff Sternberg is a technical director in the Office of the CTO (OCTO) at Google Cloud, a team of technologists and industry experts that help Google Cloud's customers solve challenging problems and disrupt their industries.

Environmental, social and governance (ESG) factors should be key considerations for CTOs and technology leaders scaling next generation companies from day one. Investors are increasingly prioritizing startups that focus on ESG, with the growth of sustainable investing skyrocketing.

What’s driving this shift in mentality across every industry? It’s simple: Consumers are no longer willing to support companies that don’t prioritize sustainability. According to a survey conducted by IBM, the COVID-19 pandemic has elevated consumers’ focus on sustainability and their willingness to pay out of their own pockets for a sustainable future. In tandem, federal action on climate change is increasing, with the U.S. rejoining the Paris Climate Agreement and a recent executive order on climate commitments.

Over the past few years, we have seen an uptick in organizations setting long-term sustainability goals. However, CEOs and chief sustainability officers typically forecast these goals, and they are often long term and aspirational — leaving the near and midterm implementation of ESG programs to operations and technology teams.


CTOs are a crucial part of the planning process, and in fact, can be the secret weapon to help their organization supercharge their ESG targets. Below are a few immediate steps that CTOs and technology leaders can take to achieve sustainability and make an ethical impact.

Reducing environmental impact

As more businesses digitize and more consumers use devices and cloud services, the energy needed by data centers continues to rise. In fact, data centers account for an estimated 1% of worldwide electricity usage. However, a forecast from IDC shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide from 2021 through 2024.

Make compute workloads more efficient: First, it’s important to understand the links between computing, power consumption and greenhouse gas emissions from fossil fuels. Making your app and compute workloads more efficient will reduce costs and energy requirements, thus reducing the carbon footprint of those workloads. In the cloud, tools like compute instance auto scaling and sizing recommendations make sure you’re not running too many or overprovisioned cloud VMs based on demand. You can also move to serverless computing, which does much of this scaling work automatically.
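To make the auto scaling point concrete, here is a minimal sketch using boto3, the AWS SDK for Python; the group name and the 50% CPU target are illustrative assumptions, and other clouds expose equivalent controls.

```python
# Sketch: a target-tracking scaling policy for an EC2 Auto Scaling group.
# The group name and target value are assumptions for illustration only.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-workers",          # hypothetical group
    PolicyName="track-average-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Hold average CPU near 50%: add VMs under load, and just as
        # importantly, remove them when demand (and energy use) drops.
        "TargetValue": 50.0,
    },
)
```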

Deploy compute workloads in regions with lower carbon intensity: Until recently, choosing cloud regions meant considering factors like cost and latency to end users. But carbon is another factor worth considering. While the compute capabilities of regions are similar, their carbon intensities typically vary. Some regions have access to more carbon-free energy production than others, and consequently the carbon intensity for each region is different.

So, choosing a cloud region with lower carbon intensity is often the simplest and most impactful step you can take. Alistair Scott, co-founder and CTO of cloud infrastructure startup Infracost, underscores this sentiment: “Engineers want to do the right thing and reduce waste, and I think cloud providers can help with that. The key is to provide information in workflow, so the people who are responsible for infra provisioning can weigh the CO2 impact versus other factors such as cost and data residency before they deploy.”

Another step is to estimate your specific workload’s carbon footprint using open-source software like Cloud Carbon Footprint, a project sponsored by ThoughtWorks. Etsy has open-sourced a similar tool, called Cloud Jewels, that estimates energy consumption based on cloud usage information, which is helping the company track progress toward its target of reducing energy intensity by 25% by 2025.
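For a feel of what such an estimate involves, here is a minimal sketch in the spirit of those tools; every coefficient below is an illustrative assumption, not a published figure, and real tools derive them from measured, per-provider research.

```python
# Rough sketch of estimating a workload's carbon footprint from usage data.
# All coefficients are illustrative assumptions for the example.

WATTS_PER_VCPU = 4.0          # assumed average draw per vCPU (watts)
PUE = 1.2                     # assumed data center power usage effectiveness
GRID_KG_CO2_PER_KWH = {       # assumed regional grid carbon intensity
    "us-east": 0.45,
    "eu-north": 0.05,
}

def estimate_kg_co2(vcpu_hours: float, region: str) -> float:
    kwh = vcpu_hours * WATTS_PER_VCPU / 1000 * PUE
    return kwh * GRID_KG_CO2_PER_KWH[region]

# The same workload in two regions: the lower-carbon region wins by ~9x.
for region in GRID_KG_CO2_PER_KWH:
    print(region, round(estimate_kg_co2(10_000, region), 2), "kg CO2e")
```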

Make social impact

Beyond reducing environmental impact, CTOs and technology leaders can have significant, direct and meaningful social impact.

Include societal benefits in the design of your products: As a CTO or technology founder, you can help ensure that societal benefits are prioritized in your product roadmaps. For example, if you’re a fintech CTO, you can add product features to expand access to credit in underserved populations. Startups like LoanWell are on a mission to increase access to capital for those typically left out of the financial system and make the loan origination process more efficient and equitable.

When thinking about product design, remember that a product needs to be as useful and effective as it is sustainable. By treating sustainability and societal impact as core elements of product innovation, you have an opportunity to differentiate yourself in socially beneficial ways. For example, Lush has been a pioneer of package-free solutions, and launched Lush Lens — a virtual package app leveraging cameras on mobile phones and AI to overlay product information. The company hit 2 million scans in its efforts to tackle the beauty industry’s excessive use of (plastic) packaging.

Responsible AI practices should be ingrained in the culture to avoid social harms: Machine learning and artificial intelligence have become central to the advanced, personalized digital experiences everyone is accustomed to — from product and content recommendations to spam filtering, trend forecasting and other “smart” behaviors.

It is therefore critical to incorporate responsible AI practices, so that the benefits of AI and ML can be realized by your entire user base and inadvertent harm can be avoided. Start by establishing clear principles for working with AI responsibly, and translate those principles into processes and procedures. Think about AI responsibility reviews the same way you think about code reviews, automated testing and UX design. As a technical leader or founder, you get to establish what the process is.

Impact governance

Promoting governance does not stop with the board and CEO; CTOs play an important role, too.

Create a diverse and inclusive technology team: Compared to individual decision-makers, diverse teams make better decisions 87% of the time. Additionally, Gartner research found that in a diverse workforce, performance improves by 12% and intent to stay by 20%.

It is important to reinforce and demonstrate why diversity, equity and inclusion is important within a technology team. One way you can do this is by using data to inform your DEI efforts. You can establish a voluntary internal program to collect demographics, including gender, race and ethnicity, and this data will provide a baseline for identifying diversity gaps and measuring improvements. Consider going further by baking these improvements into your employee performance process, such as objectives and key results (OKRs). Make everyone accountable from the start, not just HR.

These are just a few of the ways CTOs and technology leaders can contribute to ESG progress in their companies. The first step, however, is to recognize the many ways you as a technology leader can make an impact from day one.

Platform as a service startup Porter aims to become go-to for deploying, managing cloud-based apps

By Christine Hall

By the time Porter co-founders Trevor Shim and Justin Rhee decided to build a company around DevOps, the pair were well versed in doing remote development on Kubernetes. And like other users, they were consistently getting burnt by the technology.

They realized that while the technology was there, for all of its benefits, users were having to manage the complexity of hosting solutions as well as incur the costs associated with a big DevOps team, Rhee told TechCrunch.

They decided to build a solution externally and went through Y Combinator’s Summer 2020 batch, where they found other startup companies trying to do the same.

Today, Porter announced $1.5 million in seed funding from Venrock, Translink Capital, Soma Capital and several angel investors. Its goal is to build a platform as a service that any team can use to manage applications in its own cloud, essentially delivering the full flexibility of Kubernetes through a Heroku-like experience.

Why Heroku? It is the hosting platform that developers are used to, and not just at small companies, but at later-stage companies as well. When those companies want to move to Amazon Web Services, Google Cloud or DigitalOcean, Porter will be that bridge, Shim said.

However, while Heroku is still popular, the pair said companies are thinking the platform is getting outdated because it is standing still technology-wise. Each year, companies move on from the platform due to technical limitations and cost, Rhee said.

A big part of the bet Porter is taking is not charging users for hosting; its pricing is that of a pure SaaS product, he said. The founders aren’t looking to be resellers, so companies can use their own cloud: Porter provides the automation, and users can pay with their AWS and GCP credits, which gives them flexibility.

A common pattern is a move into Kubernetes, but “the zinger we talk about,” Shim added, is that if Heroku were built in 2021, it would have been built on Kubernetes.

“So we see ourselves as a successor’s successor,” he said.

To be that bridge, the company will use the new funding to increase its engineering bandwidth, with the goal of “becoming the de facto standard for all startups,” Shim said.

Porter’s platform went live in February, and within six months it became the sixth-fastest-growing open-source platform download on GitHub, said Ethan Batraski, partner at Venrock. He met the company through YC and was “super impressed with Rhee’s and Shim’s vision.”

“Heroku has 100,000 developers, but I believe it has stagnated,” Batraski added. “Porter already has 100 startups on its platform. The growth they’ve seen — four or five times — is what you want to see at this stage.”

His firm has long focused on data infrastructure and is seeing the stack get more complex, Batraski said: “At the same time, more developers are wanting to build out an app over a week, and scale it to millions of users, but that takes people resources. With Kubernetes it can turn everyone into an expert developer without them knowing it.”

4 key areas SaaS startups must address to scale infrastructure for the enterprise

By Ram Iyer
Prashant Pandey Contributor
Prashant Pandey is the head of engineering at Asana, a leading work management platform for teams. Prior to Asana, Prashant started and led the Bay Area team building Amazon DynamoDB, a fully managed NoSQL database service.

Startups and SMBs are usually the first to adopt many SaaS products. But as these customers grow in size and complexity — and as you rope in larger organizations — scaling your infrastructure for the enterprise becomes critical for success.

Below are four tips on how to advance your company’s infrastructure to support and grow with your largest customers.

Address your customers’ security and reliability needs

If you’re building SaaS, odds are you’re holding very important customer data. Regardless of what you build, that makes you a threat vector for attacks on your customers. While security is important for all customers, the stakes certainly get higher the larger they grow.

Given the stakes, it’s paramount to build infrastructure, products and processes that address your customers’ growing security and reliability needs. That includes the ethical and moral obligation you have to make sure your systems and practices meet and exceed any claim you make about security and reliability to your customers.

Here are security and reliability requirements large customers typically ask for:

Formal SLAs around uptime: If you’re building SaaS, customers expect it to be available all the time. Large customers using your software for mission-critical applications will expect to see formal SLAs in contracts committing to 99.9% uptime or higher. As you build infrastructure and product layers, you need to be confident in your uptime and be able to measure it on a per-customer basis so you know if you’re meeting your contractual obligations.
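As a rough illustration of the per-customer uptime math, here is a minimal sketch in Python; the 99.9% target and the incident durations are assumptions for the example.

```python
# Sketch: compute a customer's monthly uptime percentage from downtime
# windows and check it against a contractual SLA. Numbers are illustrative.
from datetime import timedelta

SLA_TARGET = 99.9  # percent, per the hypothetical contract

month = timedelta(days=30)
# Downtime windows that affected this customer during the month (assumed).
incidents = [timedelta(minutes=12), timedelta(minutes=31)]

downtime = sum(incidents, timedelta())
uptime_pct = 100 * (1 - downtime / month)

print(f"uptime: {uptime_pct:.3f}%")  # -> 99.900% (43 min of a 43,200-min month)
print("SLA met" if uptime_pct >= SLA_TARGET else "SLA breached")
```

A 99.9% monthly SLA leaves an error budget of roughly 43 minutes, which is why the two assumed incidents above land right at the line.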

While it’s hard to prioritize asks from your largest customers, you’ll find that their collective feedback will pull your product roadmap in a specific direction.

Real-time status of your platform: Most larger customers will expect to see your platform’s historical uptime and have real-time visibility into events and incidents as they happen. As you mature and specialize, creating this visibility for customers also drives more collaboration between your customer operations and infrastructure teams. This collaboration is valuable to invest in: it provides insight into how customers are experiencing a particular degradation in your service and allows you to communicate back what you have found so far and what your ETA is.

Backups: As your customers grow, be prepared for expectations around backups — not just in terms of how long it takes to recover the whole application, but also around backup periodicity, location of your backups and data retention (e.g., are you holding on to the data too long?). If you’re building your backup strategy, thinking about future flexibility around backup management will help you stay ahead of these asks.

Microsoft’s cyber startup spending spree continues with CloudKnox acquisition

By Carly Page

Microsoft has acquired identity and access management (IAM) startup CloudKnox Security, the tech giant’s fourth cybersecurity acquisition this year.

The deal, the terms of which were not disclosed, is the latest cybersecurity acquisition by Microsoft, which just last week announced that it’s buying threat intelligence startup RiskIQ. The firm also recently acquired IoT security startups CyberX and Refirm Labs as it moved to beef up its security portfolio. Security is big business for Microsoft, which made more than $10 billion in security-related revenue in 2020 — a 40% increase from the year prior.

CloudKnox, which was founded in 2015 and emerged from stealth two years later, helps organizations to enforce least-privilege principles to reduce risk and help prevent security breaches. The startup had raised $22.8 million prior to the acquisition, with backing from ClearSky, Sorenson Ventures, Dell Technologies Capital, and Foundation Capital. 

The company’s activity-based authorization service will equip Azure Active Directory customers with “granular visibility, continuous monitoring and automated remediation for hybrid and multi-cloud permissions,” according to a blog post by Joy Chik, corporate vice president of identity at Microsoft. 

Chik said that while organizations were reaping the benefits of cloud adoption, particularly as they embrace flexible working models, they often struggled to assess, prevent and enforce privileged access across hybrid and multi-cloud environments.

“CloudKnox offers complete visibility into privileged access,” Chik said. “It helps organizations right-size permissions and consistently enforce least-privilege principles to reduce risk, and it employs continuous analytics to help prevent security breaches and ensure compliance. This strengthens our comprehensive approach to cloud security.”

In addition to Azure Active Directory, Microsoft also plans to integrate CloudKnox with its other cloud security services including 365 Defender, Azure Defender, and Azure Sentinel.

Commenting on the deal, Balaji Parimi, CloudKnox founder and CEO, said: “By joining Microsoft, we can unlock new synergies and make it easier for our mutual customers to protect their multi-cloud and hybrid environments and strengthen their security posture.”

The Zoom-Five9 deal is a big bet for the video conferencing company

By Alex Wilhelm

Zoom, a well-known video conferencing company, will buy Five9, a company that sells software allowing users to reach customers across platforms and record notes on their interactions. As TechCrunch noted this morning, the deal is merely “Zoom’s latest attempt to expand its offerings,” having “added several office collaboration products, a cloud phone system, and an all-in-one home communications appliance” to its larger software stack in recent quarters. Both companies are publicly traded.

But the Five9 deal is in a different league than its previous purchases. Indeed, the $14.7 billion transaction represents a material percentage of Zoom’s own value. That tells us that the company is not simply making a purchase in Five9, but is instead making a large bet that the combination of its business and that of the smaller company will prove rather accretive.

Zoom is worth $101.8 billion as of the time of writing, with the company’s shares slipping just over 4% today; the stock market is broadly down this morning, making Zoom’s share price movements less indicative of investor reaction to the deal than we might think. Still, it doesn’t appear that the street is excessively thrilled by news of Zoom’s purchase.

That perspective may be reasonable, given that the Five9 transaction is worth nearly 15% of Zoom’s total market cap; the company is betting a little less than a sixth of its value on a single wager.
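For reference, the back-of-the-envelope math behind that figure, using the numbers cited above:

```python
# Deal size as a share of Zoom's market cap, per the figures in this article.
deal_value = 14.7e9     # Five9 transaction ($)
market_cap = 101.8e9    # Zoom market cap at time of writing ($)
print(f"{deal_value / market_cap:.1%}")  # -> 14.4%, a bit under one-sixth
```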

Not that Five9 doesn’t bring a lot to the table. In its most recent quarter, Five9 posted $138 million in total revenue, growth of 45% on a year-over-year basis.

Still, as Zoom reported in an investor deck concerning the transaction, the smaller company’s growth rate pales compared to its own:

Image Credits: Zoom investor deck

This is where the deal gets interesting. Note that Five9’s revenue growth rate is a fraction of Zoom’s. The larger company, then, is buying a piece of revenue that is growing slower than its core business. That’s a bit of a flip from many transactions that we see, in which the smaller company being acquired is growing faster than the acquiring entity’s own operations.

Why would Zoom buy slower growth for so very much money? One thing to consider is that Five9’s most recent quarterly growth rate is quicker than the growth rate that it posted over the last 12 months. That implies that Five9 has room to accelerate growth compared to its historical pace, bringing its total pace of top-line expansion closer to what Zoom itself manages.
