FreshRSS


Thoma Bravo acquires Flexera for second time paying $2.85B

By Ron Miller

Thoma Bravo must really like Flexera, an IT asset management company out of Chicago. The private equity firm bought the company for the second time today. Sources told TechCrunch the price was $2.85 billion.

Technically, Thoma Bravo is getting a majority stake in the company, buying it from previous owners TA Associates and Ontario Teachers’ Pension Plan Board. The firm originally bought Flexera from Macrovision in 2008 for just $200 million, then sold it just three years later, in 2011, for a $1 billion profit, according to reports.

Reports last year had the company’s investors seeking $3 billion. They didn’t quite reach that mark, but it’s still a hefty profit as the company continues to change hands, giving each of its owners a substantial return on investment.

At $2.85 billion, Thoma Bravo will have a bigger challenge on its hands to make that same kind of return, but it sees a company it liked before and still likes, especially the management team, which remains at least partly intact.

“Jim [Ryan] and his team have positioned Flexera for sustained growth by focusing on the strategic challenges enterprises face with complex IT infrastructures,” Seth Boro, managing partner at Thoma Bravo said in a statement.

Ryan was pleased to see the company’s value continue to rise and to connect once again with Thoma Bravo. “This is a resounding vote of confidence in the growth Flexera has shown and the strategic initiatives we’ve undertaken to address the exponential challenges faced by organizations today,” he said in a statement.

Flexera was founded in 2008 and has bought 12 companies along the way, including five in the last couple of years, according to Crunchbase data. The deal is expected to close in the first quarter of next year, subject to regulatory approvals.

Helping big banks out-Affirm Affirm and out-Chime Chime gives Amount a $681 million valuation

By Jonathan Shieber

Amount, a new service that helps traditional banks compete in a digital world, has raised $81 million from none other than Goldman Sachs as it looks to help legacy fintech players compete with their more nimble digital counterparts.

The company, which spun out from the startup lending company Avant in January of this year, has already inked deals with Banco Popular, HSBC, Regions Bank and TD Bank to power their digital banking services and offer products like point-of-sale lending to compete with challenger banks like Chime and lenders like Affirm or Klarna.

“Most banks are looking for resources and infrastructure to accelerate their digital strategy and meet the demands of today’s consumer,” said Jade Mandel, a Vice President in Goldman Sachs’ growth equity platform, GS Growth, who will be joining the Board of Directors at Amount, in a statement. “Amount enables banks to navigate digital transformation through its modular and mobile-first platform for financial products. We’re excited to partner with the team as they take on this compelling market opportunity.”

Complementing those customer-facing services is deep expertise in back-end fraud prevention, which helps banks provide more loans with less risk than competitors, according to chief executive Adam Hughes.

It’s the combination of these three services that led Goldman to take point on a new $81 million investment in the company, with participation from previous investors August Capital, Invus Opportunities and Hanaco Ventures — giving Amount a post-money valuation of $681 million and bringing the company’s total capital raised in 2020 to a whopping $140 million.

Think of Amount as a white-labeled digital banking service provider for luddite banks that hadn’t upgraded their services to keep pace with demands of a new generation of customers or the COVID-19 era of digital-first services for everything.

Banks pay a pretty penny for access to Amount’s services. On top of a percentage of any loans the bank processes through Amount, there’s an up-front implementation fee that typically averages $1 million.

The hefty price tag is a sign of how concerned banks are about their digital challengers. Hughes said that they’ve seen a big uptick in adoption since the launch of their buy-now-pay-later product designed to compete with fast-growing startups like Affirm and Klarna.

Indeed, by offering banks these services, Amount gives Klarna and Affirm something to worry about. That’s because banks conceivably have a lower cost of capital than the startups and can offer better rates to borrowers. They also have the balance sheet capacity to approve more loans than either of the two upstart lenders.

 “Amount has the wind at its back and the industry is taking notice,” said Nigel Morris, the co-founder of CapitalOne and an investor in Amount through the firm QED Investors. “The latest round brings Amount’s total capital raised in 2020 to nearly $140M, which will provide for additional investments in platform research and development while accelerating the company’s go-to-market strategy. QED is thrilled to be a part of Amount’s story and we look forward to the company’s future success as it plays a vital role in the digitization of financial services.”

FT Partners served as advisor to Amount on this transaction.

Fylamynt raises $6.5M for its cloud workflow automation platform

By Frederic Lardinois

Fylamynt, a new service that helps businesses automate their cloud workflows, today announced both the official launch of its platform as well as a $6.5 million seed round. The funding round was led by Google’s AI-focused Gradient Ventures fund. Mango Capital and Point72 Ventures also participated.

At first glance, the idea behind Fylamynt may sound familiar. Workflow automation has become a pretty competitive space, after all, and the service helps developers connect their various cloud tools to create repeatable workflows. We’re not talking about your standard IFTTT- or Zapier-like integrations between SaaS products, though. The focus of Fylamynt is squarely on building infrastructure workflows. While that may sound familiar, too, with tools like Ansible and Terraform automating a lot of that already, Fylamynt sits on top of those and integrates with them.

Image Credits: Fylamynt

“Some time ago, we used to do Bash and scripting — and then [ … ] came Chef and Puppet in 2006, 2007. SaltStack, as well. Then Terraform and Ansible,” Fylamynt co-founder and CEO Pradeep Padala told me. “They have all done an extremely good job of making it easier to simplify infrastructure operations so you don’t have to write low-level code. You can write a slightly higher-level language. We are not replacing that. What we are doing is connecting that code.”

So if you have a Terraform template, an Ansible playbook and maybe a Python script, you can now use Fylamynt to connect those. In the end, Fylamynt becomes the orchestration engine to run all of your infrastructure code — and then allows you to connect all of that to the likes of Datadog, Splunk, PagerDuty, Slack and ServiceNow.

Image Credits: Fylamynt

The service currently connects to Terraform, Ansible, Datadog, Jira, Slack, Instance, CloudWatch, CloudFormation and your Kubernetes clusters. The company notes that some of the standard use cases for its service are automated remediation, governance and compliance, as well as cost and performance management.

The company is already working with a number of design partners, including Snowflake.

Fylamynt CEO Padala has quite a bit of experience in the infrastructure space. He co-founded ContainerX, an early container-management platform, which later sold to Cisco. Before starting ContainerX, he was at VMware and DOCOMO Labs. His co-founders, VP of Engineering Xiaoyun Zhu and CTO David Lee, also have deep expertise in building out cloud infrastructure and operating it.

“If you look at any company — any company building a product — let’s say a SaaS product, and they want to run their operations, infrastructure operations very efficiently,” Padala said. “But there are always challenges. You need a lot of people, it takes time. So what is the bottleneck? If you ask that question and dig deeper, you’ll find that there is one bottleneck for automation: that’s code. Someone has to write code to automate. Everything revolves around that.”

Fylamynt aims to take the effort out of that by allowing developers to either write Python and JSON to automate their workflows (think “infrastructure as code” but for workflows) or to use Fylamynt’s visual no-code drag-and-drop tool. As Padala noted, this gives developers a lot of flexibility in how they want to use the service. If you never want to see the Fylamynt UI, you can go about your merry coding ways, but chances are the UI will allow you to get everything done as well.
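As a minimal sketch of that idea, with every name hypothetical (the article doesn’t show Fylamynt’s actual SDK or file formats): a workflow defined as plain JSON, where each step delegates to an existing automation tool and an orchestrator runs the steps in order.

```python
# Illustrative only: all names here are invented. The sketch just captures
# the idea of an orchestration layer that chains existing automation tools
# (Terraform, Ansible, plain scripts) rather than replacing them.
import json

# A workflow as plain JSON: each step names a tool and its input artifact.
WORKFLOW_JSON = """
[
  {"tool": "terraform", "target": "network.tf"},
  {"tool": "ansible",   "target": "configure.yml"},
  {"tool": "python",    "target": "notify.py"}
]
"""

def run_step(step):
    # A real system would shell out to each tool here;
    # we just record what would run, in order.
    return f"{step['tool']}:{step['target']}"

def run_workflow(workflow_json):
    steps = json.loads(workflow_json)
    return [run_step(s) for s in steps]

print(run_workflow(WORKFLOW_JSON))
# ['terraform:network.tf', 'ansible:configure.yml', 'python:notify.py']
```

The point of the JSON representation is that the same workflow could equally be assembled in a visual drag-and-drop editor, which is the flexibility Padala describes.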

One area the team is currently focusing on — and will use the new funding for — is building out its analytics capabilities that can help developers debug their workflows. The service already provides log and audit trails, but the plan is to expand its AI capabilities to also recommend the right workflows based on the alerts you are getting.

“The eventual goal is to help people automate any service and connect any code. That’s the holy grail. And AI is an enabler in that,” Padala said.

Gradient Ventures partner Muzzammil “MZ” Zaveri echoed this. “Fylamynt is at the intersection of applied AI and workflow automation,” he said. “We’re excited to support the Fylamynt team in this uniquely positioned product with a deep bench of integrations and a nonprescriptive builder approach. The vision of automating every part of a cloud workflow is just the beginning.”

The team, which now includes about 20 employees, plans to use the new round of funding, which closed in September, to focus on its R&D, build out its product and expand its go-to-market team. On the product side, that specifically means building more connectors.

The company offers both a free plan as well as enterprise pricing and its platform is now generally available.

Salesforce announces new Service Cloud workforce planning tool

By Ron Miller

With a pandemic raging across many parts of the world, many companies have customer service agents (CSAs) spread out as well, creating a workforce management nightmare. It wasn’t easy to manage and route requests when CSAs were in one place; it’s even harder with many working from home.

To help address that problem, Salesforce is developing a new product called Service Cloud Workforce Engagement. Bill Patterson, EVP and general manager for CRM applications at Salesforce, points out that with these workforces spread out, it’s a huge challenge for management to distribute work and keep up with customer volume, especially as customers have moved online during COVID.

“With Service Cloud Workforce Engagement, Salesforce will arm the contact center with a connected solution — all on one platform so our customers can remain resilient and agile no matter what tomorrow may bring,” Patterson said in a statement.

Like many Salesforce products, this one is made up of several key components to deliver a complete solution. For starters, there is Service Forecast for Customer 360, a tool that helps predict workforce requirements and uses AI to distribute customer service requests in a way that makes sense. This can help with planning for periods with a predictable uptick in service requests, like Black Friday or Cyber Monday, as well as for unexpected spikes.

Next up is Omnichannel Capacity Planning, which helps managers distribute CSAs across channels such as phone, messaging or email wherever they are needed most based on the demand across a given channel.

Finally, there is a teaching component that helps coach customer service agents to give the correct answer in the correct way for a given situation. “To increase agent engagement and performance, companies will be able to quickly onboard and continually train agents by delivering bite-size, guided learning paths directly in the agent’s workspace during their shift,” the company explained.
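Salesforce hasn’t published how the forecasting or capacity-planning math works; as a purely illustrative sketch, distributing a fixed pool of agents across channels in proportion to forecast demand might use a largest-remainder allocation like this:

```python
# Hypothetical sketch of the capacity-planning idea: nothing here reflects
# Salesforce's actual algorithm. Agents are allocated to channels in
# proportion to forecast demand, with leftover seats going to the channels
# with the largest fractional remainders so the counts sum exactly.

def allocate_agents(total_agents, demand_by_channel):
    """Largest-remainder apportionment of agents across channels."""
    total_demand = sum(demand_by_channel.values())
    exact = {c: total_agents * d / total_demand
             for c, d in demand_by_channel.items()}
    alloc = {c: int(v) for c, v in exact.items()}
    leftover = total_agents - sum(alloc.values())
    # Hand remaining seats to the channels with the largest fractional parts.
    for c in sorted(exact, key=lambda c: exact[c] - alloc[c],
                    reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

demand = {"phone": 120, "messaging": 60, "email": 20}
print(allocate_agents(50, demand))
# {'phone': 30, 'messaging': 15, 'email': 5}
```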

The company says that Service Cloud Workforce Engagement will be available in the first half of next year.

AWS announces Panorama, a device that adds machine learning technology to any camera

By Jonathan Shieber

AWS has launched a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, will transform existing on-premises cameras into computer vision enabled super-powered surveillance devices.

Pitching the hardware as a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are being followed, or analyze traffic in retail stores, the new automation service is part of the theme of this AWS re:Invent event — automate everything.

Along with computer vision models that companies can develop using Amazon SageMaker, the new Panorama Appliance can run those models on video feeds from networked or network-enabled cameras.

AWS also expects to release the Panorama SDK soon, which device manufacturers can use to build Panorama-enabled devices.

Amazon has already pitched surveillance technologies to developers and the enterprise before. Back in 2017, the company unveiled DeepLens, which it began selling one year later. It was a way for developers to build prototype machine learning models and for Amazon to get comfortable with different ways of commercializing computer vision capabilities.

As we wrote in 2018:

DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models… Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up … DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model and a model that can distinguish between cats and dogs and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.


Amazon has had a lot of experience (and controversy) when it comes to the development of machine learning technologies for video. The company’s Rekognition software sparked protests and pushback, which led to a moratorium on the use of the technology.

And the company has tried to incorporate more machine learning capabilities into its consumer-facing Ring cameras as well.

Still, enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety, and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted to not only adapt to the current epidemic, but plan ahead for spaces and protocols that can help mitigate the severity of the next one.


AWS updates its edge computing solutions with new hardware and Local Zones

By Frederic Lardinois

AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outpost service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.

In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.

The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.

As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail to build their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. None of the existing solutions from other vendors got any traction because of this, Jassy argues (though AWS’s competitors would surely deny it).

The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.

With Outpost, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.

AWS adds natural language search service for business intelligence from its data sets

By Jonathan Shieber

When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016, the company wanted to provide product and customer information for business users — not just developers.

At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.

Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.

“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”

That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.

“The way Q works. Type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X?’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”

It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”
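AWS hasn’t disclosed how Q translates questions into queries, but the general natural-language-to-SQL idea can be shown with a toy translator (the pattern, table and column names below are all invented for illustration):

```python
# Toy illustration of the natural-language-to-query idea behind QuickSight Q.
# Q's internals are not public; this handles exactly one question shape.
import re

def question_to_sql(question):
    # Recognize "trailing twelve month sales of <product>" and nothing else.
    m = re.search(r"trailing twelve month sales of (.+?)\??$", question.lower())
    if not m:
        raise ValueError("unsupported question")
    product = m.group(1)
    # Invented schema: a `sales` table with `product`, `amount`, `sale_date`.
    return (
        "SELECT SUM(amount) FROM sales "
        f"WHERE product = '{product}' "
        "AND sale_date >= DATE_ADD('month', -12, CURRENT_DATE)"
    )

sql = question_to_sql("Give me the trailing twelve month sales of product X?")
print(sql)
```

The real service has to handle arbitrary phrasings and discover the schema itself, which is exactly the part that needed NLP technology to mature.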

AWS launches SageMaker Data Wrangler, a new data preparation service for machine learning

By Frederic Lardinois

AWS launched a new service today, Amazon SageMaker Data Wrangler, that makes it easier for data scientists to prepare their data for machine learning training. In addition, the company is also launching SageMaker Feature Store, available in the SageMaker Studio, a new service that makes it easier to name, organize, find and share machine learning features.

AWS is also launching SageMaker Pipelines, a new service that’s integrated with the rest of the platform and that provides a CI/CD service for machine learning to create and automate workflows, as well as an audit trail for model components like training data and configurations.

As AWS CEO Andy Jassy pointed out in his keynote at the company’s re:Invent conference, data preparation remains a major challenge in the machine learning space. Users first have to write queries and code to get the data from their data stores, then write the queries to transform that data and combine features as necessary. All of that is work that doesn’t actually focus on building the models, but on the infrastructure around building them.

Data Wrangler comes with over 300 pre-configured, built-in data transforms that help users convert column types or impute missing data with mean or median values. There are also built-in visualization tools to help identify potential errors, as well as tools for checking for inconsistencies in the data and diagnosing them before the models are deployed.

All of these workflows can then be saved in a notebook or as a script so that teams can replicate them — and use them in SageMaker Pipelines to automate the rest of the workflow, too.
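As a rough illustration of what a mean- or median-imputation transform does (plain Python rather than the managed service; this is not AWS’s implementation):

```python
# Fill missing values in a column with the mean or median of the values
# that are present. This is the generic technique Data Wrangler's built-in
# transforms perform, reduced to a few lines.
from statistics import mean, median

def impute(values, strategy="mean"):
    present = [v for v in values if v is not None]
    fill = mean(present) if strategy == "mean" else median(present)
    return [fill if v is None else v for v in values]

ages = [20, None, 30, 34]
print(impute(ages, "mean"))    # [20, 28, 30, 34]
print(impute(ages, "median"))  # [20, 30, 30, 34]
```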


It’s worth noting that quite a few startups are working on the same problem. Wrangling machine learning data, after all, is one of the most common problems in the space. For the most part, though, companies still build their own tools, and as usual, that makes this area ripe for a managed service.

AWS launches Glue Elastic Views to make it easier to move data from one purpose-built data store to another

By Jonathan Shieber

AWS has launched a new tool to let developers move data from one store to another called Glue Elastic Views.

At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.

The new service can take data from disparate silos and bring it together. The AWS ETL service lets programmers write a little bit of SQL code to create a materialized view that can move data from one source data store to another.

For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch by setting up a materialized view to copy that data — all while managing dependencies. That means if data changes in the source data lake, it will automatically be updated in the other data stores where the data has been relocated, Jassy said.

“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.
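The core idea, a view that is automatically re-derived whenever its source changes, can be reduced to a toy sketch (Elastic Views itself is configured with SQL and manages this across AWS services; nothing below reflects its real API):

```python
# Toy materialized view: a derived copy of a source store that is refreshed
# on every write to the source, so it never goes stale. Elastic Views does
# this between real data stores; here both sides are plain dicts.

class MaterializedView:
    def __init__(self, source_rows, transform):
        self.source_rows = source_rows  # shared reference to the source dict
        self.transform = transform      # how each row is projected into the view
        self.data = {}
        self.refresh()

    def refresh(self):
        self.data = {k: self.transform(v) for k, v in self.source_rows.items()}

class SourceStore:
    def __init__(self):
        self.rows = {}
        self.views = []

    def put(self, key, value):
        self.rows[key] = value
        for view in self.views:         # propagate the change to every view
            view.refresh()

store = SourceStore()
view = MaterializedView(store.rows, lambda row: row["name"].upper())
store.views.append(view)

store.put("1", {"name": "widget"})
print(view.data)   # {'1': 'WIDGET'}, updated automatically on write
```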

AWS launches Trainium, its new custom ML training chip

By Frederic Lardinois

At its annual re:Invent developer conference, AWS today announced the launch of AWS Trainium, the company’s next-gen custom chip dedicated to training machine learning models. The company promises that it can offer higher performance than any of its competitors in the cloud, with support for TensorFlow, PyTorch and MXNet.

It will be available as EC2 instances and inside Amazon SageMaker, the company’s machine learning platform.

New instances based on these custom chips will launch next year.

The main arguments for these custom chips are speed and cost. AWS promises 30% higher throughput and 45% lower cost-per-inference compared to the standard AWS GPU instances.

In addition, AWS is also partnering with Intel to launch Habana Gaudi-based EC2 instances for machine learning training as well. Coming next year, these instances promise to offer up to 40% better price/performance compared to the current set of GPU-based EC2 instances for machine learning. These chips will support TensorFlow and PyTorch.

These new chips will make their debut in the AWS cloud in the first half of 2021.

Both of these new offerings complement AWS Inferentia, which the company launched at last year’s re:Invent. Inferentia, also a custom chip, is the inference counterpart to these training offerings.

Trainium, it’s worth noting, will use the same SDK as Inferentia.

“While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets,” the AWS team writes. “This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute from scaling training workloads to deploying accelerated inference.”


Uber officially completes Postmates acquisition

By Darrell Etherington

Uber today announced the official completion of its Postmates acquisition deal, which it announced originally back in July. The all-stock deal, valued at around $2.65 billion at the time of its disclosure, sees Postmates join Uber while continuing to operate as a separate service with its own branding and front end; some back-end operations, including a shared pool of drivers, will merge.

Uber detailed some of its further thinking around the newly combined companies, and what that will mean for the businesses they work with, in a new blog post. The company posited the move as a benefit to the merchant population it works with, and alongside the official closure announced a new initiative to encourage and gather customer feedback on the merchant side.

They’re calling it a “regional listening exercise” to be run beginning next year, wherein they’ll work with local restaurant associations and chambers of commerce to hear concerns from local business owners in their own communities. This sounds similar in design to Uber’s prior efforts to focus on driver feedback from a couple of years ago in order to improve the way it works with that side of its double-sided marketplace.

Focusing on the needs of its merchant population is doubly important given the current global pandemic, which has seen Uber Eats emerge as even more of a key infrastructure component in the food service and grocery industries as people seek more delivery options in order to better comply with stay-at-home orders and other public safety recommendations.

AWS brings the Mac mini to its cloud

By Frederic Lardinois

AWS today opened its re:Invent conference with a surprise announcement: the company is bringing the Mac mini to its cloud. These new EC2 Mac instances, as AWS calls them, are now available in preview. They won’t come cheap, though.

The target audience here — and the only one AWS is targeting for now — is developers who want cloud-based build and testing environments for their Mac and iOS apps. But it’s worth noting that with remote access, you get a fully-featured Mac mini in the cloud, and I’m sure developers will find all kinds of other use cases for this as well.

Given the recent launch of the M1 Mac minis, it’s worth pointing out that the hardware AWS is using — at least for the time being — consists of i7 machines with six physical and 12 logical cores and 32 GB of memory. Using the Mac’s built-in networking options, AWS connects them to its Nitro System for fast network and storage access. This means you’ll also be able to attach AWS block storage to these instances, for example.

Unsurprisingly, the AWS team is also working on bringing Apple’s new M1 Mac minis into its data centers. The current plan is to roll this out “early next year,” AWS tells me, and definitely within the first half of 2021. Both AWS and Apple believe that the need for Intel-powered machines won’t go away anytime soon, though, especially given that a lot of developers will want to continue to run their tests on Intel machines for the foreseeable future.

David Brown, AWS’s vice president of EC2, tells me that these are completely unmodified Mac minis. AWS only turned off Wi-Fi and Bluetooth. It helps, Brown said, that the minis fit nicely into a 1U rack.

“You can’t really stack them on shelves — you want to put them in some sort of service sled [and] it fits very well into a service sled and then our cards and all the various things we have to worry about, from an integration point of view, fit around it and just plug into the Mac mini through the ports that it provides,” Brown explained. He admitted that this was obviously a new challenge for AWS. The only way to offer this kind of service is to use Apple’s hardware, after all.

Image Credits: AWS

It’s also worth noting that AWS is not virtualizing the hardware. What you’re getting here is full access to your own device that you’re not sharing with anybody else. “We wanted to make sure that we support the Mac Mini that you would get if you went to the Apple store and you bought a Mac mini,” Brown said.

Unlike with other EC2 instances, whenever you spin up a new Mac instance, you have to pre-pay for the first 24 hours to get started. After those first 24 hours, prices are by the second, just like with any other instance type AWS offers today.

AWS will charge $1.083 per hour, billed by the second. That’s just under $26 to spin up a machine and run it for 24 hours. That’s quite a lot more than what some of the small Mac mini cloud providers are charging (we’re generally talking about $60 or less per month for their entry-level offerings and around two to three times as much for a comparable i7 machine with 32GB of RAM).
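A quick back-of-the-envelope check of those numbers (the hourly rate is from AWS; the $60/month comparison figure is the article’s rough estimate for entry-level plans at smaller hosts):

```python
# Verify the pricing claims in the text: $1.083/hour with a 24-hour minimum
# per launched instance, billed per second after that.
AWS_RATE_PER_HOUR = 1.083

def aws_cost(hours):
    # Each newly launched Mac instance is pre-paid for 24 hours minimum.
    return AWS_RATE_PER_HOUR * max(hours, 24)

print(round(aws_cost(24), 2))   # 25.99, "just under $26" for the first day
print(round(aws_cost(730), 2))  # 790.59 for a ~730-hour month, vs ~$60 at small hosts
```

The math makes clear why the smaller Mac mini hosts can still compete on price for always-on machines, while AWS wins on elasticity for short-lived build jobs.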

Image Credits: Ron Miller/TechCrunch

Until now, Mac mini hosting was a small niche in the hosting market, though it has a fair number of players, with the likes of MacStadium, MacinCloud, MacWeb and Mac Mini Vault vying for their share of the market.

With this new offering from AWS, they are now facing a formidable competitor, though they can still compete on price. AWS, however, argues that it can give developers access to all of the additional cloud services in its portfolio, which sets it apart from all of the smaller players.

“The speed that things happen at [other Mac mini cloud providers] and the granularity that you can use those services at is not as fine as you get with a large cloud provider like AWS,” Brown said. “So if you want to launch a machine, it takes a few days to provision and somebody puts a machine in a rack for you and gives you an IP address to get to it and you manage the OS. And normally, you’re paying for at least a month — or a longer period of time to get a discount. What we’ve done is you can literally launch these machines in minutes and have a working machine available to you. If you decide you want 100 of them, 500 of them, you just ask us for that and we’ll make them available. The other thing is the ecosystem. All those other 200-plus AWS services that you’re now able to utilize together with the Mac mini is the other big difference.”

Brown also stressed that Amazon makes it easy for developers to use different machine images, with the company currently offering images for macOS Mojave and Catalina, with Big Sur support coming “at some point in the future.” And developers can obviously create their own images with all of the software they need so they can reuse them whenever they spin up a new machine.

“Pretty much every one of our customers today has some need to support an Apple product and the Apple ecosystem, whether it’s iPhone, iPad or Apple TV, whatever it might be. They’re looking for that build use case,” Brown said. “And so the problem we’ve really been focused on solving is customers that say, ‘hey, I’ve moved all my server-side workloads to AWS, I’d love to be able to move some of these build workflows, because I still have some Mac minis in a data center or in my office that I have to maintain. I’d love that just to be on AWS.’ ”

AWS’s marquee launch customers for the new service are Intuit, Ring and mobile camera app FiLMiC.

“EC2 Mac instances, with their familiar EC2 interfaces and APIs, have enabled us to seamlessly migrate our existing iOS and macOS build-and-test pipelines to AWS, further improving developer productivity,” said Pratik Wadher, vice president of Product Development at Intuit. “We‘re experiencing up to 30% better performance over our data center infrastructure, thanks to elastic capacity expansion, and a high availability setup leveraging multiple zones. We’re now running around 80% of our production builds on EC2 Mac instances, and are excited to see what the future holds for AWS innovation in this space.”

The new Mac instances are now available in a number of AWS regions. These include US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland) and Asia Pacific (Singapore), with other regions to follow soon.

As Slack acquisition rumors swirl, a look at Salesforce’s six biggest deals

By Ron Miller

The rumors ignited last Thursday that Salesforce had interest in Slack. This morning, CNBC is reporting the deal is all but done and will be announced tomorrow. Chances are, this is going to be a big number, but this won’t be Salesforce’s first big acquisition. We thought it would be useful in light of these rumors to look back at the company’s biggest deals.

Salesforce has already surpassed $20 billion in annual revenue, and the company has a history of making a lot of deals to fill in the road map and give it more market lift as it searches for ever more revenue.

The biggest deal by far was last year’s $15.7 billion Tableau acquisition. The deal gave Salesforce a missing data visualization component and a company with a huge existing market to feed the revenue beast. In an August interview with TechCrunch, Salesforce president and chief operating officer Bret Taylor (who came to the company in the $750 million Quip deal in 2016) described Tableau as a key part of the company’s growing success:

“Tableau is so strategic, both from a revenue and also from a technology strategy perspective,” he said. That’s because as companies make the shift to digital, it becomes more important than ever to help them visualize and understand that data in order to understand their customers’ requirements better.

Next on the Salesforce acquisition hit parade was the $6.5 billion Mulesoft acquisition in 2018. Mulesoft gave Salesforce access to something it didn’t have as an enterprise SaaS company — data locked in silos across the company, even in on-prem applications. The CRM giant could leverage Mulesoft to access data wherever it lived, and when you put the two mega deals together, you could see how you could visualize that data and also give more fuel to its Einstein intelligence layer.

In 2016, the company spent $2.8 billion on Demandware to make a big splash in e-commerce, a component of the platform that has grown in importance during the pandemic when companies large and small have been forced to move their businesses online. The company was incorporated into the Salesforce behemoth and became known as Commerce Cloud.

In 2013, the company made its first billion-dollar acquisition when it bought ExactTarget for $2.5 billion. This represented the first foray into what would become the Marketing Cloud. The purchase gave the company entree into the targeted email marketing business, which would grow even more important in 2020, when communicating with customers became crucial during the pandemic.

Last year, just days after closing the Tableau acquisition, Salesforce opened its wallet one more time and paid $1.35 billion for ClickSoftware. This one was a nod to the company’s Service Cloud, which encompasses both customer service and field service. This acquisition was about the latter, giving the company access to a bigger body of field service customers.

The final billion-dollar deal (until we hear about Slack, perhaps) is the $1.33 billion Vlocity acquisition earlier this year. This one was a gift for the core CRM product. Vlocity gave Salesforce several vertical businesses built on the Salesforce platform and was a natural fit for the company. Using Vlocity’s platform, Salesforce could (and did) continue to build on these vertical markets, giving it more ammo to sell into specialized markets.

While we can’t know for sure if the Slack deal will happen, it sure feels like it will, and chances are this deal will be even larger than Tableau as the Salesforce acquisition machine keeps chugging along.

Daily Crunch: Amazon Web Services stumble

By Anthony Ha

An Amazon Web Services outage has a wide effect, Salesforce might be buying Slack and Pinterest tests new support for virtual events. This is your Daily Crunch for November 25, 2020.

And for those of you who celebrate Thanksgiving: Enjoy! There will be no newsletter tomorrow, and then Darrell Etherington will be filling in for me on Friday.

The big story: Amazon Web Services stumble

Amazon Web Services began experiencing issues earlier today, which caused issues for sites and services that rely on its cloud infrastructure — as writer Zack Whittaker discovered when he tried to use his Roomba.

Amazon said the issue was largely localized to North America, and that it was working on a resolution. Meanwhile, a number of other companies, such as Adobe and Roku, have pointed to the AWS outage as the reason for their own service issues.

The tech giants

Slack’s stock climbs on possible Salesforce acquisition — News that Salesforce is interested in buying Slack sent shares of the smaller firm sharply higher today.

Pinterest tests online events with dedicated ‘class communities’ — The company has been spotted testing a new feature that allows users to sign up for Zoom classes through Pinterest.

France starts collecting tax on tech giants — This tax applies to companies that generate more than €750 million in revenue globally and €25 million in France, and that operate either a marketplace or an ad business.

Startups, funding and venture capital

Tiger Global invests in India’s Unacademy at $2B valuation — Unacademy helps students prepare for competitive exams to get into college.

WeGift, the ‘incentive marketing’ platform, collects $8M in new funding — Founded in 2016, WeGift wants to digitize the $700 billion rewards and incentives industry.

Cast.ai nabs $7.7M seed to remove barriers between public clouds — The company was started with the idea that developers should be able to get the best of each of the public clouds without being locked in.

Advice and analysis from Extra Crunch

Insurtech’s big year gets bigger as Metromile looks to go public — Metromile, a startup competing in the auto insurance market, is going public via SPAC.

Join us for a live Q&A with Sapphire’s Jai Das on Tuesday at 2 pm EST/11 am PST — Das has invested in companies like MuleSoft, Alteryx, Square and Sumo Logic.

(Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

Gift Guide: Smart exercise gear to hunker down and get fit with — Smart exercise and health gear is smarter than ever.

Instead of yule log, watch this interactive dumpster fire because 2020 — Sure, why not.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Amazon Web Services outage takes a portion of the internet down with it

By Zack Whittaker

Amazon Web Services is currently having an outage, taking large swathes of the internet down with it.

Several AWS services are down as of early Wednesday, according to its status pages. That means any app, site or service that relies on AWS might also be down. (As I found out the hard way when my Roomba refused to connect to the app.)

Amazon says the issue is largely localized to North America. The company didn’t give a reason for the outage, only that it was experiencing increased error rates and that it was working on a resolution.

So far a number of companies that rely on AWS have tweeted out that they’re experiencing issues as a result, including Adobe and Roku. We’ll keep you updated as this outage continues.

An Amazon AWS outage is currently impacting Adobe Spark so you may be having issues accessing/editing your projects. We are actively working with AWS and will report when the issue has subsided. https://t.co/uoHPf44HjL for current Spark status. We apologize for any inconvenience!

— Adobe Spark (@AdobeSpark) November 25, 2020

We are working to resolve this quickly. We are impacted by the widespread AWS outage and hope to get our customers up and running soon. Most streaming should work as expected during this time.

— Roku Support (@RokuSupport) November 25, 2020

We do apologize for the inconvenience! Unfortunately, the issue is stemming from an AWS server outage, which is affecting many companies. We hope that the issue is resolved soon!

— Shipt (@Shipt) November 25, 2020

HMBradley raises $18.25 million planting a flag as LA’s entrant into the challenger bank business

By Jonathan Shieber

With $90 million in deposits and $18.25 million in new financing, HMBradley is making moves as the Los Angeles-based entrant into the challenger bank competition.

LA is home to a growing community of financial services startups and HMBradley is quickly taking its place among the leaders with a novel twist on the banking business.

Unlike most banking startups that woo customers with easy credit and savvy online user interfaces, HMBradley is pitching a better savings account.

The company offers up to 3% interest on its savings accounts, much higher than most banks these days, and it’s that pitch that has won over consumers and investors alike, according to the company’s co-founder and chief executive, Zach Bruhnke.

With climbing numbers on the back of limited marketing, Bruhnke said raising the company’s latest round of financing was a breeze. 

“They knew after the first call that they wanted to do it,” Bruhnke said of the negotiations with Acrew, a venture firm whose previous fintech investments include backing Chime, the challenger bank phenomenon. “It was a very different kind of fundraise for us. Our seed round was a terrible, treacherous 16-month fundraise,” Bruhnke said.

For Acrew’s part, the firm actually had to call Chime’s founder to ensure that the company was okay with the venture firm backing another entrant into the banking business. Once the approval was granted, Bruhnke said the deal was smooth sailing.

Acrew, Chime, and HMBradley’s founders see enough daylight between the two business models that investing in one wouldn’t be a conflict of interest with the other. And there’s plenty of space for new entrants in the banking business, Bruhnke said. “It’s a very, very large industry as a whole,” he said.

As the company grows its deposits, Bruhnke said there will be several ways it can leverage its capital. That includes commercial lending on the back end of HMBradley’s deposits and other financial services offerings to grow its base.

For now, it’s been wooing consumers with one click credit applications and the high interest rates it offers to its various tiers of savers.

“When customers hit that 3% tier they get really excited,” Bruhnke said. “If you’re saving money and you’re not saving to HMBradley then you’re losing money.”

The money that HMBradley raised will be used to continue rolling out its new credit product and hiring staff. It already poached the former director of engineering at Capital One, Ben Coffman, and fintech thought leader Saira Rahman, the company said. 

In October, the company said, deposits doubled month-over-month and transaction volume has grown to over $110 million since it launched in April. 

Since launching the company’s cash back credit card in July, HMBradley has been able to pitch customers on 3% cash back for its highest tier of savers — giving them the option to earn 3.5% on their deposits.

The deposit and lending capabilities the company offers are possible because of its partnership with the California-based Hatch Bank, the company said.

Mobile banking app Current raises $131M Series C, tops 2 million members

By Sarah Perez

U.S. challenger bank Current, which has doubled its member base in less than six months, announced this morning it raised $131 million in Series C funding, led by Tiger Global Management. The additional financing brings Current to over $180 million in total funding to date, and gives the company a valuation of $750 million.

The round also brought in new investors Sapphire Ventures and Avenir. Existing investors returned for the Series C, as well, including Foundation Capital, Wellington Management Company and QED.

Current began as a teen debit card controlled by parents, but expanded to offer personal checking accounts last year, using the same underlying banking technology. The service today competes with a range of mobile banking apps, offering features like free overdrafts, no minimum balance requirements, faster direct deposits, instant spending notifications, banking insights, check deposits using your phone’s camera and other now-standard baseline features for challenger banks.

In August 2020, Current debuted a points rewards program in an effort to better differentiate its service from the competition, which as of this month now includes Google Pay.

When Current raised its Series B last fall, it had over 500,000 accounts on its service. Today, it touts over 2 million members. Revenue has also grown, increasing by 500% year-over-year, the company noted today.

“We have seen a demonstrated need for access to affordable banking with a best-in-class mobile solution that Current is uniquely suited to provide,” said Current founder and CEO Stuart Sopp, in a statement about the fundraise. “We are committed to building products specifically to improve the financial outcomes of the millions of hard-working Americans who live paycheck to paycheck, and whose needs are not being properly served by traditional banks. With this new round of funding we will continue to expand on our mission, growth and innovation to find more ways to get members their money faster, help them spend it smarter and help close the financial inequality gap,” he added.

The additional funds will be used to further develop and expand Current’s mobile banking offerings, the company says.

Europe’s data strategy aims to tip the scales away from big tech

By Natasha Lomas

Google wants to organize the world’s information, but European lawmakers are in a rush to organize the local digital sphere and make Europe “the most data-empowered continent in the world,” internal market commissioner Thierry Breton said today. He set out the thinking behind the bloc’s data strategy during a livestreamed discussion organized by Bruegel, the Brussels-based economic think tank.

Rebalancing big data power dynamics to tip the scales away from big tech is another stated aim.

Breton likened the EU’s ambitious push to encourage industrial data sharing and rebalance platform power to work done in the past to organize the region’s air space and other physical infrastructure — albeit, with a lot less time to get the job done given the blistering pace of digital innovation.

“This will require of course political vision — that we have — and willingness, that I believe we have too, and smart regulation, hopefully you will judge, to set the right rules and investment in key infrastructure,” said Breton.

During the talk, he gave a detailed overview of how the flotilla of legislative proposals which are being worked on by EU lawmakers will set rules intended to support European businesses and governments to safely unlock the value of industrial and public data and drive the next decades of economic growth.

“We have been brave enough to set our rules in the personal data sphere and this is what we need to do now for government and public and industrial data. Set the rules. The European rules. Everyone will be welcome in Europe, that’s extremely important — provided they respect our rules,” said Breton.

“We don’t have one minute to lose,” he added. “The battle for industrial data is starting now and the battlefield may be Europe so we need to get ready — and this is my objective.”

EU lawmakers are drafting rules for how (non-personal) data can be used and shared; who will get access to them; and how rights can be guaranteed under the framework, per Breton. And he argued that concerns raised by European privacy challenges to international data transfers — reflected in the recent Schrems II ruling — are not limited to privacy and personal data. 

“These worries are in fact at the heart of the Single Market for data that I am building,” he said. “These worries are clear in the world we are entering when individuals or companies want to keep control over its data. The key question is, therefore, how to organize this control while allowing data flow — which is extremely important in the data economy.”

An open single European market for data must recognize that not all data are the same — “in terms of their sensitivity” — Breton emphasized, pointing to the EU’s General Data Protection Regulation (GDPR) data protection framework as “the proof of that”.

“Going forward, there are also sensitive industrial data that should benefit from specific conditions when they are accessed, used or shared,” he went on. “This is a case for instance for some sensitive public data [such as] from public hospitals, but also anonymized data that remains sensitive, mixed data which are difficult to handle.”

At one point during the talk he gave the example of European hospitals during the pandemic not being able to share data across borders to help in the fight against the virus because of the lack of a purpose-built framework to securely enable such data flows.

“I want our SMEs and startups, our public hospitals, our cities and many other actors to use more data — to make them available, to value them, to share them — but for this we need to generate the trust,” he added.

The first legislative plank of the transformation to a single European data economy is a Data Governance Act (DGA) — which Breton said EU lawmakers will present tomorrow, after a vote on the proposal this afternoon.

“With this act we are defining a European approach to data sharing,” he noted on the DGA. “This new regulation will facilitate data sharing across sectors and Member States. And it will put those who generate the data in the driving seat — moving away from the current practices of the big tech platforms.

“Concretely, with this legislation, we create the conditions to allow access to a reuse of sensitive public data, creating a body of harmonized rules for the single market.”

A key component of building the necessary trust for the data economy will mean creating rules that state “European highly sensitive data should be able to be stored and processed in the EU”, Breton also said, signalling that data localization will be a core component of the strategy — in line with a number of recent public remarks in which he’s argued it’s not protectionist for European data to be stored in Europe. 

“Without such a possibility Member States will never agree to open their data hold,” Breton went on, saying that while Europe will be “open” with data, it will not be offering a “naive” data free-for-all.

The Commission also wants the data framework to support an ecosystem of data brokers whose role, Breton said, will be to connect data owners and data users “in a neutral manner” — suggesting this will give companies stronger control over the data they generate than the current situation, in which data-mining platform giants can use their market power to asset-strip weaker third parties.

“We are shifting here the product,” he said. “And we promote also data altruism — the role of sharing data, industrial or personal, for common good.”

Breton also noted that the forthcoming data governance proposal will include a shielding provision — meaning data actors will be required to take steps to avoid having to comply with what he called “abusive and unlawful” data access requests for data held in Europe from third countries.

“This is a major point. It is not a question of calling into question our international judicial or policy cooperation. We cannot tolerate abuses,” he said, specifying three off-limits examples (“unauthorized access; access that does not offer sufficient legal guarantees; or fishing expeditions”), adding: “By doing so we are ensuring that European law and the guarantees it carries is respected. This is about enforcing our own rules.”

Breton also touched on other interlocking elements of the policy strategy which regional lawmakers see as crucial to delivering a functional data framework: Namely the Digital Services Act (DSA) and Digital Markets Act (DMA) — which are both due to be set out in detail early next month.

The DSA will put “a clear responsibility and obligation on platforms and the content that is spread”, said Breton.

While the companion ex ante regulation, the DMA, will “frame the behaviours of gatekeepers — of systemic actors in the Single Market — and target their behaviors against their competitors or customers”; aka further helping to pin and clip the wings of big tech.

“With this set of regulation I just want to set up the rules and that the rules are clear — based on our values,” he added.

He also confirmed that interoperability and portability will be a key feature of the EU’s hoped for data transformation.

“We are working on this on several strands,” he said on this. “The first is standards for interoperability. That’s absolutely key for sectoral data spaces that we will create and very important for the data flows. You will see that we will create a European innovation data board — set in the DGA today — which will help the Commission in setting and working the right standards.”

While combating “blocking efforts and abusive behaviors” by platform gatekeepers — which could otherwise put an artificial limit on the value of the data economy — will be “the job of the DMA”, he noted.

A fourth pillar of the data strategy — which Breton referred to as a “data act” — will be introduced in 2021, with the aim of “increasing fairness in the data economy by clarifying data usage rights in business to business and business to government settings”.

“We will also consider enhanced data portability rights to give individuals more control — which is extremely important — over the data they produce,” he added. “And we will have a look at the intellectual property rights framework.”

He also noted that key infrastructure investments will be vital — pointing to the Commission’s plan to build a European industrial cloud and related strategic tech investment priorities such as in compute power capacity, building out next-gen connectivity and support for cutting-edge technologies like quantum encryption.

Privacy campaigner Max Schrems, who had been invited as the other guest speaker, raised the issue of enforceability — pointing out that Ireland’s data protection authority, which is responsible for overseeing a large number of major tech companies in the region, still hasn’t issued any decisions on cross-border complaints filed under the 2.5-year-old GDPR framework.

Breton agreed that enforcement will be a vital piece of the puzzle — claiming EU lawmakers are alive to the problem of enforcement “bottlenecks” in the GDPR.

“We need definitely clear, predictable, implementable rules — and this is what is driving me when I am regulating against the data market. But also what you will find behind the DSA and the DMA with an ex ante regulation to be able to apply it immediately and everywhere in Europe, not only in one country, everywhere at the same time,” he said. “Just to be able to make sure that things are happening quick. In this digital space we have to be fast.”

“So we will again make sure in DSA that Member State authorities can ask platforms to remove immediately content cross-border — like, for example, if you want an immediate comparison, the European Arrest Warrant.”

The Commission will also have the power to step in via cooperation at the European level, Breton further noted.

“So you see we are putting in rules, we are not naive, we understand pretty well where we have the bottleneck — and again we try to regulate. And also, in parallel, that’s very important because like everywhere where you have regulation you need to have sanctions — you will have appropriate sanctions,” he said, adding: “We learn the lessons from the GDPR.”

Looking to emulate Venmo, JoomPay preps a Euro launch for easy bill splitting and cash payments

By Mike Butcher

JoomPay, a startup with a product similar to PayPal-owned Venmo in the US, is set to launch in Europe shortly after being granted a Luxembourg Electronic Money Institution (EMI) license. The app allows people to send and receive money with anyone, instantly and for free. “Venmo me” has become a common phrase in the US, where people use the app to split bills in restaurants and the like, but Venmo isn’t available in Europe, although dozens of other innovative mobile peer-to-peer transfer options exist, such as Revolut, N26, Monese and Monzo. The waitlist for the app’s beta is open now.

Europe leads the world’s instant payments industry, with $18 trillion in worldwide volume predicted by 2025, up from $3 trillion in 2020 – a growth of 500%. Western Europe – and COVID-19 – is now driving that innovation and will account for 38% of instant payment transaction value by 2025. While Europe lacks simple peer-to-peer payments solutions such as Venmo or Square Cash App in the US, challenger banks have stepped up to provide similar kinds of services. JoomPay’s opportunity lies in being able to be a middle-man between these various banking systems.
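As a quick sanity check on that forecast (a back-of-envelope sketch using only the two volume figures cited above):

```python
volume_2020 = 3.0   # trillions USD, per the cited 2020 figure
volume_2025 = 18.0  # trillions USD, per the 2025 projection

# Total growth over the five-year window
growth_pct = (volume_2025 - volume_2020) / volume_2020 * 100

# Implied compound annual growth rate over five years
cagr = (volume_2025 / volume_2020) ** (1 / 5) - 1

print(f"total growth: {growth_pct:.0f}%")   # 500%
print(f"implied CAGR: {cagr:.1%}")          # ~43.1%
```

So the headline "500% growth" corresponds to volume roughly sextupling, or an implied annual growth rate of about 43%.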

Shopping app Joom, which has been downloaded 150M times in Europe, has spun off JoomPay to solve this problem. The app allows users to send and receive money from any person, regardless of whether they use JoomPay or not – and you only need to know their email or phone number. JoomPay connects to any existing debit/credit card or a bank account. It also provides its users with a European IBAN and an optional free JoomPay card with cashback and bonuses.

Yuri Alekseev, CEO and co-founder of JoomPay, said: “Since COVID-19 started, we’ve seen a significant decline in cash usage. People can’t meet as easily as before but still need to send money, and we offer a viable alternative.”

JoomPay may have an uphill struggle. Its main competitors in Europe are the huge TransferWise, Paysend, and of course PayPal itself.

LA-based Boulevard raises $27 million for its spa management software

By Jonathan Shieber

Boulevard, a spa management and payment platform, has raised $27 million in a new round of funding despite a business slowdown caused by the COVID-19 pandemic.

Founded four years ago by Matt Danna and Sean Stavropoulos, Boulevard was inspired by Stavropoulos’ inability to book a haircut and Danna’s hunch that the inability of salons and spas to cater to customers like the busy programmer could be indicative of a bigger problem.

The two spent months pounding the pavement in Los Angeles pretending to be college students doing research on the industry. They spoke with salon owners in Beverly Hills, Hollywood and other trendy neighborhoods trying to get a sense of where software and services were falling short.

Through those months of interviews the two developed the booking management and payment platform that would become Boulevard. The inspiration was one part Shopify and one part ServiceTitan, Danna said.

The idea was that Boulevard could build a pretty large business catering to the needs of a niche industry that hadn’t traditionally been exposed to a purpose-built toolkit for its vertical.

Investors including Index Ventures, Toba Capital, VMG Partners, Bonfire Ventures, Ludlow Ventures and BoxGroup agreed.

That could be because of the size of the industry. There is more than $250 billion spent per year across roughly 3 million businesses in the salon and spa category, according to data provided by the company. By comparison, fitness attracts roughly $34 billion in annual spending from 150,000 businesses.

“With limited access to the professionals that help us look and feel our best, I think the world has realized something that our team has always recognized: Salons and spas are more than a luxury, they are essential to our well-being,” said Danna, in a statement. “We are humbled that so many businesses are placing their trust in us during such a turbulent time. This new capital will help accelerate our mission and deliver value to salons and spas that they never imagined was possible from technology.”

According to data provided by the company, Boulevard is definitely giving businesses a boost. On average, businesses increase bookings by 16%, retail revenue jumps by 18% and gratuity paid out to stylists jumps by 24% for businesses that use Boulevard, the company said. It also reduces no-shows and cancellations, and halves time spent on the phone.  

“Boulevard is revitalizing the salon and spa industry, as evidenced by the company’s sustained 300-400% revenue growth over the last three years,” said Damir Becirovic of Index Ventures, whose firm led the company’s Series A round and has doubled down with the new capital infusion. 

Customers using the company’s software include: Chris McMillan the Salon, Heyday, MèCHE Salon, Paintbox, Sassoon Salon, SEV Laser, Spoke & Weal and TONI&GUY.

Boulevard now has 90 employees and will look to increase that number as it continues to expand across the country.

Investors have taken a run at the spa market in the past, with companies like MindBody valued at over $1 billion for its software services. Indeed, that company was taken private two years ago in a $1.9 billion transaction by Vista Equity Partners.

As Boulevard expands, the company may look to get deeper into financial services for the salons and spas that it’s already working with. Given the company’s window into these businesses’ financing, it’s not impossible to imagine a new line of business providing small business loans to these companies.

It’s something that the founders would likely not rule out. And it’s a way to provide more tools to entrepreneurs that often fall outside of the traditional sweet spot for banks and other lenders, Danna said.

 
