Nearly three years after its launch, Amazon Web Services’ SageMaker platform has gotten a significant upgrade in the form of new features that make it easier for developers to automate and scale each step of building machine learning capabilities, the company said.
As machine learning moves into the mainstream, business units across organizations will find applications for automation, and AWS is trying to make the development of those bespoke applications easier for its customers.
“One of the best parts of having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables,” said AWS vice president of machine learning, Swami Sivasubramanian. “Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug and run custom machine learning models with greater visibility, explainability, and automation at scale.”
Already companies like 3M, ADP, AstraZeneca, Avis, Bayer, Capital One, Cerner, Domino’s Pizza, Fidelity Investments, Lenovo, Lyft, T-Mobile, and Thomson Reuters are using SageMaker tools in their own operations, according to AWS.
The company’s new products include Amazon SageMaker Data Wrangler, which the company said provides a way to normalize data from disparate sources so the data is consistently easy to use. Data Wrangler can also ease the process of grouping disparate data sources into features to highlight certain types of data. The tool contains over 300 built-in data transformers that can help customers normalize, transform and combine features without having to write any code.
Amazon also unveiled the Feature Store, which allows customers to create repositories that make it easier to store, update, retrieve and share machine learning features for training and inference.
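For developers working in the SageMaker Python SDK, the feature group is the unit of organization here. A minimal sketch of creating and populating one, assuming a small pandas DataFrame (the names, bucket and IAM role are illustrative):

```python
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

df = pd.DataFrame({
    "customer_id": [1, 2],
    "ltv_estimate": [412.0, 87.5],
    "event_time": [1606838400.0, 1606838400.0],  # Unix timestamps
})

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/sagemaker-exec"  # illustrative IAM role

fg = FeatureGroup(name="customer-features", sagemaker_session=session)
fg.load_feature_definitions(data_frame=df)  # infer the schema from column dtypes

fg.create(
    s3_uri="s3://my-bucket/feature-store",  # offline store location (illustrative)
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,  # adds the low-latency store used at inference time
)
fg.ingest(data_frame=df, max_workers=2, wait=True)  # write to both stores
```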
Another new tool that Amazon Web Services touted was its workflow management and automation toolkit, Pipelines. The Pipelines tech is designed to provide orchestration and automation features not dissimilar from those found in traditional programming. Using Pipelines, developers can define each step of an end-to-end machine learning workflow, the company said in a statement. Developers can use the tools to re-run an end-to-end workflow from SageMaker Studio using the same settings to get the same model every time, or they can re-run the workflow with new data to update their models.
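In the SageMaker Python SDK, a pipeline is declared as a list of named steps and then registered and started; a minimal one-step sketch (the estimator, bucket paths and role are illustrative):

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/sagemaker-exec"  # illustrative IAM role

# Any SageMaker estimator works here; the built-in XGBoost image is one example
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, "1.2-1"),
    role=role, instance_count=1, instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models",  # illustrative
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(s3_data="s3://my-bucket/train", content_type="text/csv")},
)

# One step for brevity; real pipelines chain processing, evaluation and deployment steps
pipeline = Pipeline(name="my-ml-pipeline", steps=[train_step])
pipeline.upsert(role_arn=role)  # register (or update) the pipeline definition
execution = pipeline.start()    # re-running with the same inputs reproduces the model
```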
To address the longstanding issues with data bias in artificial intelligence and machine learning models, Amazon launched SageMaker Clarify. Announced today, the tool is meant to provide bias detection across the machine learning workflow, so developers can build with an eye toward better transparency into how models were set up. There are open-source tools that can run these tests, Amazon acknowledged, but they are manual and require a lot of heavy lifting from developers.
Other products designed to simplify the machine learning application development process include SageMaker Debugger, which enables developers to train models faster by monitoring system resource utilization and alerting developers to potential bottlenecks; Distributed Training, which makes it possible to train large, complex, deep learning models faster than current approaches by automatically splitting data across multiple GPUs to accelerate training times; and SageMaker Edge Manager, a machine learning model management tool for edge devices, which allows developers to optimize, secure, monitor and manage models deployed on fleets of edge devices.
Last but not least, Amazon unveiled SageMaker JumpStart, which provides developers with a searchable interface to find algorithms and sample notebooks so they can get started on their machine learning journey. The company said it would give developers new to machine learning the option to select from several pre-built machine learning solutions and deploy them into SageMaker environments.
As companies rely increasingly on machine learning models to run their businesses, it’s imperative to include anti-bias measures to ensure these models are not making false or misleading assumptions. Today at AWS re:Invent, AWS introduced Amazon SageMaker Clarify to help reduce bias in machine learning models.
“We are launching Amazon SageMaker Clarify. And what that does is it allows you to have insight into your data and models throughout your machine learning lifecycle,” Bratin Saha, Amazon VP and general manager of machine learning, told TechCrunch.
He says that it is designed to analyze the data for bias before you start data prep, so you can find these kinds of problems before you even start building your model.
“Once I have my training data set, I can [look at things like if I have] an equal number of various classes, like do I have equal numbers of males and females or do I have equal numbers of other kinds of classes, and we have a set of several metrics that you can use for the statistical analysis so you get real insight into your data set balance,” Saha explained.
After you build your model, you can run SageMaker Clarify again to look for similar factors that might have crept into your model as you built it. “So you start off by doing statistical bias analysis on your data, and then post training you can again do analysis on the model,” he said.
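In the SageMaker Python SDK, that two-pass analysis runs through a Clarify processor; a minimal sketch of the pre-training pass, assuming a CSV training set on S3 and treating a `gender` column as the facet to check (paths, column names and role are illustrative):

```python
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/sagemaker-exec"  # illustrative IAM role

processor = clarify.SageMakerClarifyProcessor(
    role=role, instance_count=1, instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",  # illustrative paths
    s3_output_path="s3://my-bucket/clarify-report",
    label="approved",  # the outcome column in the training data
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # which label value is the favorable outcome
    facet_name="gender",            # the attribute to check for imbalance
)

# Statistical bias metrics on the raw data, before any model exists;
# run_post_training_bias() repeats the analysis against model predictions
processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)
```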
There are multiple types of bias that can enter a model due to the background of the data scientists building it, the nature of the data and how the data scientists interpret that data through the model they built. While this can be problematic in general, it can also lead to racial stereotypes being extended to algorithms. As an example, facial recognition systems have proven quite accurate at identifying white faces, but much less so at recognizing people of color.
It may be difficult to identify these kinds of biases with software as it often has to do with team makeup and other factors outside the purview of a software analysis tool, but Saha says they are trying to make that software approach as comprehensive as possible.
“If you look at SageMaker Clarify, it gives you data bias analysis, it gives you model bias analysis, it gives you model explainability, it gives you per-inference explainability and it gives you global explainability,” Saha said.
Saha says Amazon is aware of the bias problem, which is why it created this tool, but he recognizes that the tool alone won’t eliminate all of the bias issues that can crop up in machine learning models, so the company offers other kinds of help too.
“We are also working with our customers in various ways. So we have documentation, best practices, and we point our customers to how to be able to architect their systems and work with the system so they get the desired results,” he said.
SageMaker Clarify is available starting today in multiple regions.
AWS has launched a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, will transform existing on-premises cameras into computer vision-enabled, super-powered surveillance devices.
AWS is pitching the hardware as a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are being followed, or analyze traffic in retail stores, and the new service fits the theme of this AWS re:Invent event: automate everything.
The new Panorama Appliance can take computer vision models that companies develop using Amazon SageMaker and run them on video feeds from networked or network-enabled cameras.
Soon, AWS expects to release the Panorama SDK, which device manufacturers can use to build Panorama-enabled devices.
Amazon has already pitched surveillance technologies to developers and the enterprise before. Back in 2017, the company unveiled DeepLens, which it began selling one year later. It was a way for developers to build prototype machine learning models and for Amazon to get comfortable with different ways of commercializing computer vision capabilities.
As we wrote in 2018:
DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models… Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up … DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model and a model that can distinguish between cats and dogs and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.
Amazon has had a lot of experience (and controversy) when it comes to the development of machine learning technologies for video. The company’s Rekognition software sparked protests and pushback, which led to a moratorium on the use of the technology.
And the company has tried to incorporate more machine learning capabilities into its consumer-facing Ring cameras as well.
Still, enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety, and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted not only to adapt to the current pandemic, but to plan ahead for spaces and protocols that can help mitigate the severity of the next one.
One of the areas that is often left behind when it comes to cloud computing is the industrial sector. That’s because these facilities often have older equipment or proprietary systems that aren’t well suited to the cloud. Amazon wants to change that, and today the company announced a slew of new services at AWS re:Invent aimed at helping the industrial sector understand their equipment and environments better.
For starters, the company announced Amazon Monitron, which is designed to monitor equipment and send signals to the engineering team when the equipment could be breaking down. If industrial companies can know when their equipment is breaking, they can repair it on their own terms, rather than waiting until it breaks down and having the equipment out of commission at what could be an inopportune time.
As AWS CEO Andy Jassy says, an experienced engineer will know when equipment is breaking down by a certain change in sound or a vibration, but if the machine could tell you even before it got that far, it would be a huge boost to these teams.
“…a lot of companies either don’t have sensors, they’re not modern powerful sensors, or they are not consistent and they don’t know how to take that data from the sensors and send it to the cloud, and they don’t know how to build machine learning models, and our manufacturing companies we work with are asking [us] just solve this [and] build an end-to-end solution. So I’m excited to announce today the launch of Amazon Monitron, which is an end-to-end solution for equipment monitoring,” Jassy said.
The company builds a machine learning model that understands what a normal state looks like, then uses that information to find anomalies and send back information to the team in a mobile app about equipment that needs maintenance now based on the data the model is seeing.
For companies that have a more modern system and don’t need the complete package that Monitron offers, Amazon has something as well. If you have modern sensors but no sophisticated machine learning model, Amazon can ingest your sensor data and apply its machine learning algorithms to find anomalies, just as it does with Monitron.
“So we have something for this group of customers as well to announce today, which is the launch of Amazon Lookout for Equipment, which does anomaly detection for industrial machinery,” he said.
In addition, the company announced the Panorama Appliance for companies using cameras at the edge who want to use more sophisticated computer vision, but might not have the most modern equipment to do that. “I’m excited to announce today the launch of the AWS Panorama Appliance which is a new hardware appliance [that allows] organizations to add computer vision to existing on premises smart cameras,” Jassy told AWS re:Invent today.
It also announced a Panorama SDK to help hardware vendors build smarter cameras based on Panorama.
All of these services are designed to give industrial companies access to sophisticated cloud and machine learning technology at whatever level they require, depending on where they are in their technology journey.
AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outpost service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.
In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.
The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.
As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail in building their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. For that reason, Jassy argues, none of the existing solutions from other vendors got any traction (though AWS’s competitors would surely deny this).
The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.
With Outpost, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.
When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016, the company wanted to provide product information and customer information for business users — not just developers.
At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.
Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.
“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”
That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.
“The way Q works. Type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X?’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”
It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”
At AWS re:Invent today, Andy Jassy announced DevOps Guru, a new tool to help the operations side of DevOps teams find issues that could be having an impact on application performance. Consider it the sibling of CodeGuru, the service the company announced last year to find issues in your code before you deploy.
It works in a similar fashion using machine learning to find issues on the operations side of the equation. “I’m excited to launch a new service today called Amazon DevOps Guru, which is a new service that uses machine learning to identify operational issues long before they impact customers,” Jassy said today.
The way it works is that it collects and analyzes data from application metrics, logs, and events “to identify behavior that deviates from normal operational patterns,” the company explained in the blog post announcing the new service.
This service essentially gives AWS a product that would be competing with companies like Sumo Logic, DataDog or Splunk by providing deep operational insight on problems that could be having an impact on your application such as misconfigurations or resources that are over capacity.
When it finds a problem, the service can send an SMS, Slack message or other communication to the team and provides recommendations on how to fix the problem as quickly as possible.
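Those notifications route through SNS topics, which is where the SMS or Slack hop happens; a minimal boto3 sketch, assuming an existing topic (the ARN is illustrative):

```python
import boto3

guru = boto3.client("devops-guru")

# Point DevOps Guru at an SNS topic; SMS, email or a Slack-bridging
# subscription on that topic handles the final delivery
guru.add_notification_channel(
    Config={"Sns": {"TopicArn": "arn:aws:sns:us-east-1:123456789012:ops-alerts"}}
)

# Pull the currently open, reactively detected issues and their recommendations
insights = guru.list_insights(StatusFilter={"Ongoing": {"Type": "REACTIVE"}})
```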
What’s more, you pay for the data analyzed by the service, rather than a monthly fee. The company says this means that there is no upfront cost or commitment involved.
AWS launched a new service today, Amazon SageMaker Data Wrangler, that makes it easier for data scientists to prepare their data for machine learning training. In addition, the company is also launching SageMaker Feature Store, available in the SageMaker Studio, a new service that makes it easier to name, organize, find and share machine learning features.
AWS is also launching SageMaker Pipelines, a new service that’s integrated with the rest of the platform and that provides a CI/CD service for machine learning to create and automate workflows, as well as an audit trail for model components like training data and configurations.
As AWS CEO Andy Jassy pointed out in his keynote at the company’s re:Invent conference, data preparation remains a major challenge in the machine learning space. Users have to write the queries and code to get the data from their data stores first, then write the queries to transform that data and combine features as necessary. All of that is work that doesn’t actually focus on building the models but on the infrastructure of building models.
Data Wrangler comes with over 300 pre-configured data transformations built in that help users convert column types or impute missing data with mean or median values. There are also some built-in visualization tools to help identify potential errors, as well as tools for checking for inconsistencies in the data and diagnosing them before the models are deployed.
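Those transforms are configured point-and-click in Studio rather than in code, but conceptually each one is the kind of step teams write by hand today. A rough pandas equivalent of two of them, mean imputation and a type fix (the dataset and column names are illustrative):

```python
import pandas as pd

df = pd.read_csv("customers.csv")  # illustrative dataset

# Impute missing values with the column mean (median works the same way)
df["income"] = df["income"].fillna(df["income"].mean())

# Coerce a mis-typed column to numeric, surfacing bad rows as NaN
df["age"] = pd.to_numeric(df["age"], errors="coerce")
```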
All of these workflows can then be saved in a notebook or as a script so that teams can replicate them — and be used in SageMaker Pipelines to automate the rest of the workflow, too.
It’s worth noting that quite a few startups are working on the same problem. Wrangling machine learning data, after all, is one of the most common problems in the space. For the most part, though, companies still build their own tools, and as usual, that makes this area ripe for a managed service.
AWS announced some big updates to its Lambda serverless function service today. For starters, functions can now use up to 10GB of memory and 6 vCPUs (virtual CPUs). This will allow developers building more compute-intensive functions to get the resources they need.
“Starting today, you can allocate up to 10 GB of memory to a Lambda function. This is more than a 3x increase compared to previous limits. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. That means you can now have access to up to 6 vCPUs in each execution environment,” the company wrote in a blog post announcing the new capabilities.
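Because CPU scales linearly with memory, raising the memory setting is also how you ask for more vCPUs; a minimal boto3 sketch (the function name is illustrative):

```python
import boto3

lam = boto3.client("lambda")

# 10,240 MB is the new ceiling; CPU scales with it, topping out at 6 vCPUs
lam.update_function_configuration(
    FunctionName="my-heavy-function",
    MemorySize=10240,
)
```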
Serverless computing doesn’t mean there are no servers. It means that developers no longer have to worry about the compute, storage and memory requirements because the cloud provider — in this case, AWS — takes care of it for them, freeing them up to just code the application instead of deploying resources.
Today’s announcement, combined with support for the AVX2 instruction set, means that developers can use this approach for more sophisticated workloads like machine learning, gaming and even high-performance computing.
One of the beauties of this approach is that in theory you can save money because you aren’t paying for resources you aren’t using. You are only paying each time the application requires a set of resources and no more. To make this an even bigger advantage, the company also changed its billing granularity: “Starting today, we are rounding up duration to the nearest millisecond with no minimum execution time,” it announced in a blog post on the new pricing approach.
Finally, the company announced container image support for Lambda functions. “To help you with that, you can now package and deploy Lambda functions as container images of up to 10 GB in size. In this way, you can also easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data intensive workloads,” the company wrote in a blog post announcing the new capability.
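Deploying from an image means pointing the function at a container in ECR instead of a zip archive; a minimal boto3 sketch (the account, repository and role are illustrative):

```python
import boto3

lam = boto3.client("lambda")

lam.create_function(
    FunctionName="ml-inference",
    Role="arn:aws:iam::123456789012:role/lambda-exec",  # illustrative execution role
    PackageType="Image",  # container image instead of a zip package
    Code={"ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ml-inference:latest"},
)
```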
All of these announcements in combination mean that you can now use Lambda functions for more intensive operations than you could previously, and the new billing approach should lower your overall spending as you make that transition to the new capabilities.
AWS today announced a new database product that is clearly meant to go after Microsoft’s SQL Server and make it easier — and cheaper — for SQL Server users to migrate to the AWS cloud. The new service is Babelfish for Aurora PostgreSQL. The tagline AWS CEO Andy Jassy used for this service in his re:Invent keynote today is probably telling: “Stop paying for SQL Server licenses you don’t need.” And to show how serious it is about this, the company is even open-sourcing the tool.
What Babelfish does is provide a translation layer for SQL Server’s proprietary SQL dialect (T-SQL) and communications protocol so that businesses can switch to AWS’ Aurora relational database at will (though they’ll still have to migrate their existing data). Beyond the dialect itself, it provides translations for SQL commands, cursors, catalog views, data types, triggers, stored procedures and functions.
The promise here is that companies won’t have to replace their database drivers or rewrite and verify their database requests to make this transition.
“We believe Babelfish stands out because it’s not another migration service, as useful as those can be. Babelfish enables PostgreSQL to understand database requests—both the command and the protocol—from applications written for Microsoft SQL Server without changing libraries, database schema, or SQL statements,” AWS’s Matt Asay writes in today’s announcement. “This means much faster ‘migrations’ with minimal developer effort. It’s also centered on ‘correctness,’ meaning applications designed to use SQL Server functionality will behave the same on PostgreSQL as they would on SQL Server.”
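Because Babelfish speaks SQL Server’s wire protocol, the promise is that an existing client connects as if nothing changed; a minimal sketch with pyodbc, assuming a Babelfish-enabled Aurora cluster listening on the usual TDS port (the endpoint and credentials are illustrative):

```python
import pyodbc  # the same SQL Server driver the application already uses

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=sales;UID=admin;PWD=example-password"
)

# T-SQL syntax, answered by PostgreSQL behind the translation layer
cur = conn.cursor()
cur.execute("SELECT TOP 10 * FROM dbo.orders ORDER BY order_date DESC")
```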
PostgreSQL, AWS rightly points out, is one of the most popular open-source databases in the market today. A lot of companies want to migrate their relational databases to it — or at least use it in conjunction with their existing databases. This new service is going to make that significantly easier.
The open-source Babelfish project will launch in 2021 and will be available on GitHub under the Apache 2.0 license.
“It’s still true that the overwhelming majority of relational databases are on-premise,” AWS CEO Andy Jassy said. “Customers are fed up with and sick of incumbents.” As is tradition at re:Invent, Jassy also got a few swipes at Oracle into his keynote, but the real target of the products the company is launching in the database area today is clearly Microsoft.
Today at AWS re:Invent, Andy Jassy talked a lot about how companies are making a big push to the cloud, but today’s container-focused announcements gave a big nod to the data center as the company announced ECS Anywhere and EKS Anywhere, both designed to let you run these services on-premises as well as in the cloud.
These two services, ECS for generalized container orchestration and EKS for orchestration focused on Kubernetes, will let customers use these popular AWS services on premises. Jassy said some customers still want the same tools they use in the cloud on prem, and this is designed to give that to them.
Speaking of ECS, he said, “I still have a lot of my containers that I need to run on premises as I’m making this transition to the cloud, and [these] people really want it to have the same management and deployment mechanisms that they have in AWS also on premises and customers have asked us to work on this. And so I’m excited to announce two new things to you. The first is the launch, or the announcement, of Amazon ECS Anywhere, which lets you run ECS in your own data center,” he told the re:Invent audience.
He said it gives you the same AWS APIs and cluster configuration management pieces. This will work the same for EKS, allowing a single management methodology regardless of where you are using the service.
While it was at it, the company also announced it was open sourcing EKS Distro, the Kubernetes distribution underlying its managed EKS service. The idea behind these moves is to give customers as much flexibility as possible, recognizing, as Microsoft, IBM and Google have been saying, that we live in a multi-cloud and hybrid world and people aren’t moving everything to the cloud right away.
In fact, in his opening remarks, Jassy stated that even now, in 2020, just 4% of worldwide IT spend is on the cloud. That means there’s money to be made selling services on premises, and that’s what these services will do.
At its annual re:Invent developer conference, AWS today announced the launch of AWS Trainium, the company’s next-gen custom chip dedicated to training machine learning models. The company promises that it can offer higher performance than any of its competitors in the cloud, with support for TensorFlow, PyTorch and MXNet.
It will be available as EC2 instances and inside Amazon SageMaker, the company’s machine learning platform.
New instances based on these custom chips will launch next year.
The main arguments for these custom chips are speed and cost. AWS promises 30% higher throughput and 45% lower cost-per-inference compared to the standard AWS GPU instances.
In addition, AWS is also partnering with Intel to launch Habana Gaudi-based EC2 instances for machine learning training as well. Coming next year, these instances promise to offer up to 40% better price/performance compared to the current set of GPU-based EC2 instances for machine learning. These chips will support TensorFlow and PyTorch.
These new chips will make their debut in the AWS cloud in the first half of 2021.
Both of these new offerings complement AWS Inferentia, which the company launched at last year’s re:Invent. Inferentia, also built on a custom chip, is the inferencing counterpart to these training parts.
Trainium, it’s worth noting, will use the same SDK as Inferentia.
“While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets,” the AWS team writes. “This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute from scaling training workloads to deploying accelerated inference.”
AWS today opened its re:Invent conference with a surprise announcement: the company is bringing the Mac mini to its cloud. These new EC2 Mac instances, as AWS calls them, are now available in preview. They won’t come cheap, though.
The target audience here — and the only one AWS is targeting for now — is developers who want cloud-based build and testing environments for their Mac and iOS apps. But it’s worth noting that with remote access, you get a fully-featured Mac mini in the cloud, and I’m sure developers will find all kinds of other use cases for this as well.
Given the recent launch of the M1 Mac minis, it’s worth pointing out that the hardware AWS is using — at least for the time being — are i7 machines with six physical and 12 logical cores and 32 GB of memory. Using the Mac’s built-in networking options, AWS connects them to its Nitro System for fast network and storage access. This means you’ll also be able to attach AWS block storage to these instances, for example.
Unsurprisingly, the AWS team is also working on bringing Apple’s new M1 Mac minis into its data centers. The current plan is to roll this out “early next year,” AWS tells me, and definitely within the first half of 2021. Both AWS and Apple believe that the need for Intel-powered machines won’t go away anytime soon, though, especially given that a lot of developers will want to continue to run their tests on Intel machines for the foreseeable future.
David Brown, AWS’s vice president of EC2, tells me that these are completely unmodified Mac minis. AWS only turned off Wi-Fi and Bluetooth. It helps, Brown said, that the minis fit nicely into a 1U rack.
“You can’t really stack them on shelves — you want to put them in some sort of service sled [and] it fits very well into a service sled and then our cards and all the various things we have to worry about, from an integration point of view, fit around it and just plug into the Mac mini through the ports that it provides,” Brown explained. He admitted that this was obviously a new challenge for AWS. The only way to offer this kind of service is to use Apple’s hardware, after all.
It’s also worth noting that AWS is not virtualizing the hardware. What you’re getting here is full access to your own device that you’re not sharing with anybody else. “We wanted to make sure that we support the Mac Mini that you would get if you went to the Apple store and you bought a Mac mini,” Brown said.
Unlike with other EC2 instances, whenever you spin up a new Mac instance, you have to pre-pay for the first 24 hours to get started. After those first 24 hours, prices are by the second, just like with any other instance type AWS offers today.
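Because these are physical, unshared machines, they run on EC2 Dedicated Hosts, which is also where that 24-hour minimum comes from; a minimal boto3 sketch of allocating a host and launching onto it (the zone and AMI ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Mac instances run on Dedicated Hosts: one physical Mac mini per host
host = ec2.allocate_hosts(
    InstanceType="mac1.metal", AvailabilityZone="us-east-1a", Quantity=1
)

ec2.run_instances(
    InstanceType="mac1.metal",
    ImageId="ami-0123456789abcdef0",  # a macOS Mojave or Catalina AMI (placeholder)
    MinCount=1, MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host["HostIds"][0]},
)
```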
AWS will charge $1.083 per hour, billed by the second. That’s just under $26 to spin up a machine and run it for 24 hours. That’s quite a lot more than what some of the small Mac mini cloud providers are charging (we’re generally talking about $60 or less per month for their entry-level offerings and around two to three times as much for a comparable i7 machine with 32GB of RAM).
Until now, Mac mini hosting was a small niche in the hosting market, though it has its fair number of players, with the likes of MacStadium, MacinCloud, MacWeb and Mac Mini Vault vying for their share of the market.
With this new offering from AWS, they are now facing a formidable competitor, though they can still compete on price. AWS, however, argues that it can give developers access to all of the additional cloud services in its portfolio, which sets it apart from all of the smaller players.
“The speed that things happen at [other Mac mini cloud providers] and the granularity that you can use those services at is not as fine as you get with a large cloud provider like AWS,” Brown said. “So if you want to launch a machine, it takes a few days to provision and somebody puts a machine in a rack for you and gives you an IP address to get to it and you manage the OS. And normally, you’re paying for at least a month — or a longer period of time to get a discount. What we’ve done is you can literally launch these machines in minutes and have a working machine available to you. If you decide you want 100 of them, 500 of them, you just ask us for that and we’ll make them available. The other thing is the ecosystem. All those other 200-plus AWS services that you’re now able to utilize together with the Mac mini is the other big difference.”
Brown also stressed that Amazon makes it easy for developers to use different machine images, with the company currently offering images for macOS Mojave and Catalina, with Big Sur support coming “at some point in the future.” And developers can obviously create their own images with all of the software they need so they can reuse them whenever they spin up a new machine.
“Pretty much every one of our customers today has some need to support an Apple product and the Apple ecosystem, whether it’s iPhone, iPad or Apple TV, whatever it might be. They’re looking for that build use case,” Brown said. “And so the problem we’ve really been focused on solving is customers that say, ‘hey, I’ve moved all my server-side workloads to AWS, I’d love to be able to move some of these build workflows, because I still have some Mac minis in a data center or in my office that I have to maintain. I’d love that just to be on AWS.’ ”
AWS’s marquee launch customers for the new service are Intuit, Ring and mobile camera app FiLMiC.
“EC2 Mac instances, with their familiar EC2 interfaces and APIs, have enabled us to seamlessly migrate our existing iOS and macOS build-and-test pipelines to AWS, further improving developer productivity,” said Pratik Wadher, vice president of Product Development at Intuit. “We‘re experiencing up to 30% better performance over our data center infrastructure, thanks to elastic capacity expansion, and a high availability setup leveraging multiple zones. We’re now running around 80% of our production builds on EC2 Mac instances, and are excited to see what the future holds for AWS innovation in this space.”
The new Mac instances are now available in a number of AWS regions. These include US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland) and Asia Pacific (Singapore), with other regions to follow soon.