
Cloudflare introduces free digital waiting rooms for any organizations distributing COVID-19 vaccines

By Darrell Etherington

Web infrastructure company Cloudflare is releasing a new tool today that aims to give health agencies and organizations globally tasked with rolling out COVID-19 vaccines a way to maintain a fair, equitable and transparent digital queue – completely free of charge. The company’s ‘Project Fair Shot’ initiative will make its new Cloudflare Waiting Room offering free to any organization that qualifies, essentially providing a way for future vaccine recipients to register and gain access to a clear and constantly updated view of where they are in line to receive the vaccine.

“The wife of one of Cloudflare’s executives in our Austin office was trying to register her parents for the COVID-19 vaccine program there,” explained Cloudflare CEO Matthew Prince via email. “The registration site kept crashing. She said to her husband: why doesn’t Cloudflare build a queuing feature to help vaccine sites? As it happened, we had exactly such a feature under development and scheduled to be launched in early February.”

After realizing how urgently a tool like this was needed to alleviate the many infrastructure challenges that come up when you’re trying to vaccinate a global population against a viral threat as quickly as possible, Cloudflare moved up its release timetable and devoted additional resources to the project.

“We talked to the team about moving up the scheduled launch of our Waiting Room feature,” Prince added. “They worked around the clock because they recognized how important helping with vaccine delivery was. These are the sorts of projects that really drive our team: when we can use our technical expertise and infrastructure to solve problems with broad, positive impact.”

On the technical side, Cloudflare Waiting Room is simple to implement, according to the company, and can be added to any registration website built on the company’s existing content delivery network without any engineering or coding knowledge required. Visitors to the site can register and will receive a confirmation that they’re in line, and then will receive a follow-up directing them to a sign-up page for the organization administering their vaccine when it’s their turn. Further configuration options allow Waiting Room operators to offer wait time estimates to registrants, as well as provide additional alerts when their turn is nearing (though that functionality is coming in a future update).
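
For organizations that manage Cloudflare through its API rather than the dashboard, the sketch below shows roughly what creating a waiting room could look like in Python. It is illustrative only: the zone ID, token and threshold values are placeholders, and the endpoint and field names are my reading of Cloudflare’s public Waiting Room API at the time, so verify them against the current documentation.

```python
import requests

# Placeholder credentials: substitute your own zone ID and API token.
ZONE_ID = "your-zone-id"
API_TOKEN = "your-api-token"

# Field names (host, path, total_active_users, new_users_per_minute) are
# assumptions based on Cloudflare's Waiting Room API docs; verify before use.
payload = {
    "name": "vaccine_registration",
    "host": "signup.example.org",        # hypothetical registration site
    "path": "/register",
    "total_active_users": 500,           # visitors allowed on the site at once
    "new_users_per_minute": 200,         # rate at which the queue admits people
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/waiting_rooms",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())
```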

As Prince mentioned, Waiting Room was already on Cloudflare’s product roadmap, and was originally intended for other high-demand, limited-supply situations: think must-have concert tickets, or the latest hot sneaker release. But the Fair Shot program will provide it totally free to those organizations that need it, whereas it would otherwise have been a commercial product. Interested parties can sign up at Cloudflare’s registration page to get on the waitlist for availability.

“With Project Fair Shot we stand ready to help ensure everyone who is eligible can get equitable access to the COVID-19 vaccines and we, along with the rest of humanity, look forward to putting this disease behind us,” Prince explained.

Cloud infrastructure startup CloudNatix gets $4.5 million seed round led by DNX Ventures

By Catherine Shu

CloudNatix founder and chief executive officer Rohit Seth. Image Credits: CloudNatix

CloudNatix, a startup that provides infrastructure for businesses with multiple cloud and on-premise operations, announced it has raised $4.5 million in seed funding. The round was led by DNX Ventures, an investment firm that focuses on United States and Japanese B2B startups, with participation from Cota Capital. Existing investors Incubate Fund, Vela Partners and 468 Capital also contributed.

The company also added DNX Ventures managing partner Hiro Rio Maeda to its board of directors.

CloudNatix was founded in 2018 by chief executive officer Rohit Seth, who previously held lead engineering roles at Google. The company’s platform helps businesses reduce IT costs by analyzing their infrastructure spending and then using automation to make IT operations across multiple clouds more efficient. The company’s typical customer spends between $500,000 and $50 million on infrastructure each year, and uses at least one cloud service provider in addition to on-premise networks.

Built on open-source software like Kubernetes and Prometheus, CloudNatix works with all major cloud providers and on-premise networks. For DevOps teams, it helps configure and manage infrastructure that runs both legacy and modern cloud-native applications, and enables them to transition more easily from on-premise networks to cloud services.

CloudNatix competes most directly with VMware and Red Hat OpenShift. But both of those services are limited to their base platforms, while CloudNatix’s advantage is that it is agnostic to base platforms and cloud service providers, Seth told TechCrunch.

The company’s seed round will be used to scale its engineering, customer support and sales teams.


An argument against cloud-based applications

By Walter Thompson
Michael Huth Contributor
Professor Michael Huth (Ph.D.) is co-founder and CTO of Xayn and teaches at Imperial College London. His research focuses on cybersecurity, cryptography and mathematical modeling, as well as security and privacy in machine learning.

In the last decade we’ve seen massive changes in how we consume and interact with our world. The Yellow Pages is a concept that has to be meticulously explained with an impertinent scoff at our own age. We live within our smartphones, within our apps.

While we thrive with the information of the world at our fingertips, we casually throw away any semblance of privacy in exchange for the convenience of this world.

This line we straddle has been drawn with recklessness and calculation by big tech companies over the years as we’ve come to terms with what app manufacturers, large technology companies, and app stores demand of us.

Our private data into the cloud

According to Symantec, 89% of our Android apps and 39% of our iOS apps require access to private information. This risky use sends our data to cloud servers, to both amplify the performance of the application (think about the data needed for fitness apps) and store data for advertising demographics.

While large data companies would argue that data is not held for long, or not used in a nefarious manner, when we use the apps on our phones, we create an undeniable data trail. Companies generally keep data on the move, and servers around the world are constantly keeping data flowing, further away from its source.

Once we accept the terms and conditions we rarely read, our private data is no longer such. It is in the cloud, a term which has eluded concrete understanding throughout the years.

A distinction between cloud-based apps and cloud computing must be addressed. Cloud computing at an enterprise level, while argued against ad nauseam over the years, is generally considered to be a secure and cost-effective option for many businesses.

Even back in 2010, Microsoft said 70% of its team was working on things that were cloud-based or cloud-inspired, and the company projected that number would rise to 90% within a year. That was before we started relying on the cloud to store our most personal, private data.

Cloudy with a chance of confusion

To add complexity to this issue, there are literally apps to protect your privacy from other apps on your smart phone. Tearing more meat off the privacy bone, these apps themselves require a level of access that would generally raise eyebrows if it were any other category of app.

Consider the scenario where you use a key to encrypt data, but then you need to encrypt that key to make it safe. Ultimately, you end up with the most important keys not being encrypted. There is no win-win here. There is only finding a middle ground of contentment in which your apps find as much purchase in your private data as your doctor finds in your medical history.

The cloud is not tangible, nor is it something we as givers of the data can access. Each company has its own cloud servers, each one collecting similar data. But we have to consider why we give up this data. What are we getting in return? We are given access to applications that perhaps make our lives easier or better, but essentially are a service. It’s this service end of the transaction that must be altered.

App developers have to find a method of service delivery that does not require storage of personal data. There are two sides to this. The first is creating algorithms that can function on a local basis, rather than centralized and mixed with other data sets. The second is a shift in the general attitude of the industry, one in which free services are provided for the cost of your personal data (which ultimately is used to foster marketing opportunities).

Of course, asking this of any big data company that thrives on its data collection and marketing process is untenable. So the change has to come from new companies, willing to risk offering cloud privacy while still providing a service worth paying for. Because it wouldn’t be free. It cannot be free, as free is what got us into this situation in the first place.

Clearing the clouds of future privacy

What we can do right now is at least take a stance of personal vigilance. While there is some personal data that we cannot stem the flow of onto cloud servers around the world, we can at least limit the use of frivolous apps that collect too much data. For instance, games should never need access to our contacts, to our camera and so on. Everything within our phone is connected, which is why Facebook seems to know everything about us, down to what’s in our bank account.

This sharing takes place on our phone and at the cloud level, and is something we need to consider when accepting the terms on a new app. When we sign into apps with our social accounts, we are just assisting the further collection of our data.

The cloud isn’t some omnipotent enemy here, but it is the excuse and tool that allows the mass collection of our personal data.

The future is likely one in which devices and apps finally become self-sufficient and localized, enabling users to maintain control of their data. The way we access apps and data in the cloud will change as well, as we’ll demand a functional process that forces a methodology change in service provisions. The cloud will be relegated to public data storage, leaving our private data on our devices where it belongs. We have to collectively push for this change, lest we lose whatever semblance of privacy in our data we have left.

Stacklet raises $18M for its cloud governance platform

By Frederic Lardinois

Stacklet, a startup that is commercializing the Cloud Custodian open-source cloud governance project, today announced that it has raised an $18 million Series A funding round. The round was led by Addition, with participation from Foundation Capital and new individual investor Liam Randall, who is joining the company as VP of business development. Addition and Foundation Capital also invested in Stacklet’s seed round, which the company announced last August. This new round brings the company’s total funding to $22 million.

Stacklet helps enterprises manage their data governance stance across different clouds, accounts, policies and regions, with a focus on security, cost optimization and regulatory compliance. The service offers its users a set of pre-defined policy packs that encode best practices for access to cloud resources, though users can obviously also specify their own rules. In addition, Stacklet offers a number of analytics functions around policy health and resource auditing, as well as a real-time inventory and change management logs for a company’s cloud assets.
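
Cloud Custodian itself expresses these rules as declarative policies rather than scripts, but purely as an illustration of the kind of check a policy pack encodes, here is a minimal boto3 sketch that flags S3 buckets without default encryption (the specific rule is my example, not a Stacklet policy):

```python
import boto3
from botocore.exceptions import ClientError

# Illustrative governance check: flag S3 buckets lacking default encryption.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"non-compliant: {name} has no default encryption")
        else:
            raise
```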

The company was co-founded by Travis Stanfield (CEO) and Kapil Thangavelu (CTO). Both bring a lot of industry expertise to the table. Stanfield spent time as an engineer at Microsoft and leading DealerTrack Technologies, while Thangavelu worked at Canonical and most recently on Amazon’s AWSOpen team. Thangavelu is also one of the co-creators of the Cloud Custodian project, which was first incubated at Capital One, where the two co-founders met, and is now a sandbox project under the Cloud Native Computing Foundation’s umbrella.

“When I joined Capital One, they had made the executive decision to go all-in on cloud and close their data centers,” Thangavelu told me. “I got to join on the ground floor of that movement and Custodian was born as a side project, looking at some of the governance and security needs that large regulated enterprises have as they move into the cloud.”

As companies have sped up their move to the cloud during the pandemic, the need for products like Stacklet has also increased. The company isn’t naming most of its customers, but one of them is FICO, among a number of other larger enterprises. Stacklet isn’t purely focused on the enterprise, though. “Once the cloud infrastructure becomes — for a particular organization — large enough that it’s not knowable in a single person’s head, we can deliver value for you at that time and certainly, whether it’s through the open source or through Stacklet, we will have a story there.” The Cloud Custodian open-source project is already seeing serious use among large enterprises, though, and Stacklet obviously benefits from that as well.

“In just 8 months, Travis and Kapil have gone from an idea to a functioning team with 15 employees, signed early Fortune 2000 design partners and are well on their way to building the Stacklet commercial platform,” Foundation Capital’s Sid Trivedi said. “They’ve done all this while sheltered in place at home during a once-in-a-lifetime global pandemic. This is the type of velocity that investors look for from an early-stage company.”

Looking ahead, the team plans to use the new funding to continue to develop the product, which should be generally available later this year, expand both its engineering and its go-to-market teams, and continue to grow the open-source community around Cloud Custodian.

Is there still room in the cloud-security market?

By Walter Thompson
Kelley Mak Contributor
Kelley Mak is a principal at Work-Bench, where he focuses on early-stage enterprise technology investments in areas including security, cloud and developer tools.

While the initial shock of the COVID-19 pandemic has subsided for businesses, one of its main legacies is how it ushered in a tidal wave of accelerated digital transformation.

A recent Twilio survey revealed that 97% of global enterprise decision-makers believe the pandemic sped up their company’s digital transformation, and on top of that, 79% of the respondents said that COVID-19 increased the budget for digital transformation.

As technology becomes the driving force of competitive differentiation, cloud plays a key role in making this a reality and impacts everything from data and analytics to the modern workplace. Cloud-based infrastructure promises more flexibility, scale and cost-effectiveness, as well as enabling more agile application development and the ability to keep up with service demand.

What’s clear is that despite shortfalls in security, innovation in cloud and infrastructure will charge ahead.

Even with all of the hype and excitement around cloud’s potential, it is still early days. In his recent keynote at AWS re:Invent, the AWS CEO Andy Jassy mentioned that spending on cloud computing is still only 4% of the overall IT market. And a Barclays CIO survey found that enterprises have 30% of their workloads running in the public cloud, with the expectation to increase to 39% in 2021.

It’s become clear that the movement to cloud has its barriers and that large enterprises are often skittish to make the jump. Flexera’s State of the Cloud 2020 report outlined some of these top cloud challenges, citing security as #1. This has been widely apparent in conversations that I’ve had with Fortune 500 CISOs and security teams, who are wary of the shift from their current state of security operations. Some of the major concerns brought up include:

  • No longer your own master. When working with the public cloud providers, companies must relinquish control over some aspects of back-end management. This is tough for large enterprises with a history of customizing products, because they can’t completely tailor the environment to their liking and are limited to what’s on the cloud service provider’s platform.
  • Lack of standardization. Each cloud provider has its own solutions and its own intricacies. Add other pitfalls, like an unknown cadence of updates, and there is an opaqueness to interoperability; policies can’t be uniformly applied across environments.
  • Requires a new skill set. Lack of resources/expertise ranks among the top challenges for enterprises. A recent report on challenges in cloud transformation found that 86% of IT decision-makers believe a shortage of talent will slow down 2020 cloud projects.

Vantage makes managing AWS easier

By Frederic Lardinois

Vantage, a new service that makes managing AWS resources and their associated spend easier, is coming out of stealth today. The service offers its users an alternative to the complex AWS console with support for most of the standard AWS services, including EC2 instances, S3 buckets, VPCs, ECS and Fargate and Route 53 hosted zones.

The company’s founder, Ben Schaechter, previously worked at AWS and DigitalOcean (and before that, he worked on Crunchbase, too). Yet while DigitalOcean showed him how to build a developer experience for individuals and small businesses, he argues that the underlying services and hardware simply weren’t as robust as those of the hyperclouds. AWS, on the other hand, offers everything a developer could want (and likely more), but the user experience leaves a lot to be desired.

Image Credits: Vantage

“The idea was really born out of ‘what if we could take the user experience of DigitalOcean and apply it to the three public cloud providers, AWS, GCP and Azure,’” Schaechter told me. “We decided to start just with AWS because the experience there is the roughest and it’s the largest player in the market. And I really think that we can provide a lot of value there before we do GCP and Azure.”

The focus for Vantage is on the developer experience and cost transparency. Schaechter noted that some of its users describe it as being akin to a “Mint for AWS.” To get started, you give Vantage a set of read permissions to your AWS services and the tool will automatically profile everything in your account. The service refreshes this list once per hour, but users can also refresh their lists manually.

Given that it’s often hard enough to know which AWS services you are actually using, that alone is a useful feature. “That’s the number one use case,” he said. “What are we paying for and what do we have?”

At the core of Vantage is what the team calls “views,” which show you which resources you are using. What is interesting here is that this is quite a flexible system that lets you build custom views to see, for example, which resources a given application uses across regions. Those may include Lambda, storage buckets, your subnet, code pipeline and more.
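
Vantage’s implementation isn’t public, but the read-only profiling described above maps onto the describe/list calls the AWS SDK already exposes. A minimal boto3 sketch of that idea (the handful of services covered here is arbitrary, and pagination is omitted for brevity):

```python
import boto3

# Read-only inventory sketch: enumerate a few resource types the way a
# profiling tool might, using nothing but describe/list permissions.
ec2 = boto3.client("ec2")
s3 = boto3.client("s3")
route53 = boto3.client("route53")

instances = [
    i["InstanceId"]
    for r in ec2.describe_instances()["Reservations"]
    for i in r["Instances"]
]
buckets = [b["Name"] for b in s3.list_buckets()["Buckets"]]
zones = [z["Name"] for z in route53.list_hosted_zones()["HostedZones"]]

print(f"EC2 instances: {len(instances)}")
print(f"S3 buckets: {len(buckets)}")
print(f"Route 53 hosted zones: {len(zones)}")
```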

On the cost-tracking side, Vantage currently only offers point-in-time costs, but Schaechter tells me that the team plans to add historical trends as well to give users a better view of their cloud spend.

Schaechter and his co-founder bootstrapped the company and he noted that before he wants to raise any money for the service, he wants to see people paying for it. Currently, Vantage offers a free plan, as well as paid “pro” and “business” plans with additional functionality.

Image Credits: Vantage 

The built environment will be one of tech’s next big platforms

By Jonathan Shieber

From the beginning, the plan for Sidewalk Labs (a subsidiary of Alphabet and — by extension — a relative of Google) to develop a $1.3 billion tech-enabled real estate project on the Toronto waterfront was controversial.

Privacy advocates had justified concerns about the Google-adjacent company’s ability to capture a near-total amount of data from the residents of the development or any city-dweller that wandered into its high-tech panopticon.

But Alphabet, Sidewalk Labs’ leadership and even Canada’s popular prime minister, Justin Trudeau, had high hopes for the project.

Startups working in real estate technology managed to nab a record $3.7 billion from investors in the first quarter of the year.

“Successful cities around the world are wrestling with the same challenges of growth, from rising costs of living that price out the middle class, to congestion and ever-longer commutes, to the challenges of climate change. Sidewalk Labs scoured the globe for the perfect place to create a district focused on solutions to these pressing challenges, and we found it on Toronto’s Eastern Waterfront — along with the perfect public-sector partner, Waterfront Toronto,” said Sidewalk Labs chief executive Dan Doctoroff, the former deputy mayor of New York, in a statement announcing the launch in 2017. “This will not be a place where we deploy technology for its own sake, but rather one where we use emerging digital tools and the latest in urban design to solve big urban challenges in ways that we hope will inspire cities around the world.”

From Sidewalk Labs’ perspective, the Toronto project would be an ideal laboratory that the company and the city of Toronto could use to explore the utility and efficacy of the latest and greatest new technologies meant to enhance city living and make it more environmentally sustainable.

The company’s stated goal, back in 2017 was “to create a place that encourages innovation around energy, waste and other environmental challenges to protect the planet; a place that provides a range of transportation options that are more affordable, safe and convenient than the private car; a place that embraces adaptable buildings and new construction methods to reduce the cost of housing and retail space; a place where public spaces welcome families to enjoy the outdoors day and night, and in all seasons; a place that is enhanced by digital technology and data without giving up the privacy and security that everyone deserves.”

From a purely engineering perspective, integrating these new technologies into a single site to be a test case made some sense. From a community development perspective, it was a nightmare. Toronto residents began to see the development as little more than a showroom for a slew of privacy-invading innovations that Sidewalk could then spin up into companies — or a space where startup companies could test their tech on a potentially unwitting population.

So when the economic implications of the global COVID-19 pandemic started to become clear back in March of this year, it seemed as good a time as any for Sidewalk Labs to shutter the project.

“[As] unprecedented economic uncertainty has set in around the world and in the Toronto real estate market, it has become too difficult to make the 12-acre project financially viable without sacrificing core parts of the plan we had developed together with Waterfront Toronto to build a truly inclusive, sustainable community,” Doctoroff said in a statement. “And so, after a great deal of deliberation, we concluded that it no longer made sense to proceed with the Quayside project.”

With a $50B run rate in reach, can anyone stop AWS?

By Ron Miller

AWS, Amazon’s flourishing cloud arm, has been growing at a rapid clip for more than a decade. An early public cloud infrastructure vendor, it has taken advantage of first-to-market status to become the most successful player in the space. In fact, one could argue that many of today’s startups wouldn’t have gotten off the ground without the formation of cloud companies like AWS giving them easy access to infrastructure without having to build it themselves.

In Amazon’s most-recent earnings report, AWS generated revenues of $11.6 billion, good for a run rate of more than $46 billion. That makes the next AWS milestone a run rate of $50 billion, something that could be in reach in less than two quarters if it continues its pace of revenue growth.

The good news for competing companies is that in spite of the market size and relative maturity, there is still plenty of room to grow.

The cloud division’s growth is slowing in percentage terms as it comes firmly up against the law of large numbers: AWS has to grow every quarter against an ever-larger revenue base. The result of this dynamic is that while AWS’ year-over-year growth rate is slowing over time — from 35% in Q3 2019 to 29% in Q3 2020 — the pace at which it is adding $10 billion chunks of annual revenue run rate is accelerating.

At the AWS re:Invent customer conference this year, AWS CEO Andy Jassy talked about the pace of change over the years, saying that it took the following number of months to grow its run rate by $10 billion increments:

  • 123 months ($0-$10 billion)
  • 23 months ($10 billion-$20 billion)
  • 13 months ($20 billion-$30 billion)
  • 12 months ($30 billion-$40 billion)

Image Credits: TechCrunch (data from AWS)

Extrapolating from the above trend, it should take AWS fewer than 12 months to scale from a run rate of $40 billion to $50 billion. Stating the obvious, Jassy said “the rate of growth in AWS continues to accelerate.” He also took the time to point out that AWS is now the fifth-largest enterprise IT company in the world, ahead of enterprise stalwarts like SAP and Oracle.
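
The arithmetic behind those figures is simple enough to check; here is a quick sketch using the numbers quoted above (the extrapolation is the same back-of-the-envelope trend reading, not a forecast model):

```python
# Run rate = most recent quarterly revenue, annualized.
quarterly_revenue = 11.6e9          # AWS Q3 2020 revenue
run_rate = quarterly_revenue * 4    # about $46.4 billion annual run rate
print(f"run rate: ${run_rate / 1e9:.1f}B")

# Months AWS took to add each successive $10B of run rate, per Jassy's keynote.
months_per_10b_step = [123, 23, 13, 12]
# The intervals keep shrinking, which is why the article expects the
# $40B -> $50B step to take fewer than 12 months.
```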

What’s amazing is that AWS achieved its scale so fast, not even existing until 2006. That growth rate makes us ask a question: Can anyone hope to stop AWS’ momentum?

The short answer is that it doesn’t appear likely.

Cloud market landscape

A good place to start is surveying the cloud infrastructure competitive landscape to see if there are any cloud companies that could catch the market leader. According to Synergy Research, AWS remains firmly in front, and it doesn’t look like any competitor could catch AWS anytime soon unless some market dynamic caused a drastic change.

Synergy Research Cloud marketshare leaders. Amazon is first, Microsoft is second and Google is third.

Image Credits: Synergy Research

With around a third of the market, AWS is the clear front-runner. Its closest and fiercest rival, Microsoft, has around 20%. To put that into perspective a bit, last quarter AWS had $11.6 billion in revenue compared to Microsoft’s $5.2 billion Azure result. While Microsoft’s equivalent cloud number is growing faster, at 47%, that growth rate, like AWS’, has begun to drop steadily as Azure gains market share and higher revenue and falls victim to the same law of large numbers.

Google expands its cloud with new regions in Chile, Germany and Saudi Arabia

By Frederic Lardinois

It’s been a busy year of expansion for the large cloud providers, with AWS, Azure and Google aggressively expanding their data center presence around the world. To cap off the year, Google Cloud today announced a new set of cloud regions, which will go live in the coming months and years. These new regions, which will all have three availability zones, will be in Chile, Germany and Saudi Arabia. That’s on top of the regions in Indonesia, South Korea and the U.S. (Las Vegas and Salt Lake City) that went live this year — and the upcoming regions in France, Italy, Qatar and Spain the company also announced over the course of the last twelve months.

Image Credits: Google

In total, Google currently operates 24 regions with 73 availability zones, not counting those it has announced but that aren’t live yet. While Microsoft Azure is well ahead of the competition in terms of the total number of regions (though some still lack availability zones), Google is now starting to pull even with AWS, which currently offers 24 regions with a total of 77 availability zones. Indeed, with its 12 announced regions, Google Cloud may actually soon pull ahead of AWS, which is currently working on six new regions.

The battleground may soon shift away from these large data centers, though, with a new focus on edge zones close to urban centers that are smaller than the full-blown data centers the large clouds currently operate but that allow businesses to host their services even closer to their customers.

All of this is a clear sign of how much Google has invested in its cloud strategy in recent years. For the longest time, after all, Google Cloud Platform lagged well behind its competitors. Only three years ago, Google Cloud offered only 13 regions, for example. And that’s on top of the company’s heavy investment in submarine cables and edge locations.

AWS expands on SageMaker capabilities with end-to-end features for machine learning

By Jonathan Shieber

Nearly three years after it was first launched, Amazon Web Services’ SageMaker platform has gotten a significant upgrade in the form of new features making it easier for developers to automate and scale each step of the process to build new automation and machine learning capabilities, the company said.

As machine learning moves into the mainstream, business units across organizations will find applications for automation,  and AWS is trying to make the development of those bespoke applications easier for its customers.

“One of the best parts of having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables,” said AWS vice president of machine learning, Swami Sivasubramanian. “Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug and run custom machine learning models with greater visibility, explainability, and automation at scale.”

Already companies like 3M, ADP, AstraZeneca, Avis, Bayer, Capital One, Cerner, Domino’s Pizza, Fidelity Investments, Lenovo, Lyft, T-Mobile, and Thomson Reuters are using SageMaker tools in their own operations, according to AWS.

The company’s new products include Amazon SageMaker Data Wrangler, which the company said was providing a way to normalize data from disparate sources so the data is consistently easy to use. Data Wrangler can also ease the process of grouping disparate data sources into features to highlight certain types of data. The Data Wrangler tool contains over 300 built-in data transformers that can help customers normalize, transform and combine features without having to write any code.

Amazon also unveiled the Feature Store, which allows customers to create repositories that make it easier to store, update, retrieve and share machine learning features for training and inference.

Another new tool that Amazon Web Services touted was its workflow management and automation toolkit, Pipelines. The Pipelines tech is designed to provide orchestration and automation features not dissimilar from traditional programming. Using pipelines, developers can define each step of an end-to-end machine learning workflow, the company said in a statement. Developers can use the tools to re-run an end-to-end workflow from SageMaker Studio using the same settings to get the same model every time, or they can re-run the workflow with new data to update their models.
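
AWS’s own examples use the SageMaker Python SDK; the sketch below shows, at a high level, what a one-step pipeline definition looked like in that SDK around the time of this announcement. The container image, IAM role and S3 path are placeholders, and a real pipeline would chain processing, training and model-registration steps:

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Placeholder estimator; in practice you would point at a real training image.
estimator = Estimator(
    image_uri="<training-image-uri>",
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://my-bucket/train/")},  # placeholder path
)

pipeline = Pipeline(name="demo-pipeline", steps=[train_step], sagemaker_session=session)
pipeline.upsert(role_arn=role)  # register or update the pipeline definition
pipeline.start()                # kick off one tracked execution
```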

To address the longstanding issues with data bias in artificial intelligence and machine learning models, Amazon launched SageMaker Clarify. First announced today, this tool allegedly provides bias detection across the machine learning workflow, so developers can build with an eye towards better transparency on how models were set up. There are open source tools that can do these tests, Amazon acknowledged, but the tools are manual and require a lot of lifting from developers, according to the company.

Other products designed to simplify the machine learning application development process include SageMaker Debugger, which enables developers to train models faster by monitoring system resource utilization and alerting developers to potential bottlenecks; Distributed Training, which makes it possible to train large, complex, deep learning models faster than current approaches by automatically splitting data across multiple GPUs to accelerate training times; and SageMaker Edge Manager, a machine learning model management tool for edge devices, which allows developers to optimize, secure, monitor and manage models deployed on fleets of edge devices.

Last but not least, Amazon unveiled SageMaker JumpStart, which provides developers with a searchable interface to find algorithms and sample notebooks so they can get started on their machine learning journey. The company said it would give developers new to machine learning the option to select several pre-built machine learning solutions and deploy them into SageMaker environments.

Sidewalk Infrastructure Partners looks to make CA power grids more reliable with a $100 million investment

By Jonathan Shieber

Sidewalk Infrastructure Partners, the company that spun out of Alphabet’s Sidewalk Labs to fund, develop, and own the next generation of infrastructure, has unveiled its latest project — Resilia, which focuses on upgrading the efficiency and reliability of power grids.

Through a $20 million equity investment in the startup OhmConnect, and an $80 million commitment to develop a demand response program leveraging OhmConnect’s technology and services across the state of California, Sidewalk Infrastructure Partners intends to plant a flag for demand-response technologies as a key pathway to ensuring stable energy grids across the country.

“We’re creating a virtual power plant,” said Sidewalk Infrastructure Partners co-CEO, Jonathan Winer. “With a typical power plant … it’s project finance, but for a virtual power plant… We’re basically going to subsidize the rollout of smart devices.”

The idea that people will respond to signals from the grid isn’t a new one, as Winer himself acknowledged in an interview. But the approach that Sidewalk Infrastructure Partners is taking, alongside OhmConnect, to roll out the incentives to residential customers through a combination of push notifications and payouts, is novel. “The first place people focused is on commercial and industrial buildings,” he said. 

What drew Sidewalk to the OhmConnect approach was the knowledge of the end consumer that OhmConnect’s management team brought to the table. The company’s chief technology officer is the former chief technology officer of Zynga, Winer noted.

“What’s cool about the OhmConnect platform is that it empowers participation,” Winer said. “Anyone can enroll in these programs. If you’re an OhmConnect user and there’s a blackout coming, we’ll give you five bucks if you turn down your thermostat for the next two hours.”

Illustration of Sidewalk Infrastructure Partners Resilia Power Plant. Image Credit: Sidewalk Infrastructure Partners

The San Francisco-based demand-response company already has 150,000 users on its platform, and has paid out something like $1 million to its customers during the brownouts and blackouts that have roiled California’s electricity grid over the past year.

The first collaboration between OhmConnect and Sidewalk Infrastructure Partners under the Resilia banner will be what the companies are calling a “Resi-Station” — a 550 megawatt capacity demand response program that will use smart devices to power targeted energy reductions.

At full scale, the companies said that the project will be the largest residential virtual power plant in the world. 

“OhmConnect has shown that by linking together the savings of many individual consumers, we can reduce stress on the grid and help prevent blackouts,” said OhmConnect CEO Cisco DeVries. “This investment by SIP will allow us to bring the rewards of energy savings to hundreds of thousands of additional Californians – and at the same time build the smart energy platform of the future.” 

California’s utilities need all the help they can get. Heat waves and rolling blackouts spread across the state as it confronted some of its hottest temperatures over the summer. California residents already pay among the highest residential power prices in the country, at 21 cents per kilowatt hour versus a national average of 13 cents.

During times of peak stress earlier in the year, OhmConnect engaged its customers to reduce almost one gigawatt hour of total energy usage. That’s the equivalent of taking 600,000 homes off the grid for one hour.

If the Resilia project were rolled out at scale, the companies estimate they could provide 5 gigawatt hours of energy conservation — that’s the full amount of the energy shortfall from the year’s blackouts and the equivalent of not burning 3.8 million pounds of coal.
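
A quick check of the numbers in the last two paragraphs (the inputs are the article’s own figures; the per-home and per-kWh values are just the implied averages):

```python
# "Almost one gigawatt hour" spread across 600,000 homes for one hour:
reduction_kwh = 1_000_000        # 1 GWh expressed in kWh
homes = 600_000
print(f"implied per-home reduction: {reduction_kwh / homes:.2f} kWh in that hour")

# 5 GWh of conservation equated to 3.8 million pounds of coal:
conserved_kwh = 5_000_000
coal_lbs = 3_800_000
print(f"implied coal intensity: {coal_lbs / conserved_kwh:.2f} lb of coal per kWh")
```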

Going forward, the Resilia energy efficiency and demand response platform will scale up other infrastructure innovations as energy grids shift from centralized power to distributed, decentralized generation sources, the company said. OhmConnect looks to be an integral part of that platform.

“The energy grid used to be uni-directional … we believe that in the near future the grid is going to become bi-directional and responsive,” said Winer. “With our approach, this won’t be one investment. We’ll likely make multiple investments. [Vehicle-to-grid], micro-grid platforms, and generative design are going to be important.”

The cloud can’t solve all your problems

By Walter Thompson
Jon Shanks Contributor
Jon Shanks is CEO and co-founder of cloud-native delivery platform Appvia.

The way a team functions and communicates dictates the operational efficiency of a startup and sets the scene for its culture. It’s way more important than what social events and perks are offered, so it’s the responsibility of a founder and/or CEO to provide their team with a technology approach that will empower them to achieve and succeed — now and in the future.

With that in mind, moving to the cloud might seem like a no-brainer because of its huge benefits around flexibility, accessibility and the potential to rapidly scale, while keeping budgets in check.

But there’s an important consideration here: Cloud providers won’t magically give you efficient teams.

Designing a startup for scale means investing in the right technology today to underpin growth for tomorrow and beyond.

It will get you going in the right direction, but you need to think even farther ahead. Designing a startup for scale means investing in the right technology today to underpin growth for tomorrow and beyond. Let’s look at how the way you approach and manage your cloud infrastructure will impact the effectiveness of your teams and your ability to scale.

Hindsight is 20/20

Adopting cloud is easy, but adopting it properly with best practices and in a secure way? Not so much. You might think that when you move to cloud, the cloud providers will give you everything you need to succeed. But even though they’re there to provide a wide breadth of services, these services won’t necessarily have the depth that you will need to run efficiently and effectively.

Yes, your cloud infrastructure is working now, but think beyond the first prototype or alpha and toward production. Considering where you want to get to, and not just where you are, will help you avoid costly mistakes. You definitely don’t want to struggle through redefining processes and ways of working when you’re also managing time sensitivities and multiple teams.

If you don’t think ahead, you’ll have to put all new processes in. It will take a whole lot longer, cost more money and cause a lot more disruption to teams than if you do it earlier.

For any founder, making strategic technology decisions right now should be a primary concern. It feels more natural to put off those decisions until you come face to face with the problem, but you’ll just end up needing to redo everything as you scale and cause your teams a world of hurt. If you don’t give this problem attention at the beginning, you’re just scaling the problems with the team. Flaws are then embedded within your infrastructure, and they’ll continue to scale with the teams. When these things are rushed, corners are cut and you will end up spending even more time and money on your infrastructure.

Build effective teams and reduce bottlenecks

When you’re making strategic decisions on how to approach your technology stack and cloud infrastructure, the biggest consideration should be what makes an effective team. Given that, keep these things top of mind:

  • Speed of delivery: Having developers able to self-serve cloud infrastructure with best practices built in will enable speed. Development tools that factor in visibility and communication integrations for teams give transparency into how they are iterating and surface problems, bugs or integration failures.
  • Speed of testing: This is all about ensuring fast feedback loops as your team works on critical new iterations and features. Developers should be able to test as much as possible locally and through continuous integration systems before they are ready for code review.
  • Troubleshooting problems: Good logging, monitoring and observability services give teams awareness of issues and the ability to resolve problems quickly or reproduce customer complaints in order to develop fixes.

Google acquires Actifio to step into the area of data management and business continuity

By Ingrid Lunden

In the same week that Amazon is holding its big AWS confab, Google is also announcing a move to raise its own enterprise game with Google Cloud. Today the company announced that it is acquiring Actifio, a data management company that helps companies with data continuity to be better prepared in the event of a security breach or other need for disaster recovery. The deal squares Google up as a competitor against the likes of Rubrik, another big player in data continuity.

The terms of the deal were not disclosed in the announcement; we’re looking and will update as we learn more. Notably, when the company was valued at over $1 billion in a funding round back in 2014, it had said it was preparing for an IPO (which never happened). PitchBook data estimated its value at $1.3 billion in 2018, but earlier this year it appeared to be raising money at about a 60% discount to its recent valuation, according to data provided to us by Prime Unicorn Index.

The company was also involved in a patent infringement suit against Rubrik, which it filed earlier this year.

It had raised around $461 million, with investors including Andreessen Horowitz, TCV, Tiger, 83 North, and more.

With Actifio, Google is moving into what is one of the key investment areas for enterprises in recent years. The growth of increasingly sophisticated security breaches, coupled with stronger data protection regulation, has given a new priority to the task of holding and using business data more responsibly, and business continuity is a cornerstone of that.

Google describes the startup as a “leader in backup and disaster recovery” providing virtual copies of data that can be managed and updated for storage, testing, and more. The fact that it covers data in a number of environments — including SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL, and MySQL, virtual machines (VMs) in VMware, Hyper-V, physical servers, and of course Google Compute Engine — means that it also gives Google a strong play to work with companies in hybrid and multi-vendor environments rather than just all-Google shops.

“We know that customers have many options when it comes to cloud solutions, including backup and DR, and the acquisition of Actifio will help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios,” writes Brad Calder, VP of engineering, in the blog post. “In addition, we are committed to supporting our backup and DR technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs.”

The company will join Google Cloud.

“We’re excited to join Google Cloud and build on the success we’ve had as partners over the past four years,” said Ash Ashutosh, CEO at Actifio, in a statement. “Backup and recovery is essential to enterprise cloud adoption and, together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries.”

AWS announces Panorama, a device that adds machine learning technology to any camera

By Jonathan Shieber

AWS has launched a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, will transform existing on-premises cameras into computer vision enabled super-powered surveillance devices.

Pitching the hardware as a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are being followed, or analyze traffic in retail stores, the new automation service is part of the theme of this AWS re:Invent event — automate everything.

Along with computer vision models that companies can develop using Amazon SageMaker, the new Panorama Appliance can run those models on video feeds from networked or network-enabled cameras.

Soon, AWS expects to have the Panorama SDK that can be used by device manufacturers to build Panorama-enabled devices.

Amazon has already pitched surveillance technologies to developers and the enterprise before. Back in 2017, the company unveiled DeepLens, which it began selling one year later. It was a way for developers to build prototype machine learning models and for Amazon to get comfortable with different ways of commercializing computer vision capabilities.

As we wrote in 2018:

DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models… Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up … DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model and a model that can distinguish between cats and dogs and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.


Amazon has had a lot of experience (and controversy) when it comes to the development of machine learning technologies for video. The company’s Rekognition software sparked protests and pushback which led to a moratorium on the use of the technology.

And the company has tried to incorporate more machine learning capabilities into its consumer facing Ring cameras as well.

Still, enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety, and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted to not only adapt to the current epidemic, but plan ahead for spaces and protocols that can help mitigate the severity of the next one.


AWS updates its edge computing solutions with new hardware and Local Zones

By Frederic Lardinois

AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outpost service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.

In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.

The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.

As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail in building their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. Because of this, Jassy argues, none of the existing solutions from other vendors got any traction (though AWS’s competitors would surely deny it).

The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.

With Outpost, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.

AWS adds natural language search service for business intelligence from its data sets

By Jonathan Shieber

When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016 the company wanted to provide product information and customer information for business users — not just developers.

At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.

Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.

“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”

That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.

“The way Q works. Type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X?’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”

It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”

AWS launches SageMaker Data Wrangler, a new data preparation service for machine learning

By Frederic Lardinois

AWS launched a new service today, Amazon SageMaker Data Wrangler, that makes it easier for data scientists to prepare their data for machine learning training. In addition, the company is also launching SageMaker Feature Store, available in the SageMaker Studio, a new service that makes it easier to name, organize, find and share machine learning features.

AWS is also launching SageMaker Pipelines, a new service that’s integrated with the rest of the platform and that provides a CI/CD service for machine learning to create and automate workflows, as well as an audit trail for model components like training data and configurations.

As AWS CEO Andy Jassy pointed out in his keynote at the company’s re:Invent conference, data preparation remains a major challenge in the machine learning space. Users have to write their queries and the code to get the data from their data stores first, then write the queries to transform that code and combine features as necessary. All of that is work that doesn’t actually focus on building the models but on the infrastructure of building models.

Data Wrangler comes with over 300 pre-configured data transformations built in that help users convert column types or impute missing data with mean or median values. There are also some built-in visualization tools to help identify potential errors, as well as tools for checking whether there are inconsistencies in the data and diagnosing them before the models are deployed.
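
As an illustration of the kind of transform being described (not Data Wrangler’s actual implementation), here is a minimal pandas sketch that converts a column type and imputes missing values with the median; the toy columns are invented:

```python
import pandas as pd

# Toy dataset standing in for a real data source; column names are made up.
df = pd.DataFrame({
    "age": ["34", "29", None, "41"],        # numeric values stored as strings
    "income": [52000, None, 61000, 48000],
})

# Convert column type: strings -> numeric (unparseable entries become NaN).
df["age"] = pd.to_numeric(df["age"], errors="coerce")

# Impute missing values with each column's median, as described above.
for col in ["age", "income"]:
    df[col] = df[col].fillna(df[col].median())

print(df)
```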

All of these workflows can then be saved in a notebook or as a script so that teams can replicate them — and used in SageMaker Pipelines to automate the rest of the workflow, too.


It’s worth noting that there are quite a few startups that are working on the same problem. Wrangling machine learning data, after all, is one of the most common problems in the space. For the most part, though, companies still build their own tools, and as usual, that makes this area ripe for a managed service.

AWS launches Glue Elastic Views to make it easier to move data from one purpose-built data store to another

By Jonathan Shieber

AWS has launched a new tool to let developers move data from one store to another called Glue Elastic Views.

At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.

The new service can take data from disparate silos and bring it together. The AWS ETL service allows programmers to write a little bit of SQL code to create a materialized view that can move from one source data store to another.

For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch, allowing a developer to set up a materialized view to copy that data — all the while managing dependencies. That means if data changes in the source data lake, then it will automatically be updated in the other data stores where the data has been relocated, Jassy said.

“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.

AWS launches Trainium, its new custom ML training chip

By Frederic Lardinois

At its annual re:Invent developer conference, AWS today announced the launch of AWS Trainium, the company’s next-gen custom chip dedicated to training machine learning models. The company promises that it can offer higher performance than any of its competitors in the cloud, with support for TensorFlow, PyTorch and MXNet.

It will be available as EC2 instances and inside Amazon SageMaker, the company’s machine learning platform.

New instances based on these custom chips will launch next year.

The main arguments for these custom chips are speed and cost. AWS promises 30% higher throughput and 45% lower cost-per-inference compared to the standard AWS GPU instances.

In addition, AWS is also partnering with Intel to launch Habana Gaudi-based EC2 instances for machine learning training as well. Coming next year, these instances promise to offer up to 40% better price/performance compared to the current set of GPU-based EC2 instances for machine learning. These chips will support TensorFlow and PyTorch.

These new chips will make their debut in the AWS cloud in the first half of 2021.

Both of these new offerings complement AWS Inferentia, the custom inferencing chip the company launched at last year’s re:Invent.

Trainium, it’s worth noting, will use the same SDK as Inferentia.

“While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets,” the AWS team writes. “This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute from scaling training workloads to deploying accelerated inference.”
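
That shared SDK is AWS Neuron. As an illustration of what the developer experience looks like today on Inferentia, here is a minimal sketch of compiling a PyTorch model with torch-neuron; the model and input shapes are placeholders, and the Trainium workflow may well differ once the new instances actually launch.

```python
import torch
import torch_neuron  # provided by the AWS Neuron SDK; registers torch.neuron

# Placeholder model and example input; in practice this is your trained model.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)
model.eval()
example = torch.rand(1, 128)

# Compile the model for Inferentia. Per AWS, Trainium will plug into the same
# Neuron SDK, so training workloads should use similar tooling.
model_neuron = torch.neuron.trace(model, example_inputs=[example])
model_neuron.save("model_neuron.pt")
```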


AWS brings the Mac mini to its cloud

By Frederic Lardinois

AWS today opened its re:Invent conference with a surprise announcement: the company is bringing the Mac mini to its cloud. These new EC2 Mac instances, as AWS calls them, are now available in preview. They won’t come cheap, though.

The target audience here — and the only one AWS is targeting for now — is developers who want cloud-based build and testing environments for their Mac and iOS apps. But it’s worth noting that with remote access, you get a fully-featured Mac mini in the cloud, and I’m sure developers will find all kinds of other use cases for this as well.

Given the recent launch of the M1 Mac minis, it’s worth pointing out that the hardware AWS is using, at least for the time being, consists of Intel Core i7 machines with six physical and 12 logical cores and 32 GB of memory. Using the Mac’s built-in networking options, AWS connects them to its Nitro System for fast network and storage access. This means you’ll also be able to attach AWS block storage to these instances, for example.

Unsurprisingly, the AWS team is also working on bringing Apple’s new M1 Mac minis into its data centers. The current plan is to roll this out “early next year,” AWS tells me, and definitely within the first half of 2021. Both AWS and Apple believe that the need for Intel-powered machines won’t go away anytime soon, though, especially given that a lot of developers will want to continue to run their tests on Intel machines for the foreseeable future.

David Brown, AWS’s vice president of EC2, tells me that these are completely unmodified Mac minis. AWS only turned off Wi-Fi and Bluetooth. It helps, Brown said, that the minis fit nicely into a 1U rack.

“You can’t really stack them on shelves — you want to put them in some sort of service sled [and] it fits very well into a service sled and then our cards and all the various things we have to worry about, from an integration point of view, fit around it and just plug into the Mac mini through the ports that it provides,” Brown explained. He admitted that this was obviously a new challenge for AWS. The only way to offer this kind of service is to use Apple’s hardware, after all.

Image Credits: AWS

It’s also worth noting that AWS is not virtualizing the hardware. What you’re getting here is full access to your own device that you’re not sharing with anybody else. “We wanted to make sure that we support the Mac Mini that you would get if you went to the Apple store and you bought a Mac mini,” Brown said.

Unlike with other EC2 instances, whenever you spin up a new Mac instance you have to pre-pay for the first 24 hours to get started. After those first 24 hours, billing is by the second, just like with any other instance type AWS offers today.
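
That 24-hour minimum reflects the fact that each Mac instance is a physical machine dedicated to a single customer, allocated as an EC2 Dedicated Host. A minimal sketch of what provisioning looks like with boto3 is below; the region, availability zone and AMI ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Allocate a dedicated host for the mac1.metal instance type, which is where
# the 24-hour minimum charge comes from.
host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",   # placeholder availability zone
    InstanceType="mac1.metal",
    Quantity=1,
)
host_id = host["HostIds"][0]

# Then launch a Mac instance onto that host using a macOS AMI
# (the AMI ID below is a placeholder).
instance = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="mac1.metal",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
print(instance["Instances"][0]["InstanceId"])
```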

AWS will charge $1.083 per hour, billed by the second. That’s just under $26 to spin up a machine and run it for 24 hours. That’s quite a lot more than what some of the small Mac mini cloud providers are charging (we’re generally talking about $60 or less per month for their entry-level offerings and around two to three times as much for a comparable i7 machine with 32GB of RAM).
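
Some back-of-the-envelope math, using only the numbers above, shows where each side wins: an always-on AWS Mac instance costs far more than a small provider’s monthly plan, while per-second billing after the first day favors bursty build workloads.

```python
hourly_rate = 1.083                    # USD per hour, billed by the second

first_day = hourly_rate * 24           # the mandatory first 24 hours
full_month = hourly_rate * 24 * 30     # one instance running non-stop for 30 days

print(f"First 24 hours: ${first_day:.2f}")    # roughly $26, as noted above
print(f"Always-on month: ${full_month:.2f}")  # roughly $780, versus about $120 to $180
                                              # for a comparable i7 plan at the smaller
                                              # Mac hosting providers
```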

Image Credits: Ron Miller/TechCrunch

Until now, Mac mini hosting was a small niche in the hosting market, though it has its fair share of players, with the likes of MacStadium, MacinCloud, MacWeb and Mac Mini Vault vying for a slice of the market.

With this new offering, those providers now face a formidable competitor, though they can still compete on price. AWS, for its part, argues that giving developers access to all of the additional cloud services in its portfolio sets it apart from the smaller players.

“The speed that things happen at [other Mac mini cloud providers] and the granularity that you can use those services at is not as fine as you get with a large cloud provider like AWS,” Brown said. “So if you want to launch a machine, it takes a few days to provision and somebody puts a machine in a rack for you and gives you an IP address to get to it and you manage the OS. And normally, you’re paying for at least a month — or a longer period of time to get a discount. What we’ve done is you can literally launch these machines in minutes and have a working machine available to you. If you decide you want 100 of them, 500 of them, you just ask us for that and we’ll make them available. The other thing is the ecosystem. All those other 200-plus AWS services that you’re now able to utilize together with the Mac mini is the other big difference.”

Brown also stressed that Amazon makes it easy for developers to use different machine images, with the company currently offering images for macOS Mojave and Catalina; Big Sur support is coming “at some point in the future.” And developers can obviously create their own images with all of the software they need, so they can reuse them whenever they spin up a new machine.
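
Creating one of those reusable images is a standard EC2 operation; a minimal sketch with boto3 is below, where the region, instance ID and image name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Snapshot a configured Mac instance into a reusable AMI so future instances
# come up with Xcode, simulators and other tooling already installed.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder: the Mac instance to capture
    Name="macos-catalina-build-image",  # hypothetical image name
    Description="Mac mini build image with preinstalled toolchain",
)
print(response["ImageId"])
```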

“Pretty much every one of our customers today has some need to support an Apple product and the Apple ecosystem, whether it’s iPhone, iPad or Apple TV, whatever it might be. They’re looking for that build use case,” Brown said. “And so the problem we’ve really been focused on solving is customers that say, ‘hey, I’ve moved all my server-side workloads to AWS, I’d love to be able to move some of these build workflows, because I still have some Mac minis in a data center or in my office that I have to maintain. I’d love that just to be on AWS.’ ”

AWS’s marquee launch customers for the new service are Intuit, Ring and mobile camera app FiLMiC.

“EC2 Mac instances, with their familiar EC2 interfaces and APIs, have enabled us to seamlessly migrate our existing iOS and macOS build-and-test pipelines to AWS, further improving developer productivity,” said Pratik Wadher, vice president of Product Development at Intuit. “We’re experiencing up to 30% better performance over our data center infrastructure, thanks to elastic capacity expansion, and a high availability setup leveraging multiple zones. We’re now running around 80% of our production builds on EC2 Mac instances, and are excited to see what the future holds for AWS innovation in this space.”

The new Mac instances are now available in a number of AWS regions. These include US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland) and Asia Pacific (Singapore), with other regions to follow soon.
