AWS has launched a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, will transform existing on-premises cameras into super-powered, computer vision-enabled surveillance devices.
AWS pitches the hardware as a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are being followed, or analyze traffic in retail stores. The new automation service fits squarely into the theme of this AWS re:Invent event: automate everything.
Along with computer vision models that companies can develop using Amazon SageMaker, the new Panorama Appliance can run those models on video feeds from networked or network-enabled cameras.
Soon, AWS also expects to release the Panorama SDK, which device manufacturers can use to build Panorama-enabled devices.
Amazon has already pitched surveillance technologies to developers and the enterprise before. Back in 2017, the company unveiled DeepLens, which it began selling one year later. It was a way for developers to build prototype machine learning models and for Amazon to get comfortable with different ways of commercializing computer vision capabilities.
As we wrote in 2018:
DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models… Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up … DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model and a model that can distinguish between cats and dogs and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.
Amazon has had a lot of experience (and controversy) when it comes to the development of machine learning technologies for video. The company’s Rekognition software sparked protests and pushback which led to a moratorium on the use of the technology.
And the company has tried to incorporate more machine learning capabilities into its consumer-facing Ring cameras as well.
Still, enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety, and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted not only to adapt to the current pandemic but also to plan ahead for spaces and protocols that can help mitigate the severity of the next one.
AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outpost service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.
In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.
The general idea here, which is not dissimilar from what Google, Microsoft and others are now doing, is to bring AWS to the edge and to do so in a variety of form factors.
As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail to build their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. None of the existing solutions from other vendors, Jassy argues, got any traction because of this (though AWS’s competitors would surely deny that).
The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.
With Outpost, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.
When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016, the company wanted to provide product and customer information for business users, not just developers.
At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.
Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.
“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”
That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.
“The way Q works. Type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X?’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”
It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”
AWS launched a new service today, Amazon SageMaker Data Wrangler, that makes it easier for data scientists to prepare their data for machine learning training. In addition, the company is also launching SageMaker Feature Store, a new service available in SageMaker Studio that makes it easier to name, organize, find and share machine learning features.
AWS is also launching SageMaker Pipelines, a new service that’s integrated with the rest of the platform and that provides a CI/CD service for machine learning to create and automate workflows, as well as an audit trail for model components like training data and configurations.
As AWS CEO Andy Jassy pointed out in his keynote at the company’s re:Invent conference, data preparation remains a major challenge in the machine learning space. Users first have to write the queries and the code to get the data from their data stores, then write the queries to transform that data and combine features as necessary. All of that is work that doesn’t actually focus on building the models but on the infrastructure around them.
Data Wrangler comes with more than 300 preconfigured data transformations built in that help users convert column types or impute missing data with mean or median values. There are also built-in visualization tools to help identify potential errors, as well as tools for checking for inconsistencies in the data and diagnosing them before the models are deployed.
All of these workflows can then be saved in a notebook or as a script so that teams can replicate them — and used in SageMaker Pipelines to automate the rest of the workflow, too.
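To make the imputation transform mentioned above concrete, here is a minimal, dependency-free sketch of mean/median imputation, the kind of transformation Data Wrangler applies. The `impute` helper and its sample data are invented for illustration; they are not Data Wrangler's API:

```python
from statistics import mean, median

def impute(values, strategy="mean"):
    """Replace None entries with the mean or median of the observed values."""
    observed = [v for v in values if v is not None]
    fill = mean(observed) if strategy == "mean" else median(observed)
    return [fill if v is None else v for v in values]

ages = [22, None, 30, 26, None]
print(impute(ages, "mean"))    # gaps filled with the mean of 22, 30, 26
print(impute(ages, "median"))  # gaps filled with the median of 22, 30, 26
```

A managed service does the same thing at scale and per column, but the underlying transform is exactly this simple substitution.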
It’s worth noting that quite a few startups are working on the same problem. Wrangling machine learning data, after all, is one of the most common challenges in the space. For the most part, though, companies still build their own tools, which, as usual, makes this area ripe for a managed service.
AWS has launched Glue Elastic Views, a new tool that lets developers move data from one store to another.
At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.
The new service can pull data from disparate silos and bring it together. The ETL service lets programmers write a small amount of SQL to create a materialized view that can move data from one source data store to another.
For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch by setting up a materialized view to copy that data, with the service managing the dependencies along the way. That means if data changes in the source data store, it will automatically be updated in the other data stores where the data has been copied, Jassy said.
“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.
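Conceptually, a materialized view that stays current is just change propagation: every write to the source store is re-applied, possibly transformed, to the target. The toy, in-memory sketch below illustrates only that idea; the real service is SQL-driven and fully managed, and the `MaterializedView` class here is invented:

```python
class MaterializedView:
    """Keeps a target dict in sync with a source dict as changes arrive."""

    def __init__(self, source, transform=lambda v: v):
        self.source = source
        self.transform = transform
        # Build the initial view from whatever the source already holds.
        self.target = {k: transform(v) for k, v in source.items()}

    def on_change(self, key, value):
        # Called whenever the source store records a write; the view
        # updates automatically, which is the dependency management
        # Jassy described.
        self.source[key] = value
        self.target[key] = self.transform(value)

orders = {"order-1": 100}  # amounts in cents
view = MaterializedView(orders, transform=lambda cents: cents / 100)
view.on_change("order-2", 250)
print(view.target)  # {'order-1': 1.0, 'order-2': 2.5}
```

The transform plays the role of the SQL in Elastic Views: it reshapes each record on its way from source to target, and new writes flow through it without further work from the developer.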
At its annual re:Invent developer conference, AWS today announced the launch of AWS Trainium, the company’s next-gen custom chip dedicated to training machine learning models. The company promises that it can offer higher performance than any of its competitors in the cloud, with support for TensorFlow, PyTorch and MXNet.
It will be available as EC2 instances and inside Amazon SageMaker, the company’s machine learning platform.
New instances based on these custom chips will launch next year.
The main arguments for these custom chips are speed and cost. AWS promises 30% higher throughput and 45% lower cost-per-inference compared to the standard AWS GPU instances.
In addition, AWS is also partnering with Intel to launch Habana Gaudi-based EC2 instances for machine learning training as well. Coming next year, these instances promise to offer up to 40% better price/performance compared to the current set of GPU-based EC2 instances for machine learning. These chips will support TensorFlow and PyTorch.
These new chips will make their debut in the AWS cloud in the first half of 2021.
Both of these new offerings complement AWS Inferentia, which the company launched at last year’s re:Invent. Inferentia, also a custom chip, is the inference-focused counterpart to these training offerings.
Trainium, it’s worth noting, will use the same SDK as Inferentia.
“While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets,” the AWS team writes. “This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute from scaling training workloads to deploying accelerated inference.”
Uber today announced the official completion of its Postmates acquisition deal, which it announced originally back in July. The all-stock deal, valued at around $2.65 billion at the time of its disclosure, sees Postmates join Uber while continuing to operate as a separate service with its own branding and front end, though some back-end operations, including a shared pool of drivers, will merge.
Uber detailed some of its further thinking around the newly combined companies, and what the combination will mean for the businesses they work with, in a new blog post. The company positioned the move as a benefit to its merchant population, and alongside the official closure it announced a new initiative to encourage and gather customer feedback on the merchant side.
They’re calling it a “regional listening exercise” to be run beginning next year, wherein they’ll work with local restaurant associations and chambers of commerce to hear concerns from local business owners in their own communities. This sounds similar in design to Uber’s prior efforts to focus on driver feedback from a couple of years ago in order to improve the way it works with that side of its double-sided marketplace.
Focusing on the needs of its merchant population is doubly important given the current global pandemic, which has seen Uber Eats emerge as even more of a key infrastructure component in the food service and grocery industries as people seek more delivery options in order to better comply with stay-at-home orders and other public safety recommendations.
AWS today opened its re:Invent conference with a surprise announcement: the company is bringing the Mac mini to its cloud. These new EC2 Mac instances, as AWS calls them, are now available in preview. They won’t come cheap, though.
The target audience here — and the only one AWS is targeting for now — is developers who want cloud-based build and testing environments for their Mac and iOS apps. But it’s worth noting that with remote access, you get a fully-featured Mac mini in the cloud, and I’m sure developers will find all kinds of other use cases for this as well.
Given the recent launch of the M1 Mac minis, it’s worth pointing out that the hardware AWS is using, at least for the time being, consists of Intel Core i7 machines with six physical and 12 logical cores and 32GB of memory. Using the Mac’s built-in networking options, AWS connects them to its Nitro System for fast network and storage access. This means you’ll also be able to attach AWS block storage to these instances, for example.
Unsurprisingly, the AWS team is also working on bringing Apple’s new M1 Mac minis into its data centers. The current plan is to roll this out “early next year,” AWS tells me, and definitely within the first half of 2021. Both AWS and Apple believe that the need for Intel-powered machines won’t go away anytime soon, though, especially given that a lot of developers will want to continue to run their tests on Intel machines for the foreseeable future.
David Brown, AWS’s vice president of EC2, tells me that these are completely unmodified Mac minis. AWS only turned off Wi-Fi and Bluetooth. It helps, Brown said, that the minis fit nicely into a 1U rack.
“You can’t really stack them on shelves — you want to put them in some sort of service sled [and] it fits very well into a service sled and then our cards and all the various things we have to worry about, from an integration point of view, fit around it and just plug into the Mac mini through the ports that it provides,” Brown explained. He admitted that this was obviously a new challenge for AWS. The only way to offer this kind of service is to use Apple’s hardware, after all.
It’s also worth noting that AWS is not virtualizing the hardware. What you’re getting here is full access to your own device that you’re not sharing with anybody else. “We wanted to make sure that we support the Mac Mini that you would get if you went to the Apple store and you bought a Mac mini,” Brown said.
Unlike with other EC2 instances, whenever you spin up a new Mac instance, you have to pre-pay for the first 24 hours to get started. After those first 24 hours, prices are by the second, just like with any other instance type AWS offers today.
AWS will charge $1.083 per hour, billed by the second. That’s just under $26 to spin up a machine and run it for 24 hours. That’s quite a lot more than what some of the small Mac mini cloud providers are charging (we’re generally talking about $60 or less per month for their entry-level offerings and around two to three times as much for a comparable i7 machine with 32GB of RAM).
Until now, Mac mini hosting was a small niche in the hosting market, though it has its fair number of players, with the likes of MacStadium, MacinCloud, MacWeb and Mac Mini Vault vying for their share of the market.
With this new offering from AWS, they are now facing a formidable competitor, though they can still compete on price. AWS, however, argues that it can give developers access to all of the additional cloud services in its portfolio, which sets it apart from all of the smaller players.
“The speed that things happen at [other Mac mini cloud providers] and the granularity that you can use those services at is not as fine as you get with a large cloud provider like AWS,” Brown said. “So if you want to launch a machine, it takes a few days to provision and somebody puts a machine in a rack for you and gives you an IP address to get to it and you manage the OS. And normally, you’re paying for at least a month — or a longer period of time to get a discount. What we’ve done is you can literally launch these machines in minutes and have a working machine available to you. If you decide you want 100 of them, 500 of them, you just ask us for that and we’ll make them available. The other thing is the ecosystem. All those other 200-plus AWS services that you’re now able to utilize together with the Mac mini is the other big difference.”
Brown also stressed that Amazon makes it easy for developers to use different machine images, with the company currently offering images for macOS Mojave and Catalina, with Big Sur support coming “at some point in the future.” And developers can obviously create their own images with all of the software they need so they can reuse them whenever they spin up a new machine.
“Pretty much every one of our customers today has some need to support an Apple product and the Apple ecosystem, whether it’s iPhone, iPad or Apple TV, whatever it might be. They’re looking for that bold use case,” Brown said. “And so the problem we’ve really been focused on solving is customers that say, ‘hey, I’ve moved all my server-side workloads to AWS, I’d love to be able to move some of these build workflows, because I still have some Mac minis in a data center or in my office that I have to maintain. I’d love that just to be on AWS.’ ”
AWS’s marquee launch customers for the new service are Intuit, Ring and mobile camera app FiLMiC.
“EC2 Mac instances, with their familiar EC2 interfaces and APIs, have enabled us to seamlessly migrate our existing iOS and macOS build-and-test pipelines to AWS, further improving developer productivity,” said Pratik Wadher, vice president of Product Development at Intuit. “We’re experiencing up to 30% better performance over our data center infrastructure, thanks to elastic capacity expansion, and a high availability setup leveraging multiple zones. We’re now running around 80% of our production builds on EC2 Mac instances, and are excited to see what the future holds for AWS innovation in this space.”
The new Mac instances are now available in a number of AWS regions. These include US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland) and Asia Pacific (Singapore), with other regions to follow soon.
An Amazon Web Services outage has a wide effect, Salesforce might be buying Slack and Pinterest tests new support for virtual events. This is your Daily Crunch for November 25, 2020.
And for those of you who celebrate Thanksgiving: Enjoy! There will be no newsletter tomorrow, and then Darrell Etherington will be filling in for me on Friday.
The big story: Amazon Web Services stumble
Amazon Web Services began experiencing issues earlier today, which caused issues for sites and services that rely on its cloud infrastructure — as writer Zack Whittaker discovered when he tried to use his Roomba.
Amazon said the issue was largely localized to North America, and that it was working on a resolution. Meanwhile, a number of other companies, such as Adobe and Roku, have pointed to the AWS outage as the reason for their own service issues.
The tech giants
Slack’s stock climbs on possible Salesforce acquisition — News that Salesforce is interested in buying Slack sent shares of the smaller firm sharply higher today.
Pinterest tests online events with dedicated ‘class communities’ — The company has been spotted testing a new feature that allows users to sign up for Zoom classes through Pinterest.
France starts collecting tax on tech giants — This tax applies to companies that generate more than €750 million in revenue globally and €25 million in France, and that operate either a marketplace or an ad business.
Startups, funding and venture capital
Tiger Global invests in India’s Unacademy at $2B valuation — Unacademy helps students prepare for competitive exams to get into college.
WeGift, the ‘incentive marketing’ platform, collects $8M in new funding — Founded in 2016, WeGift wants to digitize the $700 billion rewards and incentives industry.
Cast.ai nabs $7.7M seed to remove barriers between public clouds — The company was started with the idea that developers should be able to get the best of each of the public clouds without being locked in.
Advice and analysis from Extra Crunch
Insurtech’s big year gets bigger as Metromile looks to go public — Metromile, a startup competing in the auto insurance market, is going public via SPAC.
Join us for a live Q&A with Sapphire’s Jai Das on Tuesday at 2 pm EST/11 am PST — Das has invested in companies like MuleSoft, Alteryx, Square and Sumo Logic.
(Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)
Gift Guide: Smart exercise gear to hunker down and get fit with — Smart exercise and health gear is smarter than ever.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
Amazon Web Services is currently having an outage, taking large swaths of the internet down with it.
Several AWS services went down early Wednesday, according to its status pages. That means any app, site or service that relies on AWS might also be down, too. (As I found out the hard way when my Roomba refused to connect to the app.)
Amazon says the issue is largely localized to North America. The company didn’t give a reason for the outage, only that it was experiencing increased error rates and that it was working on a resolution.
So far a number of companies that rely on AWS have tweeted out that they’re experiencing issues as a result, including Adobe and Roku. We’ll keep you updated as this outage continues.
An Amazon AWS outage is currently impacting Adobe Spark so you may be having issues accessing/editing your projects. We are actively working with AWS and will report when the issue has subsided. https://t.co/uoHPf44HjL for current Spark status. We apologize for any inconvenience!
— Adobe Spark (@AdobeSpark) November 25, 2020
We are working to resolve this quickly. We are impacted by the widespread AWS outage and hope to get our customers up and running soon. Most streaming should work as expected during this time.
— Roku Support (@RokuSupport) November 25, 2020
We do apologize for the inconvenience! Unfortunately, the issue is stemming from an AWS server outage, which is affecting many companies. We hope that the issue is resolved soon!
— Shipt (@Shipt) November 25, 2020
Resilience, a new biopharmaceutical company backed by $800 million in financing from investors including ARCH Venture Partners and 8VC, has emerged from stealth to transform the way that drugs and therapies are manufactured in the U.S.
Founded by ARCH Venture Partners investor Robert Nelsen, National Resilience Inc., which does business as Resilience, was born out of Nelsen’s frustrations with the inept American response to the COVID-19 pandemic.
According to a statement, the company will invest heavily in developing new manufacturing technologies across cell and gene therapies, viral vectors, vaccines and proteins.
Resilience’s founders identified problems in the therapeutic manufacturing process as one of the key problems that the industry faces in bringing new treatments to market — and that hurdle is exactly what the company was founded to overcome.
“COVID-19 has exposed critical vulnerabilities in medical supply chains, and today’s manufacturing can’t keep up with scientific innovation, medical discovery, and the need to rapidly produce and distribute critically important drugs at scale. We are committed to tackling these huge problems with a whole new business model,” said Nelsen in a statement.
The company brings together leaders from some of the top investment and operating firms in healthcare and biosciences, including Rahul Singhvi, an operating partner at Flagship Pioneering, who will serve as the company’s chief executive; former Food and Drug Administration commissioner Scott Gottlieb, a partner at New Enterprise Associates and a director on the Resilience board; and Patrick Yang, the former executive vice president and global head of technical operations at Roche/Genentech.
“It is critical that we adopt solutions that will protect the manufacturing supply chain, and provide more certainty around drug development and the ability to scale up the manufacturing of safe, effective but also more complex products that science is making possible,” said Dr. Gottlieb, in a statement. “RESILIENCE will enable these solutions by combining cutting edge technology, an unrivaled pool of talent, and the industry’s first shared service business model. Similar to Amazon Web Services, RESILIENCE will empower drug developers with the tools to more fully align discovery, development, and manufacturing; while offering new opportunities to invest in downstream innovations in formulation and manufacturing earlier, while products are still being conceived and developed.”
Other heavy hitters in the world of medicine and biotechnology who are working with the company include Frances Arnold, the Nobel Prize-winning professor from the California Institute of Technology; George Barrett, the former chief executive of Cardinal Health; Susan Desmond-Hellmann, the former president of product development at Genentech; Kaye Foster, the former vice president of human resources at Johnson and Johnson; and Denice Torres, the former President of Johnson & Johnson Pharmaceutical and Consumer Companies.
Twitter is the latest social media site to let users experiment with posting disappearing content. Fleets, as Twitter calls them, let its mobile users post short stories, like photos or videos with overlaid text, that are set to vanish after 24 hours.
But a bug meant that fleets weren’t deleting properly and could still be accessed long after 24 hours had expired. Details of the bug were posted in a series of tweets on Saturday, less than a week after the feature launched.
full disclosure: scraping fleets from public accounts without triggering the read notification
the endpoint is: https://t.co/332FH7TEmN
— cathode gay tube (@donk_enby) November 20, 2020
The bug effectively allowed anyone to access and download a user’s fleets without triggering a notification that the user’s fleet had been read and by whom. The implication is that this bug could be abused to archive a user’s fleets after they expire.
The researcher used an app designed to interact with Twitter’s back-end systems via its developer API; the server returned a list of fleets. Each fleet had its own direct URL, which, when opened in a browser, would load the fleet as an image or a video. But even after the 24 hours elapsed, the server would still return links to fleets that had already disappeared from view in the Twitter app.
When reached, a Twitter spokesperson said a fix was on the way. “We’re aware of a bug accessible through a technical workaround where some Fleets media URLs may be accessible after 24 hours. We are working on a fix that should be rolled out shortly.”
Twitter acknowledged that while the fix means fleets should now expire properly, it won’t delete a fleet from its servers for up to 30 days, and it may hold onto fleets for longer if they violate its rules. We confirmed that we could still load fleets from their direct URLs even after they expired.
Fleet with caution.
Broadband communication satellite company OneWeb has emerged from its Chapter 11 bankruptcy protection status, the company announced today. It’s now also officially owned by a consortium consisting of the UK government and India’s Bharti Global, and Neil Masterson is now installed as CEO, replacing outgoing chief executive Adrian Steckel, who will remain as a Board advisor.
OneWeb seems eager to get back to actively launching the satellites that will make up its 650-strong constellation; it has set December 17 as the target date for its next launch. The company has 74 satellites already in orbit across three launches, all of which occurred before its bankruptcy filing in March.
OneWeb’s acquisition by the combined UK government/Bharti Global tie-up was revealed in July, providing a path for the financially beleaguered company to get back to active status with $1 billion in equity funding. The company will continue to operate primarily from the UK under the new deal, which is being positioned as a cornerstone of the UK’s ambitions to become a space-sector leader and innovator.
The company also announced that its joint-venture manufacturing facility with Airbus has resumed operation in Florida, and will continue to produce new spacecraft for future launches. The plan is to launch additional satellites throughout next year and 2022, and then begin offering commercial service in select areas late in 2021, with a global service expansion intended for 2022.
Microsoft announced a few updates to its Edge browser today that are all about shopping. In addition to expanding the price comparison feature the team announced last month, Edge can now also automatically find coupons for you. In addition, the company is launching a new shopping hub in its Bing search engine. The timing here is undoubtedly driven by the holiday shopping season — though this year, it feels like Black Friday-style deals already started weeks ago.
The potential usefulness of the price comparison tools is pretty obvious. I’ve found this always worked reasonably well in Edge Collections, though at times it could also be a frustrating experience because it just wouldn’t pull any data for items you saved from some sites. Now, with this price comparison running in the background all the time, you’ll see a new badge pop up in the URL bar that lets you open the price comparison. And when you’ve already found the best price, it’ll tell you that right away, too.
At least in the Edge Canary, where this has been available for a little bit already, this was also hit and miss. It seems to work just fine when you shop on Amazon, for example, as long as there’s only one SKU of an item. If there are different colors, sizes or other options available, it doesn’t really seem to kick in, which is a bit frustrating.
The coupons feature, too, is a bit of a disappointment. It works more consistently and seems to pull data from most of the standard coupon sites (think RetailMeNot and Slickdeals), but all it does is show sitewide coupons. Since most coupons only apply to a limited set of items, clicking on the coupon badge quickly feels like a waste of time. To be fair, the team implemented a nifty feature where at checkout, Bing will try to apply all of the coupons it found. That could be a potential time and money saver. Given the close cooperation with the Bing team in other areas, though, this feels like an area that’s ripe for improvement. For now, I turned it off.
Microsoft is also using today’s announcement to launch a new URL shortener in Edge. “Now, when you paste a link that you copied from the address bar, it will automatically convert from a long, nonsensical URL address to a short hyperlink with the website title. If you prefer the full URL, you can convert to plain text using the context menu,” Microsoft explains. I guess that makes sense in some scenarios. Most of the time, though, I just want the link (and no third party in between), so I hope this can easily be turned off, too.
Go SMS Pro, one of the most popular messaging apps for Android, is exposing photos, videos and other files sent privately by its users. Worse, the app maker has done nothing to fix the bug.
Security researchers at Trustwave discovered the flaw in August and contacted the app maker with a 90-day deadline to fix the issue, as is standard practice in vulnerability disclosure to allow enough time for a fix. But after the deadline elapsed without hearing back, the researchers went public.
Trustwave shared its findings with TechCrunch this week.
When a Go SMS Pro user sends a photo, video or other file to someone who doesn’t have the app installed, the app uploads the file to its servers, and lets the user share a web address by text message so the recipient can see the file without installing the app. But the researchers found that these web addresses were sequential. In fact, any time a file was shared — even between app users — a web address would be generated regardless. That meant anyone who knew about the predictable web address could have cycled through millions of different web addresses to users’ files.
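The flaw the researchers describe is a classic enumeration problem. As a minimal sketch (using a hypothetical example.com sharing service, not Go SMS Pro’s actual code), compare sequential identifiers with the standard fix of long random tokens:

```python
import secrets

# Hypothetical sketch, not Go SMS Pro's actual code: when share
# links are built from a sequential counter, every URL is predictable.
def sequential_url(counter: int) -> str:
    # An attacker who sees ".../share/100001" can simply try
    # 100002, 100003, ... and walk through everyone's files.
    return f"https://example.com/share/{counter}"

def random_token_url() -> str:
    # The standard fix: a long random token (~128 bits of entropy)
    # makes guessing another user's link computationally infeasible.
    token = secrets.token_urlsafe(16)
    return f"https://example.com/share/{token}"

print(sequential_url(100001))  # predictable
print(random_token_url())      # unguessable
```

The “wide net” scripts Trustwave describes are simply loops over the sequential form; random tokens close that avenue entirely.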
Go SMS Pro has more than 100 million installs, according to its listing in Google Play.
TechCrunch verified the researchers’ findings. In viewing just a few dozen links, we found a person’s phone number, a screenshot of a bank transfer, an order confirmation including someone’s home address, an arrest record, and far more explicit photos than we were expecting, to be quite honest.
Karl Sigler, senior security research manager at Trustwave, said while it wasn’t possible to target any specific user, any file sent using the app is vulnerable to public access. “An attacker can create scripts that could throw a wide net across all the media files stored in the cloud instance,” he said.
We had about as much luck getting a response from the app maker as the researchers did. TechCrunch emailed two addresses associated with the app. One email immediately bounced back because the inbox was full. The other email was opened, according to our email open tracker, but a follow-up email was not.
Since you might now want a messaging app that protects your privacy, we have you covered.
The term ‘DevOps’ has been rendered meaningless and developers still don’t have access to the right tools to put the overall idea into practice, the team behind DevOps startup OpsLevel argues. The company, which was co-founded by John Laban and Kenneth Rose, two of PagerDuty’s earliest employees, today announced that it has raised a $5 million seed funding round, led by Vertex Ventures. S28 Capital, Webb Investment Network and Union Capital also participated in this round, as well as a number of angels, including the three co-founders of PagerDuty.
“[PagerDuty] was an important part of the DevOps movement,” said Laban. “Getting engineers on call was really important for DevOps, but on-call and getting paged about incidents and things, it’s very reactive in nature. It’s all about fixing incidents as quickly as possible. Ken [Rose] and I saw an opportunity to help companies take a more proactive stance. Nobody really wants to have any downtime or any security breaches in the first place. They want to prevent them before they happen.”
With that mission in mind, the team set out to bring engineering organizations back to the roots of DevOps by giving those teams ownership over their services and creating what Rose called a “you build it, you own it” culture. Service ownership, he noted, is something the team regularly sees companies struggle with. When teams move to microservices or even serverless architectures for their systems, it quickly becomes unclear who owns what and as a result, you end up with orphaned services that nobody is maintaining. The natural result of that is security and reliability issues. And at the same time, because nobody knows which systems already exist, other teams reinvent the wheel and rebuild the same service to solve their own problems.
“We’ve underinvested in tools to make DevOps actually work,” the team says in today’s announcement. “There’s a lot we still need to build to help engineering teams adopt service ownership and unlock the full power of DevOps.”
So at the core of OpsLevel is what the team calls a “service ownership platform,” starting with a catalog of the services that an engineering organization is currently running.
“What we’re trying to do is take back the meaning of DevOps,” said Laban. “We believe it’s been rendered meaningless and we wanted to refocus it on service ownership. We’re going to be investing heavily on building out our product, and then working with our customers to get them to really own their services and get really down to solving that problem.”
Among the companies OpsLevel is already working with are Segment, Zapier, Convoy and Under Armour. As the team noted, its service becomes most useful once a company runs somewhere around 20 or 30 different services. Before that, a wiki or spreadsheet is often enough to manage them, but at that point, those systems tend to break.
OpsLevel gives teams different onramps for cataloging their services. Those that prefer a ‘config-as-code’ approach can manage YAML files as part of their existing Git workflows, while teams that already have service-creation workflows of their own can plug into OpsLevel’s APIs instead.
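As a sketch of what such a config-as-code entry might look like (the file name and every field below are hypothetical illustrations, not OpsLevel’s actual schema), a service catalog file kept in the repository could be as simple as:

```yaml
# opslevel.yml -- hypothetical service catalog entry; the field
# names here are illustrative, not OpsLevel's actual schema.
version: 1
service:
  name: payments-api
  owner: payments-team
  tier: critical        # how serious an outage of this service would be
  repo: https://git.example.com/acme/payments-api
  on_call: payments-oncall@example.com
```

Keeping a file like this next to the code means ownership metadata gets reviewed and versioned through the same Git workflow as the service itself, which is exactly the “you build it, you own it” culture the team describes.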
The company’s funding round closed in late September. The pandemic, the team said, didn’t really hinder its fundraising efforts, something I’ve lately heard from a lot of companies (though the ones I talk to obviously tend to be the ones that recently raised money).
“The reason why [we raised] is because we wanted to really invest in building out our product,” Laban said. “We’ve been getting this traction with our customers and we really wanted to double down and build out a lot of product and invest into our go-to-market team as well and really wanted to accelerate things.”
Seldon is a UK startup that specializes in the rarefied world of development tools to optimize machine learning. What does this mean? Well, dear reader, it means that the “AI” companies are so fond of trumpeting actually ends up working.
It’s now raised a £7.1M Series A round co-led by AlbionVC and Cambridge Innovation Capital. The round also includes significant participation from existing investors Amadeus Capital Partners and Global Brain, with follow-on investment from other existing shareholders. The funding will be used to accelerate R&D, drive commercial expansion, take Seldon Deploy — a new enterprise solution — to market, and double the size of the team over the next 18 months.
Key to its success is that its open-source project Seldon Core has over 700,000 models deployed to date, drastically reducing friction for users deploying ML models. The startup says its customers are getting productivity gains of as much as 92% as a result of utilizing Seldon’s product portfolio.
Speaking to TechCrunch, Alex Housley, CEO and founder of Seldon, explained that companies are using machine learning across thousands of use cases today, “but the model actually only generates real value when it’s actually running inside a real-world application.”
“So what we’ve seen emerge over these last few years are companies that specialize in specific parts of the machine learning pipeline, such as training version control features. And in our case we’re focusing on deployment. So what this means is that organizations can now build a fully bespoke AI platform that suits their needs, so they can gain a competitive advantage,” he said.
In addition, he said Seldon’s open-source model means that companies are not locked in: “They want to avoid lock-in, and they want to use tools from various different vendors. So this kind of intersection between machine learning, DevOps and cloud-native tooling is really accelerating a lot of innovation across enterprise and also within startups and growth-stage companies.”
Nadine Torbey, an investor at AlbionVC, added: “Seldon is at the forefront of the next wave of tech innovation, and the leadership team are true visionaries. Seldon has been able to build an impressive open-source community and add immediate productivity value to some of the world’s leading companies.”
Vin Lingathoti, Partner at Cambridge Innovation Capital said: “Machine learning has rapidly shifted from a nice-to-have to a must-have for enterprises across all industries. Seldon’s open-source platform operationalizes ML model development and accelerates the time-to-market by eliminating the pain points involved in developing, deploying and monitoring Machine Learning models at scale.”
Earlier this year, Instagram launched a new feature called “Guides,” which allowed creators to share tips, resources and other longer-form content in a dedicated tab on their user profiles. Initially, Instagram limited Guides to a select group of creators who were publishing content focused on mental health and well-being. Today, the company says it’s making the format available to all users, and expanding Guides to include other types of content, as well — including Products, Places, and Posts.
TechCrunch noted in August that an expansion of Instagram Guides appeared to be in development, with a focus on allowing users to create travel guides and product recommendation guides, in addition to a more generic “posts” format.
This “Guides” format was designed to give Instagram creators and marketers a way to share long-form content on a social network that had been, until now, focused more on media — like photos and videos. By comparison, an Instagram Guide could look more like a blog post, as it could include text accompanied by photos, galleries and videos to illustrate the subject matter being discussed.
The feature could help increase users’ time in the app, since users wouldn’t have to click through to external websites and blogs to access these posts — for instance, through a link in the creator’s bio or through a link added to one of the creator’s Stories.
With the expansion to Products, Places and Posts, Instagram’s Guides can now cover more areas. Instagram says it made the feature easier to use, too. It may also feature Product Guides inside its new shopping destination on the platform, Instagram Shop, the company noted.
Visitors to Guides can share them across their own Stories and in Direct Messages, expanding their reach even further.
Also new today is an update to Instagram Search. Before, users could search for names, usernames, hashtags and locations. With the changes rolling out today, users will also now be able to use keywords that will surface content relevant to their interests. Along with Guides, the larger goal is to help keep Instagram users from leaving the app.
Instagram says the search update is available in English to all users in Canada, the U.S., U.K., Australia, New Zealand, and Ireland starting today. The expansion to Guides is rolling out now to all users.
Come June 1, 2021, Google will change its storage policies for free accounts — and not for the better. Basically, if you’re on a free account and a semi-regular Google Photos user, get ready to pay up next year and subscribe to Google One.
Currently, every free Google Account comes with 15 GB of online storage for all your Gmail, Drive and Photos needs. Email and the files you store in Drive already count against those 15 GB, and come June 1, all Docs, Sheets, Slides, Drawings, Forms and Jamboard files will count against the free storage as well. Those tend to be small files; what’s maybe most important here is that virtually all of your Photos uploads will now count against those 15 GB, too.
That’s a big deal because today, Google Photos lets you store unlimited images (and unlimited video, if it’s in HD) for free as long as they are under 16MP in resolution or you opt to have Google degrade the quality. Come June of 2021, any new photo or video uploaded in high quality, which currently wouldn’t count against your allocation, will count against those free 15 GB.
As people take more photos every year, that free allotment won’t last very long. Google argues that 80 percent of its users will take at least three years to reach those 15 GB. Given that you’re reading TechCrunch, though, chances are you’re in the 20 percent that will run out of space much faster (or you’re already on a Google One plan).
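The back-of-the-envelope math is easy to check. Assuming an illustrative 3 MB average per high-quality photo (my figure, not Google’s), the free quota works out to roughly:

```python
# Back-of-the-envelope sketch: how long does 15 GB of free storage
# last for photos? The 3 MB average per "high quality" photo is an
# illustrative assumption, not a figure from Google.
FREE_QUOTA_GB = 15
AVG_PHOTO_MB = 3

photos_until_full = FREE_QUOTA_GB * 1024 // AVG_PHOTO_MB
print(photos_until_full)  # 5120 photos

# At roughly 30 photos a week, that quota lasts about three years,
# which lines up with Google's 80-percent claim.
weeks_until_full = photos_until_full / 30
print(round(weeks_until_full / 52, 1))  # 3.3 years
```

Anyone shooting more than a few dozen photos a week, or uploading video, will hit the ceiling far sooner.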
Some good news: to make this transition a bit easier, photos and videos uploaded in high quality before June 1, 2021 will not count toward the 15 GB of free storage. As usual, original quality images will continue to count against it, though. And if you own a Pixel device, even after June 1, you can still upload an unlimited number of high-quality images from those.
To let you see how long your current storage will last, Google will now show you personalized estimates, too, and come next June, the company will release a new free tool for Photos that lets you more easily manage your storage. It’ll also show you dark and blurry photos you may want to delete — but then, for a long time Google’s promise was that you didn’t have to worry about storage (remember Google’s old Gmail motto? ‘Archive, don’t delete!’).
In addition to these storage updates, there are a few additional changes worth knowing about. If your account is inactive in Gmail, Drive or Photos for more than two years, Google ‘may’ delete the content in that product. So if you use Gmail but don’t use Photos for two years because you use another service, Google may delete any old photos you had stored there. And if you stay over your storage limit for two years, Google “may delete your content across Gmail, Drive and Photos.”
Cutting back a free and (in some cases) unlimited service is never a great move. Google argues that it needs to make these changes to “continue to provide everyone with a great storage experience and to keep pace with the growing demand.”
People now upload more than 4.3 million GB to Gmail, Drive and Photos every day. That’s not cheap, I’m sure, but Google also controls every aspect of this and must have had some internal projections of how this would evolve when it first set those policies.
To some degree, though, this was maybe to be expected. This isn’t the freewheeling Google of 2010 anymore, after all. We’ve already seen some indications that Google may reserve some advanced features for Google One subscribers in Photos, for example. This new move will obviously push more people to pay for Google One and more money from Google One means a little bit less dependence on advertising for the company.
The Federal Trade Commission has announced a settlement with Zoom, after it accused the video calling giant of engaging in “a series of deceptive and unfair practices that undermined the security of its users,” in part by claiming the encryption was stronger than it actually was.
Cast your mind back to earlier this year, at the height of the pandemic lockdown, which forced millions to work from home and rely on Zoom for work meetings and remote learning. At the time, Zoom claimed video calls were protected by “end-to-end” encryption, a way of scrambling calls that makes it near-impossible for anyone — even Zoom — to listen in.
But those claims were false.
“In reality, the FTC alleges, Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers’ meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised,” said the FTC in a statement Monday. “Zoom’s misleading claims gave users a false sense of security, according to the FTC’s complaint, especially for those who used the company’s platform to discuss sensitive topics such as health and financial information.”
Zoom quickly admitted it was wrong, prompting the company to launch a 90-day turnaround effort, which included the rollout of end-to-end encryption to its users. That rollout eventually arrived months later, in late October — but not without another backtrack, after Zoom initially said free users could not use end-to-end encryption.
The FTC also alleged in its complaint that Zoom stored some meeting recordings unencrypted on its servers for up to two months, and compromised the security of its users by covertly installing a web server on its users’ computers in order for users to jump into meetings faster. This, the FTC said, “was unfair and violated the FTC Act.” Zoom pushed out an update which removed the web server, but Apple also intervened to remove the vulnerable component from its customers’ computers.
In its statement, the FTC said it has prohibited Zoom from misrepresenting its security and privacy practices going forward, and that Zoom has agreed to start a vulnerability management program and implement stronger security across its internal network.
Zoom did not immediately respond to a request for comment.