People are getting frustrated that Stories are everywhere now, but Google Maps is keeping it old school. Instead of adding tiny circles to the top of the app’s screen, Google Maps is introducing its own news feed. Technically, Google calls its new feature the “Community Feed,” as it includes posts from a local area. However, it’s organized as any other news feed would be — a vertically scrollable feed with posts you can “Like” by tapping on a little thumbs up icon.
The feed, found within the Explore tab of the Google Maps app, is designed to make it easier to find the most recent news, updates, and recommendations from trusted local sources. This includes posts business owners create using Google My Business to alert customers to new deals, menu updates, and other offers. At launch, Google says the focus will be on highlighting posts from food and drink businesses.
For years, businesses have been able to make these sorts of posts using Google’s tools. But previously, users would have to specifically tap to follow the business’s profile in order to receive their updates.
Now, these same sorts of posts will be surfaced even to Google Maps users who didn’t take the additional step of following a particular business. This increased exposure has boosted the posts’ views, Google says. In early tests of the Community Feed ahead of its public launch, Google found that businesses’ posts saw more than double the number of views they had before the feed existed.
In addition to posts from businesses, the new Community Feed will feature content posted by Google users you follow as well as recent reviews from Google’s Local Guides — the volunteer program where users share their knowledge about local places in order to earn perks, such as profile badges, early access to Google features, and more. Select publishers will participate in the Community Feed, too, including The Infatuation and other news sources from Google News, when relevant.
Much of the information found in the Community Feed was available elsewhere in Google Maps before today’s launch.
For example, the Google Maps Updates tab offered a similar feed that included businesses’ posts along with news, recommendations, stories, and other features designed to encourage discovery. Meanwhile, the Explore tab grouped businesses into thematic categories (e.g. outdoor dining venues, cocktail bars, etc.) at the top of the screen, then allowed users to browse other lists and view area photos.
With the update, those groups of businesses by category will still sit at the top of the screen, but the rest of the tab is dedicated to the scrollable feed. This gives the tab a more distinct feel than it had before. It could even position Google to venture into video posts in the future, given the current popularity of TikTok-style short-form video feeds that have now been cloned by Instagram and Snapchat.
For now, however, it’s a more standard feed. As you scroll, you can tap “Like” on posts you find interesting to help better inform your future recommendations. You can also tap “Follow” on businesses you want to hear more from, which will send their alerts to your Updates tab as well. Thankfully, there aren’t comments.
Google hopes the change will encourage users to visit the app more often in order to find out what’s happening in their area — whether that’s a new post from a business or a review from another user detailing some fun local activity, like a day trip or new hiking spot, for example.
The feature can be used when traveling or researching other areas, too, as the “Community Feed” you see is designated not based on where you live or your current location, but rather where you’re looking on the map.
The feed is the latest in what’s been a series of updates designed to make Google Maps more of a Facebook rival. Over the past few years, Google Maps has added features that allow users to follow businesses, much like Facebook does, as well as message those businesses directly in the app, similar to Messenger. Businesses, meanwhile, have been able to set up their own profiles in Google Maps, where they can add a logo and cover photo and pick a short name — a lot like what Facebook Pages offer today.
With the launch of a news feed-style feature, Google’s attempt to copy Facebook is even more obvious.
Google says the feature is rolling out globally on Google Maps for iOS and Android.
IT security software company Ivanti has acquired two security companies: enterprise mobile security firm MobileIron, and corporate virtual network provider Pulse Secure.
In a statement on Tuesday, Ivanti said it bought MobileIron for $872 million in stock, with 91% of the shareholders voting in favor of the deal; and acquired Pulse Secure from its parent company Siris Capital Group, but did not disclose the buying price.
The deals have now closed.
Ivanti was founded in 2017 after Clearlake Capital, which owned Heat Software, bought Landesk from private equity firm Thoma Bravo, and merged the two companies to form Ivanti. The combined company, headquartered in Salt Lake City, focuses largely on enterprise IT security, including endpoint, asset, and supply chain management. Since its founding, Ivanti went on to acquire several other companies, including U.K.-based Concorde Solutions and RES Software.
If MobileIron and Pulse Secure sound familiar, it’s because both companies have faced their fair share of headlines this year after hackers began exploiting vulnerabilities found in their technologies.
Just last month, the U.K. government’s National Cyber Security Centre published an alert warning of a remotely exploitable bug in MobileIron, patched in June, that allowed hackers to break into enterprise networks. CISA, U.S. Homeland Security’s cybersecurity advisory unit, said the bug was being actively exploited by advanced persistent threat (APT) groups, typically associated with state-backed hackers.
Meanwhile, CISA also warned that Pulse Secure was one of several corporate VPN providers with vulnerabilities that have since become a favorite among hackers, particularly ransomware actors, who abuse the bugs to gain access to a network and deploy the file-encrypting ransomware.
AWS has launched a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, will transform existing on-premises cameras into super-powered, computer vision-enabled surveillance devices.
Pitching the hardware as a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are being followed, or analyze traffic in retail stores, the new automation service is part of the theme of this AWS re:Invent event — automate everything.
Along with computer vision models that companies can develop using Amazon SageMaker, the new Panorama Appliance can run those models on video feeds from networked or network-enabled cameras.
Soon, AWS expects to have the Panorama SDK that can be used by device manufacturers to build Panorama-enabled devices.
Amazon has already pitched surveillance technologies to developers and the enterprise before. Back in 2017, the company unveiled DeepLens, which it began selling one year later. It was a way for developers to build prototype machine learning models and for Amazon to get comfortable with different ways of commercializing computer vision capabilities.
As we wrote in 2018:
DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models… Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up … DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model and a model that can distinguish between cats and dogs and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.
Amazon has had a lot of experience (and controversy) when it comes to the development of machine learning technologies for video. The company’s Rekognition software sparked protests and pushback which led to a moratorium on the use of the technology.
And the company has tried to incorporate more machine learning capabilities into its consumer facing Ring cameras as well.
Still, enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety, and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted to not only adapt to the current epidemic, but plan ahead for spaces and protocols that can help mitigate the severity of the next one.
One of the areas that is often left behind when it comes to cloud computing is the industrial sector. That’s because these facilities often have older equipment or proprietary systems that aren’t well suited to the cloud. Amazon wants to change that, and today the company announced a slew of new services at AWS re:Invent aimed at helping the industrial sector understand their equipment and environments better.
For starters, the company announced Amazon Monitron, which is designed to monitor equipment and alert the engineering team when that equipment could be breaking down. If industrial companies know when their equipment is failing, they can repair it on their own terms, rather than waiting until it breaks down at what could be an inopportune time.
As AWS CEO Andy Jassy says, an experienced engineer will know when equipment is breaking down by a certain change in sound or a vibration, but if the machine could tell you even before it got that far, it would be a huge boost to these teams.
“…a lot of companies either don’t have sensors, they’re not modern powerful sensors, or they are not consistent and they don’t know how to take that data from the sensors and send it to the cloud, and they don’t know how to build machine learning models, and our manufacturing companies we work with are asking [us] just solve this [and] build an end-to-end solution. So I’m excited to announce today the launch of Amazon Monitron, which is an end-to-end solution for equipment monitoring,” Jassy said.
The company builds a machine learning model that understands what a normal state looks like, then uses that information to find anomalies and send back information to the team in a mobile app about equipment that needs maintenance now based on the data the model is seeing.
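AWS hasn’t published Monitron’s internals, but the general idea described above — learn a baseline for a machine’s “normal state,” then flag deviations — can be sketched in a few lines. This is an illustrative statistical baseline, not Monitron’s actual model:

```python
import statistics

def fit_baseline(normal_readings):
    # Learn what "normal" looks like from historical sensor data.
    return statistics.mean(normal_readings), statistics.pstdev(normal_readings)

def find_anomalies(readings, mean, std, threshold=3.0):
    # Flag any reading more than `threshold` standard deviations from the baseline.
    return [abs(r - mean) / std > threshold for r in readings]

# Vibration amplitudes from a healthy machine...
normal = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.1, 0.9, 1.0]
mean, std = fit_baseline(normal)

# ...and a new batch of readings containing one clear spike.
flags = find_anomalies([1.0, 1.05, 5.0, 0.97], mean, std)
print(flags)  # [False, False, True, False] — the 5.0 spike would trigger a maintenance alert
```

A production system would use a learned model over many sensor channels rather than a single z-score, but the alerting flow is the same.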
For companies that have a more modern system and don’t need the complete package Monitron offers, Amazon has something as well. If you have modern sensors but no sophisticated machine learning model, Amazon can ingest your sensor data and apply its machine learning algorithms to find anomalies, just as it does with Monitron.
“So we have something for this group of customers as well to announce today, which is the launch of Amazon Lookout for Equipment, which does anomaly detection for industrial machinery,” he said.
In addition, the company announced the Panorama Appliance for companies using cameras at the edge who want to use more sophisticated computer vision, but might not have the most modern equipment to do that. “I’m excited to announce today the launch of the AWS Panorama Appliance which is a new hardware appliance [that allows] organizations to add computer vision to existing on premises smart cameras,” Jassy told AWS re:Invent today.
In addition, it also announced a Panorama SDK to help hardware vendors build smarter cameras based on Panorama.
All of these services are designed to give industrial companies access to sophisticated cloud and machine learning technology at whatever level they may require depending on where they are on the technology journey.
AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outpost service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.
In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.
The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.
As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail to build hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. For that reason, Jassy argues, none of the existing solutions from other vendors got any traction (though AWS’s competitors would surely deny this).
The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.
With Outpost, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.
Google today introduced a new mobile management and security solution, Android Enterprise Essentials, which, despite its name, is actually aimed at small to medium-sized businesses. The company explains this solution leverages Google’s experience in building Android Enterprise device management and security tools for larger organizations in order to come up with a simpler solution for those businesses with smaller budgets.
The new service includes the basics in mobile device management, with features that allow smaller businesses to require their employees to use a lock screen and encryption to protect company data. It also prevents users from installing apps outside the Google Play Store via the Google Play Protect service, and allows businesses to remotely wipe all the company data from phones that are lost or stolen.
As Google explains, smaller companies often handle customer data on mobile devices, but many of today’s remote device management solutions are too complex for small business owners, and are often complicated to get up-and-running.
Android Enterprise Essentials attempts to make the overall setup process easier by eliminating the need to manually activate each device. And because the security policies are applied remotely, there’s nothing the employees themselves have to configure on their own phones. Instead, businesses that want to use the new solution will just buy Android devices from a reseller to hand out or ship to employees with policies already in place.
Though primarily aimed at smaller companies, Google notes the solution may work for select larger organizations that want to extend some basic protections to devices that don’t require more advanced management solutions. The new service can also help companies get started with securing their mobile device inventory, before they move up to more sophisticated solutions over time, including those from third-party vendors.
The company has been working to better position Android devices for use in the workplace over the past several years, with programs like Android for Work, Android Enterprise Recommended, partnerships focused on ridding the Play Store of malware, advanced device protections for high-risk users, endpoint management solutions, and more.
Google says it will roll out Android Enterprise Essentials initially with distributors Synnex in the U.S. and Tech Data in the U.K. In the future, it will make the service available through additional resellers as it takes the solution global in early 2021. Google will also host an online launch event and demo in January for interested customers.
When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016, the company wanted to provide product and customer information for business users — not just developers.
At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.
Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.
“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”
That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.
“The way Q works: type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”
It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”
AWS launched a new service today, Amazon SageMaker Data Wrangler, that makes it easier for data scientists to prepare their data for machine learning training. In addition, the company is also launching SageMaker Feature Store, available in SageMaker Studio, a new service that makes it easier to name, organize, find and share machine learning features.
AWS is also launching SageMaker Pipelines, a new service that’s integrated with the rest of the platform and that provides a CI/CD service for machine learning to create and automate workflows, as well as an audit trail for model components like training data and configurations.
As AWS CEO Andy Jassy pointed out in his keynote at the company’s re:Invent conference, data preparation remains a major challenge in the machine learning space. Users first have to write the queries and code to get the data from their data stores, then write the queries to transform that data and combine features as necessary. All of that is work that doesn’t actually focus on building the models, but on the infrastructure around building them.
Data Wrangler comes with over 300 built-in, pre-configured data transformations that help users convert column types or impute missing data with mean or median values. There are also built-in visualization tools to help identify potential errors, as well as tools for checking for inconsistencies in the data and diagnosing them before the models are deployed.
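One of those built-in transformations — imputing missing values with the mean or median — amounts to something like the following. This is a plain-Python illustration of the technique, not Data Wrangler’s actual code:

```python
import statistics

def impute_missing(values, strategy="mean"):
    # Replace None entries with the mean or median of the observed values.
    observed = [v for v in values if v is not None]
    fill = statistics.mean(observed) if strategy == "mean" else statistics.median(observed)
    return [fill if v is None else v for v in values]

# A column with two missing entries:
ages = [23, None, 31, 27, None, 45]
print(impute_missing(ages, "median"))  # [23, 29.0, 31, 27, 29.0, 45]
print(impute_missing(ages, "mean"))    # [23, 31.5, 31, 27, 31.5, 45]
```

The value of a managed service is doing this — and the other 300-odd transformations — over large datasets without hand-writing each step.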
All of these workflows can then be saved in a notebook or as a script so that teams can replicate them — and used in SageMaker Pipelines to automate the rest of the workflow, too.
It’s worth noting that quite a few startups are working on the same problem. Wrangling machine learning data, after all, is one of the most common problems in the space. For the most part, though, companies still build their own tools, and as usual, that makes this area ripe for a managed service.
AWS has launched a new tool to let developers move data from one store to another called Glue Elastic Views.
At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.
The new service can take data from disparate silos and bring it together. The AWS ETL service allows programmers to write a little bit of SQL code to create a materialized view that can move data from one source data store to another.
For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch by setting up a materialized view to copy that data — all while the service manages the dependencies. That means if data changes in the source data store, it will automatically be updated in the other data stores where the data has been replicated, Jassy said.
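The materialized-view pattern Jassy describes — a derived copy that refreshes itself whenever its source changes — can be illustrated with a toy in-memory version. The class names and transform here are made up for illustration; this is not the Glue Elastic Views API:

```python
class SourceStore:
    # Stands in for a source store such as DynamoDB.
    def __init__(self):
        self.rows = {}
        self.views = []

    def put(self, key, value):
        self.rows[key] = value
        for view in self.views:  # a change propagates to every dependent view
            view.refresh()

class MaterializedView:
    # Stands in for a target store (e.g. a search index) kept in sync automatically.
    def __init__(self, source, transform):
        self.source = source
        self.transform = transform
        self.data = {}
        source.views.append(self)
        self.refresh()

    def refresh(self):
        # Re-derive the view from the current state of the source.
        self.data = {k: self.transform(v) for k, v in self.source.rows.items()}

src = SourceStore()
view = MaterializedView(src, transform=str.upper)
src.put("item1", "widget")
print(view.data)  # {'item1': 'WIDGET'} — the view updated with no extra code
```

The real service does this across distributed stores with SQL-defined views, but the dependency-tracking idea is the same.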
“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.
At its annual re:Invent developer conference, AWS today announced the launch of AWS Trainium, the company’s next-gen custom chip dedicated to training machine learning models. The company promises that it can offer higher performance than any of its competitors in the cloud, with support for TensorFlow, PyTorch and MXNet.
It will be available as EC2 instances and inside Amazon SageMaker, the company’s machine learning platform.
New instances based on these custom chips will launch next year.
The main arguments for these custom chips are speed and cost. AWS promises 30% higher throughput and 45% lower cost-per-inference compared to the standard AWS GPU instances.
In addition, AWS is also partnering with Intel to launch Habana Gaudi-based EC2 instances for machine learning training as well. Coming next year, these instances promise to offer up to 40% better price/performance compared to the current set of GPU-based EC2 instances for machine learning. These chips will support TensorFlow and PyTorch.
These new chips will make their debut in the AWS cloud in the first half of 2021.
Both of these new offerings complement AWS Inferentia, which the company launched at last year’s re:Invent. Inferentia, also a custom chip, is the inferencing counterpart to these training parts.
Trainium, it’s worth noting, will use the same SDK as Inferentia.
“While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets,” the AWS team writes. “This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute from scaling training workloads to deploying accelerated inference.”
AWS today opened its re:Invent conference with a surprise announcement: the company is bringing the Mac mini to its cloud. These new EC2 Mac instances, as AWS calls them, are now available in preview. They won’t come cheap, though.
The target audience here — and the only one AWS is targeting for now — is developers who want cloud-based build and testing environments for their Mac and iOS apps. But it’s worth noting that with remote access, you get a fully-featured Mac mini in the cloud, and I’m sure developers will find all kinds of other use cases for this as well.
Given the recent launch of the M1 Mac minis, it’s worth pointing out that the hardware AWS is using — at least for the time being — are i7 machines with six physical and 12 logical cores and 32 GB of memory. Using the Mac’s built-in networking options, AWS connects them to its Nitro System for fast network and storage access. This means you’ll also be able to attach AWS block storage to these instances, for example.
Unsurprisingly, the AWS team is also working on bringing Apple’s new M1 Mac minis into its data centers. The current plan is to roll this out “early next year,” AWS tells me, and definitely within the first half of 2021. Both AWS and Apple believe that the need for Intel-powered machines won’t go away anytime soon, though, especially given that a lot of developers will want to continue to run their tests on Intel machines for the foreseeable future.
David Brown, AWS’s vice president of EC2, tells me that these are completely unmodified Mac minis. AWS only turned off Wi-Fi and Bluetooth. It helps, Brown said, that the minis fit nicely into a 1U rack.
“You can’t really stack them on shelves — you want to put them in some sort of service sled [and] it fits very well into a service sled and then our cards and all the various things we have to worry about, from an integration point of view, fit around it and just plug into the Mac mini through the ports that it provides,” Brown explained. He admitted that this was obviously a new challenge for AWS. The only way to offer this kind of service is to use Apple’s hardware, after all.
It’s also worth noting that AWS is not virtualizing the hardware. What you’re getting here is full access to your own device that you’re not sharing with anybody else. “We wanted to make sure that we support the Mac mini that you would get if you went to the Apple store and you bought a Mac mini,” Brown said.
Unlike with other EC2 instances, whenever you spin up a new Mac instance, you have to pre-pay for the first 24 hours to get started. After those first 24 hours, prices are by the second, just like with any other instance type AWS offers today.
AWS will charge $1.083 per hour, billed by the second. That’s just under $26 to spin up a machine and run it for 24 hours. That’s quite a lot more than what some of the small Mac mini cloud providers are charging (we’re generally talking about $60 or less per month for their entry-level offerings and around two to three times as much for a comparable i7 machine with 32GB of RAM).
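The arithmetic behind those figures is straightforward; here is a quick sanity check (the 30-day month used for the comparison is our assumption):

```python
hourly_rate = 1.083                  # USD per hour, billed by the second

day_cost = hourly_rate * 24          # the pre-paid first 24 hours
month_cost = hourly_rate * 24 * 30   # running nonstop for a 30-day month

print(f"First 24 hours: ${day_cost:.2f}")    # $25.99 — "just under $26"
print(f"30 days:        ${month_cost:.2f}")  # $779.76, vs. roughly $120-180/month
                                             # for a comparable i7/32GB box at niche hosts
```

In other words, AWS is several times more expensive for sustained use; its pitch rests on elasticity and the surrounding services, not price.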
Until now, Mac mini hosting was a small niche in the hosting market, though it has its fair number of players, with the likes of MacStadium, MacinCloud, MacWeb and Mac Mini Vault vying for their share of the market.
With this new offering from AWS, they are now facing a formidable competitor, though they can still compete on price. AWS, however, argues that it can give developers access to all of the additional cloud services in its portfolio, which sets it apart from all of the smaller players.
“The speed that things happen at [other Mac mini cloud providers] and the granularity that you can use those services at is not as fine as you get with a large cloud provider like AWS,” Brown said. “So if you want to launch a machine, it takes a few days to provision and somebody puts a machine in a rack for you and gives you an IP address to get to it and you manage the OS. And normally, you’re paying for at least a month — or a longer period of time to get a discount. What we’ve done is you can literally launch these machines in minutes and have a working machine available to you. If you decide you want 100 of them, 500 of them, you just ask us for that and we’ll make them available. The other thing is the ecosystem. All those other 200-plus AWS services that you’re now able to utilize together with the Mac mini is the other big difference.”
Brown also stressed that Amazon makes it easy for developers to use different machine images, with the company currently offering images for macOS Mojave and Catalina, and Big Sur support coming “at some point in the future.” And developers can obviously create their own images with all of the software they need so they can reuse them whenever they spin up a new machine.
“Pretty much every one of our customers today has some need to support an Apple product and the Apple ecosystem, whether it’s iPhone, iPad or Apple TV, whatever it might be. They’re looking for that build use case,” Brown said. “And so the problem we’ve really been focused on solving is customers that say, ‘hey, I’ve moved all my server-side workloads to AWS, I’d love to be able to move some of these build workflows, because I still have some Mac minis in a data center or in my office that I have to maintain. I’d love that just to be on AWS.’ ”
AWS’s marquee launch customers for the new service are Intuit, Ring and mobile camera app FiLMiC.
“EC2 Mac instances, with their familiar EC2 interfaces and APIs, have enabled us to seamlessly migrate our existing iOS and macOS build-and-test pipelines to AWS, further improving developer productivity,” said Pratik Wadher, vice president of Product Development at Intuit. “We‘re experiencing up to 30% better performance over our data center infrastructure, thanks to elastic capacity expansion, and a high availability setup leveraging multiple zones. We’re now running around 80% of our production builds on EC2 Mac instances, and are excited to see what the future holds for AWS innovation in this space.”
The new Mac instances are now available in a number of AWS regions. These include US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland) and Asia Pacific (Singapore), with other regions to follow soon.
DeepMind, the AI technology company that’s part of Google parent Alphabet, has achieved a significant breakthrough in AI-based protein structure prediction. The company announced today that its AlphaFold system has officially solved a protein folding grand challenge that has flummoxed the scientific community for 50 years. The advance in DeepMind’s AlphaFold capabilities could lead to a significant leap forward in areas like our understanding of disease, as well as future drug discovery and development.
The test that AlphaFold passed essentially shows that the AI can correctly figure out, to a very high degree of accuracy (accurate to within the width of an atom, in fact), the structure of proteins in just days – a very complex task that is crucial to figuring out how diseases can best be treated, as well as to solving other big problems like working out how best to break down ecologically dangerous material like toxic waste. You may have heard of ‘Folding@Home,’ the program that allows people to contribute their own home computing (and formerly, game console) processing power to protein folding experiments. That massive global crowdsourcing effort was necessary because, using traditional methods, protein folding prediction takes years and is extremely expensive in terms of straight cost and computing resources.
DeepMind’s approach involves using an “attention-based neural network system” (basically a neural network that can focus on specific inputs in order to increase efficiency). It’s able to continually refine its own predictive graph of possible protein folding outcomes based on folding history, and provide highly accurate predictions as a result.
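AlphaFold’s actual architecture is far more elaborate, but the core attention operation — scoring inputs for relevance so the network can “focus” on the ones that matter — reduces to a few lines. This is an illustrative scaled dot-product attention, not DeepMind’s code:

```python
import math

def softmax(xs):
    # Turn raw scores into weights that sum to 1.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Score each key against the query, convert scores to weights,
    # then return the weighted average of the value vectors.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query matches the second key, so the output is pulled toward the second value.
out = attention(query=[1.0, 0.0],
                keys=[[0.0, 1.0], [1.0, 0.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
print(out)  # second component dominates the first
```

In a protein model, the “values” would be learned representations of residues and pairwise distances rather than toy vectors, and many attention layers are stacked and refined iteratively.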
How proteins fold – or go from being a random string of amino acids when originally created, to a complex 3D structure in their final stable form – is key to understanding how diseases are transmitted, as well as how common conditions like allergies work. If you understand the folding process, you can potentially alter it, halting an infection’s progress mid-stride, or conversely, correct mistakes in folding that can lead to neurodegenerative and cognitive disorders.
DeepMind’s technological leap could make accurately predicting these folds a much less time- and resource-consuming process, which could dramatically change the pace at which our understanding of diseases and therapeutics progresses. This could come in handy for addressing major global threats, including future pandemics like the COVID-19 crisis we’re currently enduring: predicting viral protein structures to a high degree of accuracy early in the appearance of any new threat like SARS-CoV-2 would speed up the development of potential treatments and vaccines.
The Supreme Court will hear arguments on Monday in a case that could lead to sweeping changes to America’s controversial computer hacking laws — and affect how millions use their computers and access online services.
The Computer Fraud and Abuse Act was signed into federal law in 1986 and predates the modern internet as we know it, but to this day it governs what constitutes hacking — or “unauthorized” access to a computer or network. The controversial law was designed to prosecute hackers, but critics have dubbed it the “worst law” on the technology law books, saying its outdated and vague language fails to protect good-faith hackers who find and disclose security vulnerabilities.
At the center of the case is Nathan Van Buren, a former police sergeant in Georgia. Van Buren used his access to a police license plate database to search for an acquaintance in exchange for cash. Van Buren was caught, and prosecuted on two counts: accepting a kickback for accessing the police database, and violating the CFAA. The first conviction was overturned, but the CFAA conviction was upheld.
Van Buren may have been allowed to access the database by way of his police work, but whether he exceeded his access remains the key legal question.
Orin Kerr, a law professor at the University of California, Berkeley, said Van Buren v. United States was an “ideal case” for the Supreme Court to take up. “The question couldn’t be presented more cleanly,” he argued in a blog post in April.
The Supreme Court will try to clarify the decades-old law by deciding what the law means by “unauthorized” access. But that’s not a simple answer in itself.
How the Supreme Court will determine what “unauthorized” means is anybody’s guess. The court could define unauthorized access anywhere from violating a site’s terms of service to logging into a system that a person has no user account for.
A broad reading of the CFAA could criminalize anything from lying on a dating profile to sharing the password to a streaming service to using a work computer for personal use in violation of an employer’s policies, said Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford’s Center for Internet and Society.
But the Supreme Court’s eventual ruling could also have broad ramifications on good-faith hackers and security researchers, who purposefully break systems in order to make them more secure. Hackers and security researchers have for decades operated in a legal grey area because the law as written exposes their work to prosecution, even if the goal is to improve cybersecurity.
Tech companies have for years encouraged hackers to privately reach out with security bugs. In return, the companies fix their systems and pay the hackers for their work. Mozilla, Dropbox, and Tesla are among the few companies that have gone a step further by promising not to sue good-faith hackers under the CFAA. Not all companies welcome the scrutiny, however. Some have bucked the trend by threatening to sue researchers over their findings, and in some cases have actively launched legal action to prevent unflattering headlines.
Security researchers are no stranger to legal threats, but a decision by the Supreme Court that rules against Van Buren could have a chilling effect on their work, and drive vulnerability disclosure underground.
“If there are potential criminal (and civil) consequences for violating a computerized system’s usage policy, that would empower the owners of such systems to prohibit bona fide security research and to silence researchers from disclosing any vulnerabilities they find in those systems,” said Pfefferkorn. “Even inadvertently coloring outside the lines of a set of bug bounty rules could expose a researcher to liability.”
“The Court now has the chance to resolve the ambiguity over the law’s scope and make it safer for security researchers to do their badly-needed work by narrowly construing the CFAA,” said Pfefferkorn. “We can ill afford to scare off people who want to improve cybersecurity.”
The Supreme Court will likely rule on the case later this year, or early next.
Amazon Web Services is currently having an outage, taking a large swath of the internet down with it.
Several AWS services are down as of early Wednesday, according to its status pages. That means any app, site or service that relies on AWS might also be down. (As I found out the hard way when my Roomba refused to connect to the app.)
Amazon says the issue is largely localized to North America. The company didn’t give a reason for the outage, only that it was experiencing increased error rates and that it was working on a resolution.
So far a number of companies that rely on AWS have tweeted out that they’re experiencing issues as a result, including Adobe and Roku. We’ll keep you updated as this outage continues.
An Amazon AWS outage is currently impacting Adobe Spark so you may be having issues accessing/editing your projects. We are actively working with AWS and will report when the issue has subsided. https://t.co/uoHPf44HjL for current Spark status. We apologize for any inconvenience!
— Adobe Spark (@AdobeSpark) November 25, 2020
We are working to resolve this quickly. We are impacted by the widespread AWS outage and hope to get our customers up and running soon. Most streaming should work as expected during this time.
— Roku Support (@RokuSupport) November 25, 2020
We do apologize for the inconvenience! Unfortunately, the issue is stemming from an AWS server outage, which is affecting many companies. We hope that the issue is resolved soon!
— Shipt (@Shipt) November 25, 2020
Data platform Splunk continues to make acquisitions as it works to build out its recently launched observability platform. After acquiring Plumbr and Rigor last month, the company today announced that it has acquired Flowmill, a Palo Alto-based network observability startup. Flowmill focuses on helping its users find network performance issues in their cloud infrastructure in real time and measure their traffic by service to help them control cost.
Like so many other companies in this space now, Flowmill utilizes eBPF, the Linux kernel’s relatively new capability to run sandboxed code inside it without having to change the kernel or load kernel modules. That makes it ideal for monitoring applications.
“Observability technology is rapidly increasing in both sophistication and ability to help organizations revolutionize how they monitor their infrastructure and applications. Flowmill’s innovative NPM solution provides real-time observability into network behavior and performance of distributed cloud applications, leveraging extended Berkeley Packet Filter (eBPF) technologies,” said Tim Tully, Splunk’s chief technology officer. “We’re excited to bring Flowmill’s visionary NPM technology into our Observability Suite as Splunk continues to deliver best-in-class observability capabilities to our customers.”
While Splunk has made some larger acquisitions, including its $1.05 billion purchase of SignalFx, it’s building out its observability platform by picking up small startups that offer very specific capabilities. It could probably build all of these features in-house, but the company clearly believes that it has to move fast to get a foothold in this growing market as enterprises look for new observability tools as they modernize their tech stacks.
“Flowmill’s approach to building systems that support full-fidelity, real-time, high-cardinality ingestions and analysis aligns well with Splunk’s vision for observability,” said Flowmill CEO Jonathan Perry. “We’re thrilled to join Splunk and bring eBPF, next-generation NPM to the Splunk Observability Suite.”
The companies didn’t disclose the purchase price, but Flowmill previously raised funding from Amplify, Felicis Ventures, WestWave Capital and UpWest.
Google has teamed up with Disney and Lucasfilm to bring the Star Wars streaming series “The Mandalorian” to augmented reality. The company announced this morning the launch of a new Android AR app, “The Mandalorian” AR Experience, which will display iconic moments from the first season of the show in AR, allowing fans to retrace the Mandalorian’s steps, find the Child, harness the Force, and more, according to the app’s Play Store description.
In the app, users will be able to follow the trail of Mando, Din Djarin and the Child, interact with the characters, and create scenes that can be shared with friends.
New AR content will be released for the app on Mondays, starting today, Nov. 23, and continuing for nearly a year to wrap on Oct. 31, 2021. That makes this a longer-term promotion than some of the other Star Wars experiences Google has offered in the past.
Image Credits: Google/Lucasfilm
Meanwhile, the app itself takes advantage of Google’s developer platform for building augmented reality experiences, ARCore, in order to create scenes that interact with the user’s surroundings. This more immersive design means fans will be able to unlock additional effects based on their actions. The app also leverages Google’s new ARCore Depth API, which allows the app to enable occlusion. This makes the AR scenes blend more naturally with the environment that’s seen through the smartphone’s camera.
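Occlusion itself is conceptually simple: for each pixel, compare the real-world depth reported by the depth map with the depth of the virtual object, and draw whichever surface is closer to the camera. The snippet below is a simplified, hypothetical illustration of that per-pixel test — not ARCore’s actual rendering code:

```python
def composite_with_occlusion(camera_px, virtual_px, real_depth, virtual_depth):
    """Per-pixel depth test for AR occlusion (illustrative only).

    camera_px:     the live camera image, one entry per pixel
    virtual_px:    the rendered virtual object, or None where it is absent
    real_depth:    per-pixel distance to the real-world surface (meters)
    virtual_depth: per-pixel distance to the virtual object (meters)

    The virtual object is drawn only where it is closer to the camera
    than the real surface; otherwise the real scene hides it.
    """
    out = []
    for cam, virt, rd, vd in zip(camera_px, virtual_px, real_depth, virtual_depth):
        if virt is not None and vd < rd:
            out.append(virt)   # virtual object in front: draw it
        else:
            out.append(cam)    # real surface in front: it occludes the object
    return out
```

For example, a virtual character standing two meters away disappears behind a real wall one meter away — which is what makes depth-based AR scenes blend naturally with the environment.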
However, because the app is a showcase for Google’s latest AR technologies, it won’t work with all Android devices.
Google says the app will only support “compatible 5G Android devices,” which includes its 5G Google Pixel smartphones and other select 5G Android phones that have the Google Play Services for AR updated. You can check to see if your Android phone is supported on a list provided on the Google Developers website. Other phones may be supported in the future, the company also notes.
While the experience requires a 5G-capable Android device, Google says that you don’t have to be on an active 5G connection to use the app. Instead, the requirement is more about the technologies these devices include and not the signal itself.
Google has teamed up with Lucasfilm many times over the past several years for promotional marketing campaigns. These are not typically considered ads, because they give both companies the opportunity to showcase their services or technologies. For example, Google allowed users to give its apps a Star Wars-themed makeover back in 2015, which benefited its own services like Gmail, Maps, YouTube, Chrome and others. It has also introduced both AR and VR experiences featuring Star Wars content over the past several years.
“The Mandalorian” AR Experience is a free download on the Play Store.
Twitter is the latest social media site to let users experiment with posting disappearing content. Fleets, as Twitter calls them, let mobile users post short stories — photos or videos with overlaid text — that are set to vanish after 24 hours.
But a bug meant that fleets weren’t deleting properly and could still be accessed long after 24 hours had expired. Details of the bug were posted in a series of tweets on Saturday, less than a week after the feature launched.
full disclosure: scraping fleets from public accounts without triggering the read notification
the endpoint is: https://t.co/332FH7TEmN
— cathode gay tube (@donk_enby) November 20, 2020
The bug effectively allowed anyone to access and download a user’s fleets without triggering a notification that the user’s fleet had been read and by whom. The implication is that this bug could be abused to archive a user’s fleets after they expire.
Using an app designed to interact with Twitter’s back-end systems via its developer API, it was possible to retrieve a list of fleets from the server. Each fleet had its own direct URL, which when opened in a browser would load the fleet as an image or a video. But even after the 24 hours elapsed, the server would still return links to fleets that had already disappeared from view in the Twitter app.
When reached, a Twitter spokesperson said a fix was on the way. “We’re aware of a bug accessible through a technical workaround where some Fleets media URLs may be accessible after 24 hours. We are working on a fix that should be rolled out shortly.”
Although Twitter acknowledged that the fix means fleets should now expire properly, it said it won’t delete a fleet from its servers for up to 30 days — and that it may hold onto fleets for longer if they violate its rules. We confirmed that fleets could still be loaded from their direct URLs even after they expired.
Fleet with caution.
Autodesk, the publicly listed U.S. software and services company that targets engineering and design industries, acquired Norway’s Spacemaker this week. The startup has developed AI-supported software for urban development, something Autodesk CEO Andrew Anagnost broadly calls generative design.
The price of the acquisition is $240 million in a mostly all-cash deal. Spacemaker’s VC backers included European firms Atomico and Northzone, which co-led the company’s $25 million Series A round in 2019. Other investors on the cap table include Nordic real estate innovator NREP, Nordic property developer OBOS, U.K. real estate technology fund Round Hill Ventures and Norway’s Construct Venture.
In an interview with TechCrunch, Anagnost shared more on Autodesk’s strategy since it transformed into a cloud-first company and what attracted him to the 115-person Spacemaker team. We also delved more into Spacemaker’s mission to augment the work of humans and not only speed up the urban development design and planning process but also improve outcomes, including around sustainability and quality of life for the people who will ultimately live in the resulting spaces.
I also asked whether Spacemaker sold out too early, and why U.S.-headquartered Autodesk acquired a startup based in Norway over numerous competitors closer to home. What follows is a transcript of our Zoom call, lightly edited for length and clarity.
TechCrunch: Let’s start high-level. What is the strategy behind Autodesk acquiring Spacemaker?
Andrew Anagnost: I think Autodesk, for a while … has had a very clearly stated strategy about using the power of the cloud; cheap compute in the cloud and machine learning/artificial intelligence to kind of evolve and change the way people design things. This is something strategically we’ve been working toward for quite a while both with the products we make internally, with the capabilities we roll out that are more cutting edge and with also our initiative when we look at companies we’re interested in acquiring.
As you probably know, Spacemaker really stands out in terms of our space, the architecture space, and the engineering and owner space, in terms of applying cloud computing, artificial intelligence, data science, to really helping people explore multiple options and come up with better decisions. So it’s completely in line with the strategy that we had. We’ve been looking at them for over a year in terms of whether or not they were the right kind of company for us.
Culturally, they’re the right company. Vision and strategy-wise, they’re the right company. Also, talent-wise, they’re the right company. They really do stand out. They’ve built a real, practical, usable application that helps a segment of our population use machine learning to really create better outcomes in a critical area, which is urban redevelopment and development.
So it’s totally aligned with what we’re trying to do. It’s not only a platform for the product they do today — they have a great product that’s getting increasing adoption — but we also see the team playing an important role in the future of where we’re taking our applications. We actually see what Spacemaker has done reaching closer and closer to what Revit does [an existing Autodesk product]. Having those two applications collaborate more closely together to evolve the way people assess not only these urban planning designs that they’re focused on right now, but also in the future, other types of building projects and building analysis and building option exploration.
How did you discover Spacemaker? I mean, I’m guessing you probably looked at other companies in the space.
We’ve been watching this space for a while; the application that Spacemaker has built we would characterize it, from our terminology, as generative design for urban planning, meaning the machine generating options and option explorations for urban planning type applications, and it overlaps both architecture and owners.
For the past year and a half, Google has been rolling out its next-generation messaging to Android users to replace the old, clunky, and insecure SMS text messaging. Now the company says that rollout is complete, and plans to bring end-to-end encryption to Android messages next year.
Google’s Rich Communication Services is Android’s answer to Apple’s iMessage, and brings typing indicators, read receipts, and the other features you’d expect from most messaging apps these days.
In a blog post Thursday, Google said it plans to roll out end-to-end encryption — starting with one-on-one conversations — leaving open the possibility of end-to-end encrypted group chats. It’ll become available to beta testers, who can sign up here, beginning later in November and continue into the new year.
End-to-end encryption prevents anyone — even Google — from reading messages as they travel between sender and the recipient.
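The property is easy to illustrate with a toy cipher. The sketch below uses a one-time-pad XOR — real deployments rely on vetted cryptographic protocols, not this — but it shows the key point: the relay server only ever handles ciphertext, and only the endpoints holding the shared key can recover the message:

```python
import secrets

def keygen(n):
    # Shared secret known only to the two endpoints (a toy one-time pad).
    return secrets.token_bytes(n)

def encrypt(key, plaintext):
    # XOR each plaintext byte with the corresponding key byte.
    assert len(key) >= len(plaintext), "one-time pad must cover the message"
    return bytes(k ^ p for k, p in zip(key, plaintext))

# XOR is its own inverse, so decryption is the same operation.
decrypt = encrypt

def relay(ciphertext):
    # The server in the middle only ever sees (and forwards) ciphertext;
    # without the key it cannot read the message.
    return ciphertext
```

Sending `decrypt(key, relay(encrypt(key, msg)))` round-trips the original message, while the relay itself handles only unreadable bytes — the same guarantee, achieved with far stronger cryptography, that end-to-end encrypted messaging provides.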
Google dipped its toes into the end-to-end encrypted messaging space in 2016 with the launch of Allo, an app that immediately drew criticism from security experts for not enabling the security feature by default. Two years later, Google killed off the project altogether.
This time around, Google learned its lesson. Android messages will default to end-to-end encryption once the feature becomes available, and won’t revert back to SMS unless a user in the conversation loses or disables RCS.
Google is launching a major redesign of its Google Pay app on both Android and iOS today. Like similar phone-based contactless payment services, Google Pay — or Android Pay as it was known then — started out as a basic replacement for your credit card. Over time, the company added a few more features on top of that but the overall focus never really changed. After about five years in the market, Google Pay now has about 150 million users in 30 countries. With today’s update and redesign, Google is keeping all the core features intact but also taking the service in a new direction with a strong emphasis on helping you manage your personal finances (and maybe get a deal here and there as well).
Google is also partnering with 11 banks to launch a new kind of bank account in 2021. Called Plex, these mobile-first bank accounts will have no monthly fees, overdraft charges or minimum balances. The banks will own the accounts but the Google Pay app will be the main conduit for managing these accounts. The launch partners for this are Citi and Stanford Federal Credit Union.
“What we’re doing in this new Google Pay app, think of it is combining three things into one,” Google director of product management Josh Woodward said as he walked me through a demo of the new app. “The three things are three tabs in the app. One is the ability to pay friends and businesses really fast. The second is to explore offers and rewards, so you can save money at shops. And the third is getting insights about your spending so you can stay on top of your money.”
Paying friends and businesses was obviously always at the core of Google Pay — but the emphasis here has shifted a bit. “You’ll notice that everything in the product is built around your relationships,” Caesar Sengupta, Google’s lead for Payments and Next Billion Users, told me. “It’s not about long lists of transactions or weird numbers. All your engagements pivot around people, groups, and businesses.”
It’s maybe no surprise then that the feature that’s now front and center in the app is P2P payments. You can also still pay and request money through the app as usual, but as part of this overhaul, Google is now making it easier to split restaurant bills with friends, for example, or your rent and utilities with your roommates — and to see who already paid and who is still delinquent. Woodward tells me that Google built this feature after its user research showed that splitting bills remains a major pain point for its users.
In this same view, you can also find a list of companies you have recently transacted with — either by using the Google Pay tap-and-pay feature or because you’ve linked your credit card or bank account with the service. From there, you can see all of your recent transactions with those companies.
Maybe the most important new feature Google is enabling with this update is indeed the ability to connect your bank accounts and credit cards to Google Pay so that it can pull in information about your spending. It’s basically Mint-light inside the Google Pay app. This is what enables the company to offer a lot of the other new features in the app. Google says it is working with “a few different aggregators” to enable this feature, though it didn’t go into details about who its partners are. It’s worth stressing that this, like all of the new features here, is off by default and opt-in.
The basic idea here is similar to that of other personal finance aggregators. At its most basic, it lets you see how much money you spent and how much you still have. But Google is also using its smarts to show you some interesting insights into your spending habits. On Monday, it’ll show you how much you spent on the weekend, for example.
“Think of these almost as like stories in a way,” Woodward said. “You can swipe through them so you can see your large transactions. You can see how much you spent this week compared to a typical week. You can look at how much money you’ve sent to friends and which friends and where you’ve spent money in the month of November, for example.”
This also then enables you to easily search for a given transaction using Google’s search capabilities. Since this is Google, that search should work pretty well and in a demo, the team showed me how a search for ‘Turkish’ brought up a transaction at a kebab restaurant, for example, even though it didn’t have ‘Turkish’ in its name. If you regularly take photos of your receipts, you can also now search through these from Google Pay and drill down to specific things you bought — as well as receipts and bills you receive in your Gmail inbox.
Also new inside of Google Pay is the ability to see and virtually clip coupons that are then linked to your credit card, so you don’t need to do anything else beyond using that linked credit card to get extra cashback on a given transaction, for example. If you opt in, these offers can also be personalized.
The team also worked with the Google Lens team to now let you scan products and QR codes to look for potential discounts.
As for the core payments function, Google is also enabling a new capability that will let you use contactless payments at 30,000 gas stations now (often with a discount). The partners for this are Shell, ExxonMobil, Phillips 66, 76 and Conoco.
In addition, you’ll also soon be able to pay for parking in over 400 cities inside the app. Not every city is Portland, after all, and has a Parking Kitty. The first cities to get this feature are Austin, Boston, Minneapolis, and Washington, D.C., with others to follow soon.
It’s one thing to let Google handle your credit card transaction but it’s another to give it all of this — often highly personal — data. As the team emphasized throughout my conversation with them, Google Pay will not sell your data to third parties or even the rest of Google for ad targeting, for example. All of the personalized features are also off by default and the team is doing something new here by letting you turn them on for a three-month trial period. After those three months, you can then decide to keep them on or off.
In the end, whether you want to use the optional features and have Google store all of this data is probably a personal choice and not everybody will be comfortable with it. The rest of the core Google Pay features aren’t changing, after all, so you can still make your NFC payments at the supermarket with your phone just like before.