By the end of the year, more than 10 car models from Volvo, GM, Renault and Polestar will be powered by the Android Automotive operating system — and all of the built-in Google apps and services that come with it. Now, the company is making it easier for third-party developers to bring their navigation, EV charging, parking and media apps directly to a car’s screen.
Google announced Tuesday at its annual developer conference that it's extending its Android for Cars App Library, which is available as part of Jetpack, to support the Android Automotive operating system. This is good news for developers, who can now create an app that is compatible with two different, but sometimes overlapping, platforms: Android Automotive OS and Android Auto. It also means developers can create one app that should work seamlessly across various makes and models of vehicles.
Google said Tuesday it is already working with Early Access Partners, including Parkwhiz, Plugshare, Sygic, ChargePoint, Flitsmeister, SpotHero and others to bring apps in these categories to cars powered by Android Automotive OS.
Android Automotive OS shouldn’t be confused with Android Auto, which is a secondary interface that lies on top of an operating system. Android Auto is an app that runs on the user’s phone and wirelessly communicates with the vehicle’s infotainment system. Android Automotive OS, meanwhile, is modeled after Google’s open-source mobile operating system that runs on Linux. But instead of running smartphones and tablets, Google modified it so automakers could use it in their cars. Google has offered an open-source version of this OS to automakers for some time. But in recent years, automakers have worked with the tech company to natively build in an Android OS that is embedded with all the Google apps and services such as Google Assistant, Google Maps and the Google Play Store.
Many third-party developers like Spotify have used the Android for Cars App Library to create and publish their Android Auto apps to the Play Store. By extending the library to Android Automotive OS, developers will only need to build their app once.
Two years ago, Google opened its Android Automotive operating system up to third-party developers to bring music and other entertainment apps into vehicle infotainment systems. Polestar 2, the all-electric vehicle developed by Volvo’s standalone electric performance brand, was the first. And more have followed, including the Volvo XC40 Recharge.
Companies interested in participating in the early access program will have to fill out this interest form, according to Google.
This year’s I/O event from Google was heavy on the “we’re building something cool” and light on the “here’s something you can use or buy tomorrow.” But there were also some interesting surprises from the semi-live event held in and around the company’s Mountain View campus. Read on for all the interesting bits.
We’ve known Android 12 was on its way for months, but today was our first real look at the next big change for the world’s most popular operating system. A new look, called Material You (yes), focuses on users, apps, and things like time of day or weather to change the UI’s colors and other aspects dynamically. Some security features like new camera and microphone use indicators are coming, as well as some “private compute core” features that use AI processes on your phone to customize replies and notifications. There’s a beta out today for the adventurous!
Subhed says it all (but read more here). Up from 2 billion in 2017.
Millions of people and businesses use Google’s suite of productivity and collaboration tools, but the company felt it would be better if they weren’t so isolated. Now with Smart Canvas you can have a video call as you work on a shared doc together and bring in information and content from your Drive and elsewhere. Looks complicated, but potentially convenient.
It’s a little too easy to stump AIs if you go off script, asking something in a way that to you seems normal but to the language model is totally incomprehensible. Google’s LaMDA is a new natural language processing technique that makes conversations with AI models more resilient to unusual or unexpected queries, making it more like a real person and less like a voice interface for a search function. They demonstrated it by showing conversations with anthropomorphized versions of Pluto and a paper airplane. And yes, it was exactly as weird as it sounds.
One of the most surprising things at the keynote had to be Project Starline, a high-tech 3D video call setup that uses Google’s previous research and Lytro DNA to show realistic 3D avatars of people on both sides of the system. It’s still experimental but looks very promising.
Few people want to watch a movie on their smartwatch, but lots of people like to use it to track their steps, meditation, and other health-related practices. Wear OS is getting a bunch of Fitbit DNA infused, with integrated health tracking stuff and a lot of third party apps like Calm and Flo.
These two mobile giants have been fast friends in the phone world for years, but when it comes to wearables, they’ve remained rivals. In the face of Apple’s utter dominance in the smartwatch space, however, the two have put aside their differences and announced they’ll work on a “unified platform” so developers can make apps that work on both Tizen and Wear OS.
Apparently Google and Samsung realized that no one is going to buy foldable devices unless they do some really cool things, and that collaboration is the best way forward there. So the two companies will also be working together to improve how folding screens interact with Android.
The smart TV space is a competitive one, and after a few starts Google has really made it happen with Android TV, which the company announced had reached 80 million monthly active devices — putting it, Roku, and Amazon (the latter two with around 50 million monthly active accounts) all in the same league. The company also showed off a powerful new phone-based remote app that will (among other things) make putting in passwords way better than using the d-pad on the clicker. Developers will be glad to hear there’s a new Google TV emulator and Firebase Test Lab will have Android TV support.
Well, assuming you have a really new Android device with a UWB chip in it. Google is working with BMW first, and other automakers soon most likely, to make a new method for unlocking the car when you get near it, or exchanging basic commands without the use of a fob or Bluetooth. Why not Bluetooth you ask? Well, Bluetooth is old. UWB is new.
Google and its sibling companies are both leaders in AI research and popular platforms for others to do their own AI work. But its machine learning development tools have been a bit scattershot — useful but disconnected. Vertex is a new development platform for enterprise AI that puts many of these tools in one place and integrates closely with optional services and standards.
Google does a lot of machine learning stuff. Like, a LOT a lot. So they are constantly working to make better, more efficient computing hardware to handle the massive processing load these AI systems create. TPUv4 is the latest, twice as fast as the old ones, and will soon be packaged into 4,096-strong pods. Why 4,096 and not an even 4,000? The same reason any other number exists in computing: powers of 2 (4,096 is 2¹²).
Google Photos is a great service, and the company is trying to leverage the huge library of shots most users have to find patterns like “selfies with the family on the couch” and “traveling with my lucky hat” as fun ways to dive back into the archives. Great! But they’re also taking two photos taken a second apart and having an AI hallucinate what comes between them, leading to a truly weird looking form of motion that shoots deep, deep into the uncanny valley, from which hopefully it shall never emerge.
Google’s “AI makes a hair appointment for you” service Duplex didn’t exactly set the world on fire, but the company has found a new way to apply it. If you forget your password, Duplex will automatically fill in your old password, pick a new one and let you copy it before submitting it to the site, all by interacting with the website’s normal reset interface. It’s only going to work on Twitter and a handful of other sites via Chrome for now, but hey, if it happens to you a lot, maybe it’ll save you some trouble.
The aged among our readers may remember Froogle, Google’s ill-fated shopping interface. Well, it’s back… kind of. The plan is to include lots of product information, from price to star rating, availability and other info, right in the Google interface when you search for something. It sucks up this information from retail sites, including whether you have something in your cart there. How all this benefits anyone more than Google is hard to imagine, but naturally they’re positioning it as wins all around. Especially for new partner Shopify. (Me, I use DuckDuckGo.)
A lot of developers have embraced Google’s Flutter cross-platform UI toolkit. The latest version, announced today, adds some safety settings, performance improvements, and workflow updates. There’s lots more coming, too.
Popular developer platform Firebase got a bunch of new and updated features as well. Remote Config gets a nice update allowing developers to customize the app experience to individual user types, and App Check provides a basic level of security against external threats. There’s plenty here for devs to chew on.
The beta for the next version of Google’s Android Studio environment is coming soon, and it’s called Arctic Fox. It’s got a brand new UI building toolkit called Jetpack Compose, and a bunch of accessibility testing built in to help developers make their apps more accessible to people with disabilities. Connecting to devices to test on them should be way easier now too. Oh, and there’s going to be a version of Android Studio for Apple Silicon.
At its I/O developer conference, Google today announced a slew of updates to its Firebase developer platform, which, as the company also announced, now powers over 3 million apps.
There are a number of major updates here, most of which center on improving existing tools like Firebase Remote Config and Firebase’s monitoring capabilities, but there are also some completely new features, including the ability to create Android App Bundles and a new security tool called App Check.
“Helping developers be successful is what makes Firebase successful,” Firebase product manager Kristen Richards told me ahead of today’s announcements. “So we put helpfulness and helping developers at the center of everything that we do.” She noted that during the pandemic, Google saw a lot of people who started to focus on app development — both as learners and as professional developers. But the team also saw a lot of enterprises move to its platform as those companies looked to quickly bring new apps online.
Maybe the marquee Firebase announcement at I/O is the updated Remote Config. That’s always been a very powerful feature that allows developers to make changes to live production apps on the go without having to release a new version of their app. Developers can use this for anything from A/B testing to providing tailored in-app experiences to specific user groups.
With this update, Google is introducing updates to the Remote Config console, to make it easier for developers to see how they are using this tool, as well as an updated publish flow and redesigned test results pages for A/B tests.
What’s most important, though, is that Google is taking Remote Config a step further now by launching a new Personalization feature that helps developers automatically optimize the user experience for individual users. “It’s a new feature of [Remote Config] that uses Google’s machine learning to create unique individual app experiences,” Richards explained. “It’s super simple to set up and it automatically creates these personalized experiences that’s tailored to each individual user. Maybe you have something that you would like, which would be something different for me. In that way, we’re able to get a tailored experience, which is really what customers expect nowadays. I think we’re all expecting things to be more personalized than they have in the past.”
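A/B tests and staged rollouts like the ones Remote Config enables generally depend on deterministic bucketing, so a given user sees the same variant on every session. Here is a minimal Python sketch of that idea; the hashing scheme, function name and percentage split are illustrative assumptions, not Firebase's actual implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, rollout_percent: int) -> str:
    """Deterministically bucket a user into 'treatment' or 'control'.

    Hashing the user ID together with the experiment name means the
    same user always lands in the same bucket for a given experiment,
    while buckets stay independent across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # roughly uniform value in [0, 100)
    return "treatment" if bucket < rollout_percent else "control"

# The assignment is stable across calls for the same user and experiment:
first = assign_variant("user-42", "new_onboarding", 50)
second = assign_variant("user-42", "new_onboarding", 50)
assert first == second
```

Because the bucket is derived from a hash rather than stored state, the client needs no server round trip or local database to stay consistent.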
Google is also improving a number of Firebase’s analytics and monitoring capabilities, including its Crashlytics service for figuring out app crashes. For game developers, that means improved support for games written with the help of the Unity platform, for example. But for all developers, the fact that Firebase’s Performance Monitoring service now processes data in real time is a major update — previously, performance data (especially on launch day) arrived with a delay of almost half a day.
Firebase is also now finally adding support for Android App Bundles, Google’s relatively new format for packaging up all of an app’s code and resources, with Google Play optimizing the actual APK with the right resources for the kind of device the app gets installed on. This typically leads to smaller downloads and faster installs.
On the security side, the Firebase team is launching App Check, now available in beta. App Check helps developers guard their apps against outside threats and is meant to automatically block any traffic to online resources like Cloud Storage, Realtime Database and Cloud Functions for Firebase (with others coming soon) that doesn’t provide valid credentials.
The other update worth mentioning here is to Firebase Extensions, which launched a while ago but is getting support for a few more extensions today. These are new extensions from Algolia, Mailchimp and MessageBird that help bring features like Algolia’s search capabilities or MessageBird’s communications features directly to the platform. Google itself is also launching a new extension that helps developers detect comments that could be considered “rude, disrespectful, or unreasonable in a way that will make people leave a conversation.”
At Google I/O today Google Cloud announced Vertex AI, a new managed machine learning platform that is meant to make it easier for developers to deploy and maintain their AI models. It’s a bit of an odd announcement at I/O, which tends to focus on mobile and web developers and doesn’t traditionally feature a lot of Google Cloud news, but the fact that Google decided to announce Vertex today goes to show how important it thinks this new service is for a wide range of developers.
The launch of Vertex is the result of quite a bit of introspection by the Google Cloud team. “Machine learning in the enterprise is in crisis, in my view,” Craig Wiley, the director of product management for Google Cloud’s AI Platform, told me. “As someone who has worked in that space for a number of years, if you look at the Harvard Business Review or analyst reviews, or what have you — every single one of them comes out saying that the vast majority of companies are either investing or are interested in investing in machine learning and are not getting value from it. That has to change. It has to change.”
Wiley, who was also the general manager of AWS’s SageMaker AI service from 2016 to 2018 before coming to Google in 2019, noted that Google and others who were able to make machine learning work for themselves saw how it can have a transformational impact, but he also noted that the way the big clouds started offering these services was by launching dozens of services, “many of which were dead ends,” according to him (including some of Google’s own). “Ultimately, our goal with Vertex is to reduce the time to ROI for these enterprises, to make sure that they can not just build a model but get real value from the models they’re building.”
Vertex, then, is meant to be a very flexible platform that allows developers and data scientists across skill levels to quickly train models — Google says it takes about 80% fewer lines of code to train a model than on some competing platforms, for example — and then manage the entire lifecycle of those models.
The service is also integrated with Vizier, Google’s AI optimizer that can automatically tune hyperparameters in machine learning models. This greatly reduces the time it takes to tune a model and allows engineers to run more experiments and do so faster.
Vertex also offers a “Feature Store” that helps its users serve, share and reuse machine learning features, as well as Vertex Experiments, which helps them accelerate the deployment of their models into production with faster model selection.
Deployment is backed by a continuous monitoring service and Vertex Pipelines, a rebrand of Google Cloud’s AI Platform Pipelines that helps teams manage the workflows involved in preparing and analyzing data for their models, training them, evaluating them and deploying them to production.
To give a wide variety of developers the right entry points, the service provides three interfaces: a drag-and-drop tool, notebooks for advanced users and — and this may be a bit of a surprise — BigQuery ML, Google’s tool for using standard SQL queries to create and execute machine learning models in its BigQuery data warehouse.
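The BigQuery ML entry point really is plain SQL: a `CREATE MODEL` statement with a model type and a training query. The sketch below composes such a statement in Python; the dataset, table and column names are invented for illustration, and the string would be submitted through any BigQuery client (e.g. the google-cloud-bigquery library):

```python
def churn_model_sql(project: str, dataset: str) -> str:
    """Compose a BigQuery ML CREATE MODEL statement.

    Trains a logistic regression classifier on a hypothetical
    user-activity table, using the 'churned' column as the label.
    """
    return f"""
    CREATE OR REPLACE MODEL `{project}.{dataset}.churn_model`
    OPTIONS(model_type='logistic_reg', input_label_cols=['churned']) AS
    SELECT
      days_since_last_session,
      sessions_last_28d,
      purchases_last_28d,
      churned
    FROM `{project}.{dataset}.user_activity`
    """

sql = churn_model_sql("my-project", "analytics")
```

Once trained, the model is queried with ordinary SQL as well (`SELECT * FROM ML.PREDICT(MODEL ..., ...)`), which is what makes this an unusually low-friction on-ramp for SQL-fluent analysts.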
“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production,” said Andrew Moore, vice president and general manager of Cloud AI and Industry Solutions at Google Cloud. “We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”
Things have been a bit quiet on the foldables front of late, but plenty of parties are still bullish about the form factor’s future. Ahead of today’s big I/O kickoff, Samsung (undoubtedly the most bullish of the bunch) posted a bunch of metrics this morning.
The global outlook is just as impressive. This year alone, the foldables market is expected to triple over last year — a year in which Samsung accounted for three out of every four foldable smartphones shipped worldwide.
Part of anticipating growth in the category is ensuring that the software is ready for it. Samsung has been tweaking things for a while now on its end, and at I/O in 2018, Google announced that it would be adding support for foldable screens. Recent rumors have suggested that the company is working on its own foldable Pixel, but even beyond that, it’s probably in the company’s best interest to ensure that Android plays nicely with the form factor.
“We studied how people interact with large screens,” the company said in today’s developer keynote. This includes a variety of different aspects, including where users place their hands while using the device — which can be a bit all over the place when dealing with different applications in different orientations and form factors. Essentially, you don’t want to, say, put buttons where people generally place their hands.
The list of upgrades includes the ability to resize content automatically, without overly stretching it out to fit multiple panels. All of this is no doubt going to be a learning curve as foldables end up in the hands of more users. But at very least, it signals Google’s continued view of foldables as a growing category. It’s also one of multiple updates today that involve the company working more closely with Samsung.
The two tech giants also announced a joint Wear OS/Tizen play early today.
Flutter, Google’s cross-platform UI toolkit for building mobile and desktop apps, is getting a small but important update at the company’s I/O conference today. Google also announced that Flutter now powers 200,000 apps in the Play Store alone, including popular apps from companies like WeChat, ByteDance, BMW, Grab and DiDi. Indeed, Google notes that 1 in 8 new apps in the Play Store are now Flutter apps.
The launch of Flutter 2.2 follows Google’s rollout of Flutter 2, which first added support for desktop and web apps in March, so it’s no surprise that this is a relatively minor release. In many ways, the update builds on top of the features the company introduced in version 2, adding reliability and performance improvements.
Version 2.2 makes null safety the default for new projects, for example, to add protections against null reference exceptions. As for performance, web apps can now use background caching using service workers, for example, while Android apps can use deferred components and iOS apps get support for precompiled shaders to make first runs smoother.
Google also worked on streamlining the overall process of bringing Flutter apps to desktop platforms (Windows, macOS and Linux).
But as Google notes, a lot of the work right now is happening in the ecosystem. Google itself is introducing a new payment plugin for Flutter built in partnership with the Google Pay team, and Google’s ads SDK for Flutter is getting support for adaptive banner formats. Meanwhile, Samsung is now porting Flutter to Tizen and Sony is leading an effort to bring it to embedded Linux. Adobe recently announced its XD to Flutter plugin for its design tool, and Microsoft today launched the alpha of Flutter support for Universal Windows Platform (UWP) apps on Windows 10.
At its I/O developer conference, Google today announced the first beta of the next version of its Android Studio IDE, Arctic Fox. For the most part, the idea here is to bring more of the tooling around building Android apps directly into the IDE.
While there is a lot that’s new in Arctic Fox, maybe the marquee feature of this update is the integration of Jetpack Compose, Google’s toolkit for building modern user interfaces for Android. In Android Studio, developers can now use Compose Preview to create previews of different configurations (think themes and devices) or deploy a preview directly to a device, all while the layout inspector makes it easier for developers to understand how (and why) a layout is rendered the way it is. With Live Updates enabled any change is then also directly streamed to the device.
The team also integrated the Android Accessibility Test Framework directly into Android Studio to help developers find accessibility issues like missing content descriptions or a low contrast in their designs.
Just like with some of the updates to Android itself, the team is also looking at making it easier to develop for a wider range of form factors. To build Wear OS apps, developers previously had to physically connect the watch to their development machine or go through a lot of steps to pair the watch. Now, users can simply pair a watch and phone emulator (or physical phone) with the new Wear OS Pairing feature. All this takes now is a few clicks.
Also new on the Wear OS side is a new heart rate sensor for the Wear OS Emulators in Android Studio, while the Android Automotive emulator gains the ability to replay car sensor data to help those developers with their development and testing workflow.
Android Studio users who work on a Mac will be happy to hear that Google is also launching a first preview of Android Studio for the Apple Silicon (arm64) architecture.
Google is working on a video calling booth that uses 3D imagery on a 3D display to create a lifelike image of the people on both sides. While it’s still experimental, “Project Starline” builds on years of research and acquisitions, and could be the core of a more personal-feeling video meeting in the near future.
The system was only shown via video of unsuspecting participants, who were asked to enter a room with a heavily obscured screen and camera setup. Then the screen lit up with a video feed of a loved one, but in a way none of them expected:
“I could feel her and see her, it was like this 3D experience. It was like she was here.”
“I felt like I could really touch him!”
“It really, really felt like she and I were in the same room.”
CEO Sundar Pichai explained that this “experience” was made possible with high-resolution cameras and custom depth sensors, almost certainly related to Google research projects into essentially converting videos of people and locations into interactive 3D scenes.
The cameras and sensors — probably a dozen or more hidden around the display — capture the person from multiple angles and figure out their exact shape, creating a live 3D model of them. This model and all the color and lighting information is then (after a lot of compression and processing) sent to the other person’s setup, which shows it in convincing 3D. It even tracks their heads and bodies to adjust the image to their perspective.
But 3D TVs have more or less fallen by the wayside; turns out no one wants to wear special glasses for hours at a time, and the quality on glasses-free 3D was generally pretty bad. So what’s making this special 3D image?
Pichai said “we have developed a breakthrough light field display,” probably with the help of the people and IP it scooped up from Lytro, the light field camera company that didn’t manage to get its own tech off the ground and dissolved in 2018.
Light field cameras and displays create and show 3D imagery using a variety of techniques that are very difficult to explain or show in 2D. The startup Looking Glass has made several that are extremely arresting to view in person, showing 3D models and photographic scenes that truly look like tiny holograms.
Whether Google’s approach is similar or different, the effect appears to be equally impressive, as the participants indicate. They’ve been testing this internally and are getting ready to send out units to partners in various industries (such as medicine) where the feeling of a person’s presence makes a big difference.
At this point Project Starline is still very much a prototype, and probably a ridiculously expensive one — so don’t expect to get one in your home any time soon. But it’s not wild to think that a consumer version of this light field setup may be available down the line. Google promises to share more later this year.
Google today announced it’s partnering with Shopify, giving the e-commerce platform’s more than 1.7 million merchants the ability to reach consumers through Google Search and its other services. The integration will allow merchants to sign up in just a few clicks to have their products appear across Google’s 1 billion “shopping journeys” that take place every day through Search, Maps, Images, Lens and YouTube.
The company didn’t offer extensive details about the integration when it was announced during Google’s I/O Developer event this afternoon. But the news follows a series of updates to Google Shopping resulting from Amazon’s increased investment in its own advertising business, which threatens Google’s core ads business.
Google made its pitch to online advertisers today, describing how its so-called “Shopping Graph” would now begin to pull together information from across websites, price reviews, videos and product data pulled directly from brands and retailers, to help better inform online shoppers about where to find items, how well they were received, which merchant has the best price, and more.
This Shopping Graph can span across Google’s platforms, whether someone is discovering products through Google Search or even watching videos on YouTube, among other things.
For example, when you now view screenshots of products in Google Photos, there will be a suggestion to search the photo using Google Lens, to help you find the item for sale. And Google announced earlier this year it was pilot-testing a new experience on YouTube that allows users to shop products they learn about from their favorite creators — a move to counteract the growing threats from TikTok and Facebook, and their own investments in e-commerce.
But before any of this Shopping Graph functionality can really work, Google needs consumers to find shopping for products via Google actually useful. That’s partly why Google made it free for merchants to sell their products across Google this past year — a change that Google says drove an 80% increase in merchants on Google, with the “vast majority” being small to medium-sized businesses.
That’s where the partnership with Shopify comes in, too. Though this integration doesn’t mean that every Shopify storefront will be included on Google — the merchants have to take an action to make that happen — it would be almost a no-brainer for them to leverage the new option.
Shopify isn’t playing favorites when it comes to distribution, however. It’s integrated with other large platforms, too, including Facebook and TikTok. And it’s been working with Walmart to expand the retailer’s online marketplace, as well.
Investors seemed happy with the Shopify news this afternoon. Shortly after Google’s announcement, the stock popped 3.52%.
Google is working with BMW and other automakers to develop a digital key that will let car owners lock, unlock and start a vehicle from their Android smartphone, the company announced Tuesday during its 2021 Google I/O developer event.
The digital key is one of many new features coming to Android 12, the latest version of the company’s mobile operating system. The digital car keys will become available on select Pixel and Samsung Galaxy phones later this year, according to Sameer Samat, VP of PM for Android & Google Play. The digital car key will be available in as-yet-unnamed 2022 vehicle models, including ones made by BMW, and some 2021 models.
The digital key uses so-called Ultra Wideband (UWB) technology, a form of radio transmission whose receivers can determine both the distance and the direction of a signal, sort of like a tiny radar. This lets the antenna in your phone locate and identify objects equipped with UWB transmitters. By using UWB technology, the Android user will be able to lock and unlock their vehicle without taking their phone out.
Consumers who own car models that have enabled NFC technology, or near-field communication, will be able to unlock their car by tapping their phone against the door. The phone communicates with an NFC reader in the user’s car, which is typically located within the door handle. Google said users will also be able to securely and remotely share their car key with friends and family if they need to borrow the car.
The announcement follows a similar move made by Apple last year that allowed users to add a digital car key to their iPhone or Apple Watch. That feature, which was part of iOS 14, works over NFC and first became available in the 2021 BMW 5 Series.
A growing number of automakers have developed their own apps, which can also control certain functions such as remote locking and unlocking. The big benefit, in Google’s and likely Apple’s view, is that by offering the digital car key in its mobile operating system, users don’t have to download an app.
The intent is a less clunky experience, and there’s a movement to make it even more seamless. The Car Connectivity Consortium, whose members include Apple, Google and Samsung along with automakers BMW, GM, Honda, Hyundai and Volkswagen, has spent the past several years working to standardize a digital key solution that works seamlessly across phones and vehicles.
The development of the digital car key is just part of Google’s push to ensure the smartphone is the centerpiece of consumers’ lives. And it’s a goal that can’t be achieved without including vehicles.
“When purchasing a phone these days, we’re buying not only a phone, but also an entire ecosystem of devices that are all expected to work together — such as TVs, laptops, cars and wearables like smartwatches or fitness trackers,” Google’s VP of engineering Erik Kay wrote in a blog post accompanying the announcement during the event. “In North America, the average person now has around eight connected devices, and by 2022, this is predicted to grow to 13 connected devices.”
Google said it is expanding its “fast pair” feature, which lets users pair their devices via Bluetooth with a single tap, to other products, including vehicles. To date, consumers have used “fast pair” more than 36 million times to connect their Android phones with Bluetooth accessories from Sony, Microsoft, JBL, Philips, Google and many other popular brands, according to Kay.
The feature will be rolled out to more devices in the coming months, including Beats headphones as well as cars from BMW and Ford, Sameer Samat, VP of product management for Android & Google Play, said during Google I/O.
For years, Wear OS has been, at best, something of a dark horse among Google operating systems. It’s certainly not for lack of partnership or investment, but for whatever reason, the company has never really stuck the landing with its wearable operating system.
It’s a category in which Apple has been utterly dominant for some time. Google has largely failed to chip away at that market, in spite of enlisting some of the biggest names in consumer electronics as partners. Figures from Strategy Analytics classify Wear OS among the “others” category.
Google’s strategy is, once again, the result of partnerships – or, more precisely, partnerships combined with acquisitions. At the top of the list is an “if you can’t beat ’em, join ’em” approach to Samsung’s longstanding preference for open-source Tizen. It seemed like one of the stranger plays in the category, but building out its own version of Tizen has proven a winning strategy for Samsung, which trails only Apple in the category.
We’re making the biggest update ever to @wearosbygoogle, including new capabilities for Google apps — like turn-by-turn navigation in Google Maps, or downloading songs from YouTube Music for offline listening… even if you leave your phone behind. #GoogleIO pic.twitter.com/vOnxnWl0MA
— Google (@Google) May 18, 2021
During today’s I/O keynote, the company revealed a new partnership with Samsung, “combining the best of Wear OS and Tizen.” We’re still waiting to see how that will play out, but it will be fascinating watching two big players combine forces to take on Apple. You come at the king, you best not miss, to quote a popular prestige television program. On the developer side, this seems to allude to the ability to create joint apps for both platforms, as third-party app selection has been a sticking point for both.
The other big change sheds some more light on precisely why the company was interested in Fitbit. Sure the company was a wearables leader that dominated fitness bands and eventually created its own solid smartwatches (courtesy of, among other things, its own acquisition of Pebble), but health is really the key here.
Health monitoring has become the dominant conversation around wearables in recent years, and Google’s acquisition seems to be, above all, about integrating that information. “[A] world-class health and fitness service from Fitbit is coming to the platform,” the company noted. Beyond adding Fitbit’s well-loved tracking features, the company will also be integrating Wear features into Fitbit’s hardware, working to blur the line between the two companies.
With a long-standing history of working together on the mobile side, it’s always been a bit of a surprise that Samsung hasn’t had much patience for Google’s wearables play. The hardware giant had flirted with Android Wear in the past, but for the last several years, it’s been invested in building out its own version of the open-source operating system Tizen.
Today, both companies announced a partnership featuring a “unified platform” between the two sometime competitors. The goal of the deal is to essentially create a way for devs to build apps for both Wear OS and Tizen at once. The deal makes sense from that perspective. Third-party apps have been something of a sticking point for both companies.
Even more to the point, it’s an opportunity for two smaller players in the space to join forces and take on Apple, which has been utterly dominant in the smartwatch category, more or less since the first Apple Watch arrived.
We’re combining the best of @wearosbygoogle and @SamsungMobile Tizen into a unified wearable platform. Apps will start faster, battery life will be longer and you'll have more choice than ever before, from devices to apps and watch faces. #GoogleIO pic.twitter.com/vj2aYZD81x
— Google (@Google) May 18, 2021
Wear OS has already gone through a number of cycles, including a big rebrand from Android Wear a while back, but nothing has really stuck over the years, leaving the wearable operating system as something of an also-ran. For now, at least, this is far from a full-throated embrace of Wear OS on Samsung’s part and appears to be something more akin to an “enemy of my enemy” situation. Along with developing a unified API, the companies are joining forces to pluck the best from each operating system, including longer battery life — perhaps the largest hurdle facing smartwatches at the moment.
“We know that health and wellness are at the forefront of consumers’ minds, and we’re excited to continue building the industry-leading health experience on our new unified platform with Google,” Samsung said in a blog post. “As our consumers turn to wearable technology to monitor their wellbeing, we’re meeting these needs head on. By creating world-class health technology, we hope to elevate how users approach their wellbeing, and enable them to make positive changes in their everyday lives.”
Samsung added that the next version of the Galaxy Watch will be the first to leverage this partnership, but offered little additional information on the hardware front. I’d anticipate big news on the Wear OS front in the next year. If nothing else, the company’s partnership with Google is a sign that it’s ready to go for broke with the platform.
The twelfth version of Android has been something of an odd duck. The latest version of Google’s mobile operating system was announced back in February, and a beta version is already available to developers. But up to now, we’ve known glancingly little about how the final version of Android 12 will actually look. Fittingly, the new version looks to be delivering one of the biggest design updates in recent memory.
The company unveiled Material You, a new cross-platform adaptable UI designed to give users more control over how their operating systems work. Among other things, that means that the content can be tailored to different design languages on the phones themselves. The feature will arrive first on Pixel phones this fall and will roll out to additional devices from there.
Android 12 will be the first to get the redesigned UI and widgets, along with other design elements. The company is calling it “the biggest design change to Android in years.” The news arrives as Google is announcing that there are now more than 3 billion Android devices currently in use.
Google’s Android operating system is now running on 3 billion active devices, Google announced at its (virtual) I/O developer conference today. In a briefing before today’s event, the company also noted that there were 250 million active tablets running Android last year, which is likely a larger number than some expected, but which explains Google’s increased focus on these large-screen devices at I/O this year.
Traditionally, Google shares new device stats at I/O, but since it canceled the event last year, we didn’t get an update for 2020. The most recent number Google provided was 2.5 billion active devices in May 2019. That was up from 2 billion devices in 2017, so at least for the time being, this growth rate of about 500 million new devices every two years continues to remain true.
In comparison, Apple in January announced that it has an install base of 1 billion iPhones and that there are now a total of 1.65 billion active devices in its ecosystem, up from 1.5 billion devices a year before (this last number includes all active Apple devices, though).
Google announced a series of upgrades to its Google Photos service, used by more than a billion users, at today’s Google I/O developer event, which was virtually streamed this year due to COVID. The company is rolling out Locked Folders, new types of photo “Memories” for reminiscing over past events, as well as a new feature called “Cinematic moments” that will animate a series of static photos, among other updates.
Today, Google Photos stores over 4 trillion photos and videos, but the majority of those are never viewed. To change that, Google has been developing AI-powered features to help its users reflect on meaningful moments from their lives. With Memories, launched in 2019, Google Photos is able to resurface photos and videos focused on people, activities and hobbies as well as recent highlights from the week prior.
At Google I/O, the company announced it’s adding a new type of Memory, which it’s calling “little patterns.” Using machine learning, little patterns looks for sets of three or more photos with similarities, like shape or color, which it then highlights as a pattern for you.
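Google hasn’t published how little patterns works under the hood, but the idea of grouping photos by visual similarity can be sketched in a few lines. The sketch below is a simplified stand-in: it assumes each photo has already been reduced to a small feature vector (here, a toy color histogram) and greedily groups photos whose vectors are nearly parallel by cosine similarity. The photo names, threshold, and feature representation are all illustrative assumptions, not Google’s implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def find_patterns(photos, threshold=0.95, min_size=3):
    """Greedily group photos whose feature vectors are mutually similar.

    `photos` maps a photo id to a feature vector (e.g. a color histogram).
    Only groups of at least `min_size` photos count as a "pattern",
    mirroring the three-photo minimum the article describes.
    """
    ids = list(photos)
    grouped, patterns = set(), []
    for pid in ids:
        if pid in grouped:
            continue
        group = [pid] + [
            q for q in ids
            if q != pid and q not in grouped
            and cosine(photos[pid], photos[q]) >= threshold
        ]
        if len(group) >= min_size:
            patterns.append(group)
            grouped.update(group)
    return patterns

# Toy "color histograms": three orange-dominated photos and one blue one.
photos = {
    "backpack_paris": [0.80, 0.10, 0.10],
    "backpack_tokyo": [0.75, 0.15, 0.10],
    "backpack_lima":  [0.82, 0.08, 0.10],
    "ocean":          [0.05, 0.15, 0.80],
}
print(find_patterns(photos))  # → [['backpack_paris', 'backpack_tokyo', 'backpack_lima']]
```

A production system would use learned image embeddings rather than raw color histograms, but the grouping step is conceptually the same.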
Image Credits: Google
For example, when one of Google’s engineers traveled the world with their favorite orange backpack, Google Photos was able to identify a pattern where that backpack was featured in photos from around the globe. But patterns may also be simple family photos that are often snapped in the same room with an identifiable piece of furniture, like the living room couch. On their own, these photos may not seem like much, but when they’re combined over time, they can produce some interesting compilations.
Google will also be adding Best of Month Memories and Trip highlights to your photo grid, which you’ll now be able to remove or rename, as well as Memories featuring events you celebrate, like birthdays or holidays. These events will be identified based on a combination of factors, Google says. This includes identifying objects in the photos — like a birthday cake or a Hanukkah menorah, for example — as well as matching up the date of the photo with known holidays.
Image Credits: Google
Best of Month and Trip highlight Memories will start to roll out today and will be found in the photo grid itself. Later this year, you’ll begin to see Memories related to the events and moments you celebrate.
Image Credits: Google
Another forthcoming addition is Cinematic Moments, which is somewhat reminiscent of the “deep nostalgia” technology from MyHeritage that went viral earlier this year, as users animated the photos of long-past loved ones. Except in Google’s case, it’s not taking an old photo and bringing it to life, it’s stitching together a series of photos to create a sense of action and movement.
Google explains that, often, people will take multiple photos of the same moment in order to get one “good” image they can share. This is especially true when trying to capture something in motion — like a small child or a pet who can’t sit still.
Image Credits: Google
These new Cinematic moments build on the Cinematic photos feature Google launched in December 2020, which uses machine learning to create vivid, 3D versions of your photos. Using computational photography and neural networks to stitch together a series of near-identical photos, Google Photos will be able to create vivid, moving images by filling in the gaps in between your photos to create new frames. This feature doesn’t have a launch date at this time.
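Google says Cinematic moments uses neural networks to synthesize the frames between near-identical shots. As a crude, hedged illustration of the underlying idea, the sketch below generates in-between frames by linear blending; a real system estimates motion and fills occlusions rather than crossfading, so treat the function and its parameters as illustrative only.

```python
def interpolate_frames(frame_a, frame_b, steps):
    """Generate `steps` intermediate frames between two images by linear
    blending. A crude stand-in for the learned frame synthesis Google
    describes; real systems estimate motion rather than crossfading.
    Frames are flat lists of pixel intensities of equal length.
    """
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # blend weight, 0 < t < 1
        frames.append([
            (1 - t) * a + t * b
            for a, b in zip(frame_a, frame_b)
        ])
    return frames

# Two near-identical 2x2 "photos"; synthesize one in-between frame.
shot1 = [10, 20, 30, 40]
shot2 = [20, 30, 40, 50]
print(interpolate_frames(shot1, shot2, 1))  # → [[15.0, 25.0, 35.0, 45.0]]
```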
Of course, not all past moments are worthy of revisiting, for a variety of reasons. While Google already offered tools to hide certain photos and time periods from your Memories, it’s continuing to add new controls and, later this summer, will make it easier to access its existing toolset. One key area of focus has been working with the transgender community, who have said that revisiting their old photos can be painful.
Soon, users will also be able to remove a single photo from a Memory, remove their Best of Month Memories, and rename and remove Memories based on the events they celebrate, too.
Image Credits: Google
Another useful addition to Google Photos is the new Locked Folder, which is simply a passcode-protected space for private photos. Many users automatically sync their phone’s photos to Google’s cloud, but then want to pull up photos to show to others through the app on their phone or even their connected TV. That can be difficult if their galleries are filled with private photos, of course.
Image Credits: Google
This particular feature will launch first on Pixel devices, where users will have the option to save photos and videos directly from their camera to the Locked folder. Other Android devices will get the update later in the year.
As far as AI systems have come in their ability to recognize what you’re saying and respond, they’re still very easily confused unless you speak carefully and literally. Google has been working on a new language model called LaMDA that’s much better at following conversations in a natural way, rather than as a series of badly formed search queries.
LaMDA is meant to be able to converse normally about just about anything without any kind of prior training. This was demonstrated in a pair of rather bizarre conversations with an AI first pretending to be Pluto and then a paper airplane.
While the utility of having a machine learning model that can pretend to be a planet (or dwarf planet, a term it clearly resents) is somewhat limited, the point of the demonstration was to show that LaMDA could carry on a conversation naturally even on this random topic, speaking in the first person as the subject itself.
The advance here is basically preventing the AI system from being led off track and losing the thread when attempting to respond to a series of loosely associated questions.
Normal conversations between humans jump between topics and call back to earlier ideas constantly, a practice that confuses language models to no end. But LaMDA can at least hold its own and not crash out with a “Sorry, I don’t understand” or a non-sequitur answer.
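One simple reason history matters: a model can only call back to earlier ideas if those turns are still in its context. The toy function below, an illustration rather than anything resembling LaMDA’s actual architecture, assembles a prompt from the running dialogue and drops the oldest turns when a (hypothetical) context budget is exceeded.

```python
def build_prompt(history, user_turn, max_chars=2000):
    """Assemble a model prompt that carries the full running dialogue, so
    callbacks to earlier turns stay in context. A toy illustration of why
    conversational models need history; not LaMDA's actual architecture.
    """
    history = history + [("User", user_turn)]
    lines = [f"{speaker}: {text}" for speaker, text in history]
    # Drop the oldest turns if the context budget is exceeded.
    while sum(len(l) + 1 for l in lines) > max_chars and len(lines) > 1:
        lines.pop(0)
    return "\n".join(lines) + "\nModel:"

history = [
    ("User", "Pretend you are Pluto."),
    ("Model", "Hello! I'm the dwarf planet at the edge of the solar system."),
]
prompt = build_prompt(history, "What did visitors say about you?")
print(prompt)
```

Without the first turn in the prompt, a follow-up like “what did visitors say about you?” has no antecedent, which is exactly the “losing the thread” failure the article describes.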
While most people are unlikely to want to have a full, natural conversation with their phones, there are plenty of situations where this sort of thing makes perfect sense. Groups like kids and older folks who don’t know or don’t care about the formalized language we use to speak to AI assistants will be able to interact more naturally with technology, for instance. And identity will be important if this sort of conversational intelligence is built into a car or appliance. No one wants to ask “Google” how much milk is left in the fridge, but they might ask “Whirly” or “Fridgadore,” the refrigerator speaking for itself.
Even CEO Sundar Pichai seemed unsure as to what exactly this new conversational AI would be used for, and emphasized that it’s still a work in development. But you can probably expect Google’s AIs to be a little more natural in their interactions going forward. And you can finally have that long, philosophical conversation with a random item you’ve always wanted.
Google announced a new feature for its Chrome browser today that alerts you when one of your passwords has been compromised and then helps you automatically change your password with the help of… wait for it… Google’s Duplex technology.
This new feature will start to roll out slowly to Chrome users on Android in the U.S. soon (with other countries following later), assuming they use Chrome’s password-syncing feature.
It’s worth noting that this won’t work for every site just yet. As a Google spokesperson told us, “the feature will initially work on a small number of apps and websites, including Twitter, but will expand to additional sites in the future.”
Now you may remember Duplex as the somewhat controversial service that can call businesses for you to make hairdresser appointments or check opening times. Google introduced Duplex at its 2018 I/O developer conference and launched it to a wider audience in 2019. Since then, the team has chipped away at bringing Duplex to more tasks and brought it to the web, too. Now it’s coming to Chrome to change your compromised passwords for you.
“Powered by Duplex on the Web, Assistant takes over the tedious parts of web browsing: scrolling, clicking and filling forms, and allows you to focus on what’s important to you. And now we’re expanding these capabilities even further by letting you quickly create a strong password for certain sites and apps when Chrome determines your credentials have been leaked online,” Patrick Nepper, senior product manager for Chrome, explains in today’s announcement.
In practice, once Chrome detects a compromised password, all you have to do is tap the “change password” button and Duplex will walk through the process of changing your password for you. Google says this won’t work for every site just yet, but “even if a site isn’t supported yet, Chrome’s password manager can always help you create strong and unique passwords for your various accounts.”
It’ll be interesting to see how well this works in the real world. Every site manages passwords a little bit differently, so it would be hard to write a set of basic rules that the browser could use to go through this process. And that’s likely why Google is using Duplex here. Since every site is a little bit different, it takes a system that can understand a bit more about the context of a password change page to successfully navigate it.
In addition to adding this feature, Google is also updating its password manager with a new tool for important passwords from third-party password managers, deeper integration between Chrome and Android and automatic password alerts when a password is compromised in a breach.
At its I/O developer conference, Google today announced the next generation of its custom Tensor Processing Units (TPU) AI chips. This is the fourth generation of these chips, which Google says are twice as fast as the last version. As Google CEO Sundar Pichai noted, these chips are then combined in pods with 4,096 v4 TPUs. A single pod delivers over one exaflop of computing power.
Google, of course, uses the custom chips to power many of its own machine learning services, but it will also make this latest generation available to developers as part of its Google Cloud platform.
“This is the fastest system we’ve ever deployed at Google and a historic milestone for us,” Pichai said. “Previously to get an exaflop you needed to build a custom supercomputer, but we already have many of these deployed today and will soon have dozens of TPUv4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. And our TPUv4 pods will be available to our cloud customers later this year.”
The TPUs were among Google’s first custom chips. While others, including Microsoft, decided to go with more flexible FPGAs for its machine learning services, Google made an early bet on these custom chips. They take a bit longer to develop — and quickly become outdated as technologies change — but can deliver significantly better performance.
After skipping a year, Google is holding a keynote for its developer conference Google I/O. While it’s going to be an all-virtual event, there should be plenty of announcements, new products and new features for Google’s ecosystem.
The conference starts at 10 AM Pacific Time (1 PM on the East Cost, 6 PM in London, 7 PM in Paris) and you can watch the live stream right here on this page.
Rumor has it that Google should give us a comprehensive preview of Android 12, the next major release of Google’s operating system. There could also be some news when it comes to Google Assistant, Home/Nest devices, Wear OS and more.
While Apple, Microsoft and the like were scrambling to bring their respective developer conferences online, Google made the executive design to just scrap I/O outright last year. It was a bit of an odd one, but the show went on through news-related blog posts.
While we’re going to have to wait another year to darken the doors of Mountain View’s Shoreline Amphitheater, the company has opted to go virtual for the 2021 version of the show. Understandably so. Google apparently has a lot up its sleeves this time.
Last month, Alphabet CEO Sundar Pichai teased some big news on the tech giant’s investor call, noting, “Our product releases are returning to a regular cadence. Particularly excited that our developer event — Google I/O — is back this year, all virtual, and free for everyone on May 18th-20th. We’ll have significant product updates and announcements, and I invite you all to tune in.”
From the sound of it, next week’s event will find Google returning to form following what was a rough year for just about everyone. So, what can we expect from the developer-focused event?
Forest Row, East Sussex, UK – July 30th 2013: Android figure shot in home studio on white. Image Credits: juniorbeep / Getty Images
Android 12 is the biggie, of course. From a software development standpoint, it’s a lynchpin to Google’s ecosystem, and for good reason has pretty much always taken centerstage at the event.
The developer version of Google’s mobile operating system has been kicking for a while now, but it has offered surprisingly little insight into what features might be coming. That’s either because it’s going to be a relatively minor upgrade as far as these things go or because the company it choosing to leave something to the imagination ahead of an official unveiling.
What we do know so far is that the operating system is getting a design upgrade. Beyond that, however, there are still a lot of question marks.
Google Assistant is likely to get some serious stage time, as well, coupled with some updates to the company’s ever-growing Home/Nest offerings. Whether that will mean, say, new smart displays on Nest speakers is uncertain. Keep in mind, hardware is anything but a given. The big Pixel event, after all, generally comes in the fall. That said, June is an ideal mid-marker during the year to refresh some other lines.
Image Credits: Google
The likeliest candidate for new hardware (if there is any) is a new version of the company’s fully wireless earbuds — which the company has accidentally leaked out once or twice. The Pixel Buds A are said to sport faster pairing, and if their name is any indication, will be a budget entry.
Speaking of which… earlier this year, Google made the rather unorthodox announcement confirming that the Pixel 5a 5G is on the way. Denying rumors that have been swirling around the Pixel line generally, the company told TechCrunch in a statement, “Pixel 5a 5G is not cancelled. It will be available later this year in the U.S. and Japan and announced in line with when last year’s a-series phone was introduced.” Given that the 4a arrived in August, we could well be jumping the gun here. Taken as a broader summer time frame, however, it’s not entirely out of the realm of possibility here.
NEW YORK, NY – SEPTEMBER 13: Michael Kors and Google Celebrate new MICHAEL KORS ACCESS Smartwatches at ArtBeam on September 13, 2017 in New York City. (Photo by Dimitrios Kambouris/Getty Images for Michael Kors)
Wear OS has felt like an also-ran basically for forever. Rebrands, revamps and endless hardware partners have done little to change that fact. But keep in mind, this is going to be Google’s first major event since closing the Fitbit acquisition, so it seems like a no-brainer that the company’s going to want to come on strong with its wearable/fitness play. And hey, just this week, rumor broke that Samsung might be embracing the operating system after years of customizing Tizen.
Things kick off Tuesday morning May 18 at 10 a.m. PT, 1 p.m. ET with a big keynote.