Satellite connectivity company Swarm has come out with a new product that will give anyone the ability to create a messaging or Internet of Things (IoT) device, whether that be a hiker looking to stay connected off-the-grid or a hobbyist wanting to track the weather.
The Swarm Evaluation Kit is an all-in-one product that includes a Swarm Tile, the company’s flagship modem device, a VHF antenna, a small solar panel, a tripod, a Feather S2 development board and an OLED from Adafruit. The entire kit comes in at less than six pounds and costs $499. The package may sound intimidatingly technical, but Swarm CEO Sara Spangelo explained to TechCrunch that it was designed to be user-friendly, from the most novice consumer all the way through to more advanced users.
It “was super intentional to call it an Evaluation kit because it’s not a finished product,” Spangelo explained. “It serves two different kinds of groups. The first group is people that want to be able to do messaging anywhere that they are on the planet for a really low cost […] The second group of people will be the tinkerers and the hobbyists and educational folks.”
[Image: Swarm CEO and co-founder Sara Spangelo. Image Credits: Swarm]
This is the second consumer product Swarm has on offer, after its flagship Swarm Tile went commercially live earlier this year. The Tile is a key component of the company’s ecosystem, which comprises a few parts: the Tile itself, a kind of modem that can be embedded in other devices and is what the customer interfaces with; the satellite network; and a ground station network, through which the company downlinks data. The Tile is designed for maximum compatibility, so Swarm serves customers across sectors including shipping, logistics and agriculture.
“One of the cool things about Swarm is that we’re infrastructure,” she said. “We’re like cellphone towers, so anyone can use us across any vertical.” Some of the use cases she highlighted included customers using Tile in soil moisture sensors, or in asset tracking in the trucking industry.
A major part of Swarm’s business model is its low cost, with a Swarm Tile costing $119 and the connectivity service available for only $5 per month per connected device. Spangelo credits not only the engineering innovations in the tiny devices and satellites, but the gains in launch economics, especially for small satellite developers like Swarm. The company also sells direct, which further reduces overhead.
Swarm was founded by Spangelo, a pilot and aerospace engineering PhD who spent time at NASA’s Jet Propulsion Lab and at Google on its drone delivery project, Wing. She told TechCrunch that Swarm started as a hobby project between her and co-founder Ben Longmier, who had previously founded a company called Aether Industries that made high-altitude balloon platforms.
“Then [we] realized that we could do communications at speeds that were similar to what the legacy players are doing today,” Spangelo said. “There was a lot of buzz around connectivity,” she added, noting that initiatives like Project Loon were garnering a lot of funding. But instead of trying to match the size and scale of some of these multi-year projects, they decided to go small.
In the four and a half years since the company’s founding, Swarm has launched a network of 120 sandwich-sized satellites into low Earth orbit and grown its workforce to 32 people. They’ve also been busy onboarding customers that use the Tile. One hope is that the Kit will be an additional way to draw customers to Swarm’s service.
Spangelo said the kit is for “everybody in between, that likes to just play with things. And it’s not just playing – the playing leads to innovations and ideas, and then it gets deployed out into the world.”
As phones and other consumer devices have gained feature after feature, they have also declined in how easily they can be repaired, with Apple at the head of this ignoble pack. The FTC has taken note, admitting that the agency has been lax on this front but that going forward it will prioritize what could be illegal restrictions by companies as to how consumers can repair, repurpose, and reuse their own property.
Devices are often built today with no concessions made towards easy repair or refurbishment, or even once routine upgrades like adding RAM or swapping out an ailing battery. While companies like Apple do often support hardware for a long time in some respects, the trade-off seems to be that if you crack your screen, the maker is your only real option to fix it.
That’s a problem for many reasons, as right-to-repair activist and iFixit founder Kyle Wiens has argued indefatigably for years (the company posted proudly about the statement on its blog). The FTC sought comment on this topic back in 2019, issued a report on the state of things a few months ago, and now (perhaps emboldened by new Chair Lina Khan’s green light on all things feared by big tech companies) has issued a policy statement.
The gist of the unanimously approved statement is that the agency found deliberately restricting repairs may have serious repercussions, especially for people who don’t have the cash to pay the Apple tax for what ought to be (and once was) a simple repair.
The Commission’s report on repair restrictions explores and discusses a number of these issues and describes the hardships repair restrictions create for families and businesses. The Commission is concerned that this burden is borne more heavily by underserved communities, including communities of color and lower-income Americans. The pandemic exacerbated these effects as consumers relied more heavily on technology than ever before.
While unlawful repair restrictions have generally not been an enforcement priority for the Commission for a number of years, the Commission has determined that it will devote more enforcement resources to combat these practices. Accordingly, the Commission will now prioritize investigations into unlawful repair restrictions under relevant statutes…
The statement then makes four basic points. First, it reiterates the need for consumers and other public organizations to report and characterize what they perceive as unfair or problematic repair restrictions. The FTC doesn’t go out and spontaneously investigate companies; it generally needs a complaint to set the wheels in motion, such as people alleging that Facebook is misusing their data.
Second is a surprising antitrust tie-in, where the FTC says it will examine such restrictions to answer whether monopolistic practices like tying and exclusionary design are in play. This could be something like refusing to allow upgrades, then charging an order of magnitude more than market price for something like a few extra gigs of storage or RAM, or designing products in a way that moots competition. Or perhaps arbitrary warranty voiding for doing things like removing screws or taking the device to a third party for repairs. (Of course, these would depend on establishing monopoly status or market power for the company, something the FTC has had trouble doing.)
More in line with the FTC’s usual commercial regulations, it will assess whether the restrictions are “unfair acts or practices,” a much broader and easier-to-meet standard. You don’t need a monopoly for claims of an “open standard” to be misleading, or for a hidden setting that slows third-party apps or peripherals to count, for instance.
And lastly the agency mentions that it will be working with states in its push to establish new regulations and laws. This is perhaps a reference to the pioneering “right to repair” bills like the one passed by Massachusetts last year. Successes and failures along those lines will be taken into account and the feds and state policymakers will be comparing notes.
This isn’t the first movement in this direction by a long shot, but it is one of the plainest. Tech companies have seen the writing on the wall, and done things like expand independent repair programs — but it’s arguable that these actions were taken in anticipation of the FTC’s expected shift toward establishing hard lines on the topic.
The FTC isn’t showing its full hand here, but it’s certainly hinting that it’s ready to play if the companies involved want to push their luck. We’ll probably know more soon once it starts ingesting consumer complaints and builds a picture of the repair landscape.
Regularly testing waterways and reservoirs is a never-ending responsibility for utility companies and municipal safety authorities, and generally — as you might expect — involves either a boat or at least a pair of waders. Nixie does the job with a drone instead, making the process faster, cheaper, and a lot less wet.
The most common methods of testing water quality haven’t changed in a long time, partly because they’re effective and straightforward, and partly because really, what else are you going to do? No software or web platform out there is going to reach into the middle of the river and pull out a liter of water.
But with the advent of drones powerful and reliable enough to deploy in professional and industrial circumstances, the situation has changed. Nixie is a solution by the drone specialists at Reign Maker, involving either a custom-built sample collection arm or an in-situ sensor arm.
The sample collector is basically a long vertical arm with a locking cage for a sample container. You put an empty container in, fly the drone out to the location, then submerge the arm. When the drone flies back, the filled container can be taken out while it hovers and a fresh one put in its place for the next spot. (The swap can be done safely in winds up to 18 MPH, and sampling works in currents up to 5 knots, the company said.)
This allows for quick sampling at multiple locations — the drone’s battery will last about 20 minutes, enough for two to four samples depending on the weather and distance. Swap the battery out and drive to the next location and do it all again.
For comparison, Reign Maker pointed to New York’s water authority, which collects 30 samples per day from boats and other methods, at an approximate cost (including labor, boat fuel, etc) of $100 per sample. Workers using Nixie were able to collect an average of 120 samples per day, for around $10 each. Sure, New York is probably among the higher cost locales for this (like everything else) but the deltas are pretty huge. (The dipper attachment itself costs $850, but doesn’t come with a drone.)
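As a quick sanity check on those figures, here’s the back-of-envelope math using only the numbers cited above:

```python
# Back-of-envelope comparison using the figures cited above for
# New York's water authority (boats) versus the Nixie drone rig.
boat_samples_per_day, boat_cost_per_sample = 30, 100     # dollars per sample
drone_samples_per_day, drone_cost_per_sample = 120, 10   # dollars per sample

boat_daily_cost = boat_samples_per_day * boat_cost_per_sample     # $3,000/day
drone_daily_cost = drone_samples_per_day * drone_cost_per_sample  # $1,200/day

throughput_gain = drone_samples_per_day / boat_samples_per_day    # 4x the samples
cost_drop = boat_cost_per_sample / drone_cost_per_sample          # 10x cheaper each
print(throughput_gain, cost_drop)  # 4.0 10.0
```

So even at quadruple the throughput, the drone workflow’s daily spend comes out to less than half the boat crew’s.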
It should be mentioned that the drone is not operating autonomously; it has a pilot who will be flying with line of sight (which simplifies regulations and requirements). But even so, that means a team of two, with a handful of spare batteries, can cover the same space that would normally take a boat crew and more than a little fuel. Currently the system works with the M600 and M300 RTK drones from DJI.
The drone method has the added benefits of having precise GPS locations for each sample and of not disturbing the water when it dips in. No matter how carefully you step or pilot a boat, you’re going to be pushing the water all over the place, potentially affecting the contents of the sample, but that’s not the case if you’re hovering overhead.
In development is a smarter version of the sampler with a set of sensors that can do on-site testing for the most common factors: temperature, pH, harmful organisms and various chemicals. Skipping the step of bringing the water back to a lab streamlines the process immensely, as you might expect.
Right now Reign Maker is working with New York’s Department of Environmental Protection and in talks with other agencies. While the system would take some initial investment, training, and getting used to, it’s probably hard not to be tempted by the possibility of faster and cheaper testing.
Ultimately the company hopes to offer (in keeping with the zeitgeist) a more traditional SaaS offering involving water quality maps updating in real time with new testing. That too is still in the drawing-board phase, but once a few customers sign up it starts looking a lot more attractive.
Playdate, app and game designer Panic’s first shot at hardware, finally has a firm price and ship date, as well as a bunch of surprise features cooked up since its announcement in 2019. The tiny handheld gaming console will cost $179, ship next month, and come with a 24-game “season” doled out over 12 weeks. But now it also has a cute speaker dock and low-code game creation platform.
We first heard about Playdate more than two years ago, were charmed by its clean look, funky crank control, and black and white display, and have been waiting for news ever since. Panic’s impeccable design credentials combined with Teenage Engineering’s creative hardware chops? It’s bound to be a joy to use, but there wasn’t much more than that to go on.
Now the company has revealed all the important details we were hoping for, and many more to boot.
Originally we were expecting 12 games to be delivered over 12 weeks, but in the intervening period it seems they’ve collected more titles than planned, and that initial “season” of games has expanded to 24. No one knows exactly what to expect from these games except that they’re exclusive to the Playdate and many use the crank mechanic in what appear to be fun and interesting ways: turning a turntable, opening a little door, doing tricks as a surfer, and so on.
The team hasn’t decided how future games will be distributed, though they seem to have some ideas. Another season? One-off releases? Certainly a new game from one-man indie hit parade Lucas Pope would sell like hotcakes.
But the debut of a new lo-fi game development platform called Pulp suggests a future where self-publishing may also be an option. This lovely little web-based tool lets anyone put together a game using presets for things like controls and actions, and may prove to be a sort of tiny Twine in time.
A dock accessory was announced as well, something to keep your Playdate front and center on your desk. The speaker-equipped dock, also a lemony yellow, acts as a magnetic charging cradle for the console, activating a sort of stationary mode with a clock and music player (Poolsuite.fm, apparently, with original relaxing tunes). It even has two holes in which to put your pens (and Panic made a special yellow pen just for the purpose as well).
The $179 price may cause some to balk — after all, it’s considerably more than a Nintendo 3DS, and with the dock it probably approaches the price of a Switch. But this isn’t meant to compete with mainstream gaming; instead, it’s a sort of anti-establishment system that embraces weirdness and provides something both unfamiliar and undeniably fun.
The team says that there will be a week’s warning before orders can be placed, and that they don’t plan to shut orders down if inventory runs out, but simply allow people to preorder and cancel at will until they receive their unit. We hope to get one ourselves to test and review, but since part of the charm of the whole thing is the timed release and social aspect of discovery and sharing, it’s more than likely we’ll be experiencing it along with everyone else.
Today’s WWDC keynote from Apple covered a huge range of updates. From a new macOS to a refreshed watchOS to a new iOS, better privacy controls, FaceTime updates, and even iCloud+, there was something for everyone in the laundry list of new code.
Apple’s keynote was essentially what happens when the big tech companies get huge; they have so many projects that they can’t just detail a few items. They have to run down their entire parade of platforms, dropping packets of news concerning each.
But despite the obvious indication that Apple has been hard at work on the critical software side of its business, especially its services side (more here), Wall Street gave a firm, emphatic shrug.
This is standard but always slightly confusing.
Investors care about future cash flows, at least in theory. Those future cash flows come from anticipated revenues, which are born from product updates, driving growth in sales of services, software, and hardware. Which, apart from the hardware portion of the equation, is precisely what Apple detailed today.
And lo, Wall Street looked upon the drivers of its future earnings estimates, and did sayeth “lol, who really cares.”
Shares of Apple were down a fraction for most of the day, picking up as time passed not thanks to the company’s news dump, but because the Nasdaq largely rose as trading raced to a close.
Here’s the Apple chart, via YCharts:
And here’s the Nasdaq:
Presuming that you are not a ChartMaster, those might not mean much to you. Don’t worry. The charts say very little all-around, so you are missing little. Apple was down a bit, and the Nasdaq up a bit. Then the Nasdaq went up more, and Apple’s stock generally followed. Which is good, to be clear, but somewhat immaterial.
So after yet another major Apple event that will help determine the health and popularity of every Apple platform — key drivers of lucrative hardware sales! — the markets are betting that all their prior work estimating the True and Correct value of Apple was dead-on and that there is no need for any sort of up-or-down change.
That, or Apple is so big now that investors are simply betting it will grow in keeping with GDP. Which would be a funny diss. Regardless, more from the Apple event here in case you are behind.
Apple is rolling out some updates to iCloud under the name iCloud+. The company is announcing those features at its developer conference. Existing paid iCloud users are going to get those iCloud+ features for the same monthly subscription price.
In Safari, Apple is going to launch a new privacy feature called Private Relay. It sounds a bit like the DNS feature Apple has been developing with Cloudflare, originally named Oblivious DNS-over-HTTPS; Private Relay may be a friendlier name for something conceptually simple — DNS-over-HTTPS combined with proxy servers.
When Private Relay is turned on, nobody can track your browsing history — not your internet service provider, nor anyone sitting between your device and the server you’re requesting information from. We’ll have to wait a bit to learn exactly how it works.
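Apple hasn’t explained the internals yet, so treat this as a generic illustration of the two-hop relay idea rather than Apple’s design: the first relay sees your address but an encrypted destination, while the second relay decrypts the destination but only ever sees the first relay as the source. A toy sketch (the addresses are made up, and string reversal stands in for real encryption):

```python
# Toy illustration of a two-hop relay: no single party sees both
# the client's address and the destination.

def first_hop(client_ip, encrypted_dest):
    # Learns who is connecting, but not where to.
    seen = {"source": client_ip, "destination": "<encrypted>"}
    return seen, ("relay1", encrypted_dest)

def second_hop(forwarded):
    relay_addr, encrypted_dest = forwarded
    dest = encrypted_dest[::-1]  # stand-in for decryption
    # Learns where the request goes, but not who sent it.
    return {"source": relay_addr, "destination": dest}

hop1_view, forwarded = first_hop("203.0.113.7", "moc.elpmaxe")
hop2_view = second_hop(forwarded)
print(hop1_view)  # {'source': '203.0.113.7', 'destination': '<encrypted>'}
print(hop2_view)  # {'source': 'relay1', 'destination': 'example.com'}
```

The privacy property falls out of the split: to reconstruct your browsing history, both hops would have to collude.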
The second iCloud+ feature is ‘Hide my email’. It lets you generate random email addresses when you sign up to a newsletter or when you create an account on a website. If you’ve used ‘Sign in with Apple’, you know that Apple offers you the option to use fake iCloud email addresses. This works similarly, but for any app.
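Apple hasn’t published how it mints these addresses, but the general mechanism is easy to sketch: generate a random local part at a domain the service controls, and keep the mapping to your real inbox server-side. A minimal, hypothetical version (the `icloud.example` domain is made up):

```python
import secrets
import string

# Hypothetical alias generator, not Apple's actual scheme: a random
# local part at a service-controlled domain. The service would store
# {alias: real_address} and forward mail, so the site you sign up on
# never learns your real address.
def make_alias(domain="icloud.example"):
    alphabet = string.ascii_lowercase + string.digits
    local = "".join(secrets.choice(alphabet) for _ in range(12))
    return f"{local}@{domain}"

alias = make_alias()
print(alias)  # e.g. 'k3v9qz7w1m4p@icloud.example'
```

Using `secrets` rather than `random` matters here: aliases should be unguessable so strangers can’t enumerate them.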
Finally, Apple is overhauling HomeKit Secure Video. With the name iCloud+, Apple is separating free iCloud users from paid ones. Basically, you used to pay for more storage; now you pay for more storage and more features. Subscriptions start at $0.99 per month for 50GB (plus the iCloud+ features).
More generally, Apple is adding two much-needed features to iCloud accounts. First, you can now add a friend for account recovery; if you get locked out, you can regain access to your data through that friend. That doesn’t mean your friend can read your iCloud data — it’s just a way to recover your account.
The last much-needed update is a legacy feature. You’ll soon be able to add one or several legacy contacts so that your data can be passed along when you pass away. It’s a welcome change, as many photo libraries become inaccessible when their owner dies.
During the virtual keynote of WWDC, Apple shared the first details about iOS 15, the next major version of iOS that is going to be released later this year. There are four pillars with this year’s release: staying connected, focusing without distraction, using intelligence and exploring the world.
“For many of us, our iPhones have become indispensable,” SVP of Software Engineering Craig Federighi said. “Our new release is iOS 15. It’s packed with features that make the iOS experience adapt to and complement the way you use iPhone, whether it’s staying connected with those who matter to you most, finding the space to focus without distraction, using intelligence to discover the information you need or exploring the world around you.”
Apple is adding spatial audio to FaceTime: voices are now spread out depending on where your friends appear on screen. For instance, if someone appears on the left, it’ll sound like they’re on the left in your ears. In other FaceTime news, iOS now detects background noise and tries to suppress it so that you can hear your friends and family members more easily. That’s an optional feature, so you can disable it if, say, you’re streaming a concert during a FaceTime call.
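Apple’s renderer presumably does full 3D spatialization, but the core idea — mapping a tile’s on-screen position to channel gains — can be sketched with a classic constant-power pan law. This is an illustration of the technique, not Apple’s code:

```python
import math

# Constant-power panning: map an on-screen x position in [0, 1]
# (0 = far left, 1 = far right) to left/right channel gains whose
# squared sum stays constant, so perceived loudness doesn't dip
# as a voice moves across the screen.
def pan_gains(x):
    angle = x * math.pi / 2  # 0 = hard left, pi/2 = hard right
    return math.cos(angle), math.sin(angle)

left, right = pan_gains(0.0)  # speaker tile at far left
# left = 1.0, right = 0.0: the voice comes entirely from the left ear
left, right = pan_gains(0.5)  # centered tile: equal gains, ~0.707 each
```

The cos/sin pair is the standard choice because cos²+sin² = 1 at every position, which is exactly the constant-power property.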
Another FaceTime feature is “Portrait mode,” meaning FaceTime can automatically blur the background, as in “Portrait mode” photos. If you want to use FaceTime for work conferences, you can now generate a FaceTime link and add it to a calendar invite. FaceTime will also work in a web browser, which means people without an Apple device can join a FaceTime call. All of these features make FaceTime more competitive with other video call services, such as Zoom and Google Meet.
FaceTime is a big focus as Apple is also introducing SharePlay. With this feature, you can listen together to a music album. Press play in Apple Music and the music will start for everyone on the call. The queue is shared with everyone else, which means anyone can add songs, skip to the next track, etc.
SharePlay also lets you watch movies and TV shows together. Someone on the call starts a video and playback begins in sync on everyone else’s phone or tablet. It is also compatible with AirPlay, picture-in-picture and everything you’d expect from videos on iOS.
This isn’t just compatible with videos in the Apple TV app. Apple said there will be an API to make videos compatible with SharePlay; initial partners include Disney+, Hulu, HBO Max, Twitch, TikTok and more.
Now let’s switch to Messages. The app is getting better integration with other Apple apps like News, Photos and Music. Items shared via Messages show up in those apps. In other words, Messages (and iMessage) is acting as the social layer on top of Apple’s apps.
Apple is going to use on-device intelligence to create summaries of your notifications. Instead of being sorted by app and date, notifications are sorted by priority — those from friends, for instance, will be closer to the top.
When you silence notifications, your iMessage contacts will see that you have activated “Do not disturb”. It works a bit like “Do not disturb” in Slack. But there are new settings. Apple calls this Focus mode. You can choose apps and people you want notifications from and change your focus depending on what you’re doing.
For instance, if you’re at work, you can silence personal apps and personal calls and messages. If it’s the weekend, you can silence your work emails. Your settings sync across your iCloud account if you have multiple Apple devices. And it’ll even affect your home screen by showing and hiding apps and widgets.
Apple is going to scan your photos for text. Called Live Text, the feature lets you highlight, copy and paste text in photos — potentially a nice accessibility feature as well. iOS will also surface that information in Spotlight, so you can search for text that appears in your photos. All of this is processed on-device.
With iOS 15, Memories are getting an upgrade. “These new memories are built on the fly. They are interactive and alive,” said Chelsea Burnette, senior manager of Photos Engineering. Memories are the interactive movies you can watch in the Photos app. Now, you can tap with your finger to pause the movie; music keeps playing in the background, and your photo montage resumes when you lift your finger.
You can now search for a specific song to pair with a memory. It’s going to be interesting to see in detail what’s new for the Photos app.
After a recap of all the features of Apple Wallet, the company announced that you’ll be able to scan your ID and store it in Wallet. It’ll be available in participating states only, so it’s going to be a slow rollout. When a government service wants some info from your ID, you can choose what data to share with that service directly on your iPhone.
The Weather app, meanwhile, has been updated with many of the features of Dark Sky, the popular weather app Apple acquired. Expect a new design and more data.
As for Apple Maps, the new mapping data has rolled out in several countries, and Apple is still rolling it out in Europe. Apple has added a ton of new detail to some areas, such as San Francisco: you can see bus and taxi lanes, crosswalks, bike lanes and so on, and on highways, complex interchanges render in 3D. All of this is also coming to CarPlay later this year.
With transit, users can pin their favorite lines and view info on their Apple Watch. When you’re in a subway or bus, you can see your location in real time. It sounds a bit like Citymapper’s itinerary feature. You can also get directions in augmented reality by holding your phone in front of you.
Apple is also announcing a bunch of new features for users who have AirPods. There’s a new conversation mode that boosts conversation volume, effectively turning them into a smart hearing aid. You’ll also get more notifications if you’ve activated the “Announce notifications” setting, and you can tweak that setting to limit it to certain apps or change it depending on your focus mode.
You can also find your AirPods with the Find My app with audio notifications even when they’re in the case. Spatial audio is coming to the Apple TV and Macs with an M1 chip. As announced a few weeks ago, spatial audio for Apple Music is launching right now.
As you can see, iOS 15 is packed with new features. Apple is releasing an initial developer beta today. The public beta phase will start in July, with beta updates throughout the summer and a final release this fall.
Today, Apple is holding a (virtual) keynote on the first day of its developer conference, and the company is expected to talk about a ton of software updates. At 10 AM PT (1 PM in New York, 6 PM in London, 7 PM in Paris), you’ll be able to watch the event right here as the company is streaming it live.
As usual with Apple’s developer conferences, you can expect to learn more about the next major updates of the company’s operating systems. Get ready for iOS 15, iPadOS 15, a new version of macOS and some updates for watchOS and tvOS as well.
But Apple could also use this opportunity to unveil some new products that are particularly popular with developers. Apple has already shipped several laptops and desktop computers with its own ARM-based M1 chip.
High-end models haven’t been updated yet. Rumor has it that Apple could use today’s opportunity to unveil a new iMac Pro, updated MacBook Pro models or even a new external display.
You can watch the live stream directly on this page, as Apple is streaming its conference on YouTube.
If you have an Apple TV, you don’t need to download a new app. You can open the Apple TV app and find the Apple Events section. It lets you stream today’s event and rewatch old ones.
And if you don’t have an Apple TV and don’t want to use YouTube, the company also lets you live stream the event from the Apple Events section on its website. This video feed now works in all major browsers — Safari, Firefox, Microsoft Edge and Google Chrome.
This year’s I/O event from Google was heavy on the “we’re building something cool” and light on the “here’s something you can use or buy tomorrow.” But there were also some interesting surprises from the semi-live event held in and around the company’s Mountain View campus. Read on for all the interesting bits.
We’ve known Android 12 was on its way for months, but today was our first real look at the next big change for the world’s most popular operating system. A new look, called Material You (yes), focuses on users, apps, and things like time of day or weather to change the UI’s colors and other aspects dynamically. Some security features like new camera and microphone use indicators are coming, as well as some “private compute core” features that use AI processes on your phone to customize replies and notifications. There’s a beta out today for the adventurous!
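Google didn’t detail the color-extraction algorithm on stage, but the underlying idea is straightforward to sketch: derive a seed color from the wallpaper, then generate lighter and darker accents from it. A toy version over a handful of hard-coded RGB samples (Google’s real pipeline is certainly more sophisticated):

```python
# Toy sketch of wallpaper-driven theming, the idea behind Material You:
# average a wallpaper's pixels to pick a seed color, then derive
# lighter and darker accent shades from that seed.
pixels = [(210, 120, 60), (200, 110, 70), (190, 130, 50)]  # sample RGB values

def seed_color(px):
    n = len(px)
    return tuple(sum(c[i] for c in px) // n for i in range(3))

def shade(color, factor):
    # factor > 1 lightens toward white (clamped), < 1 darkens toward black
    return tuple(min(255, int(c * factor)) for c in color)

seed = seed_color(pixels)       # (200, 120, 60) — a warm orange
accent_light = shade(seed, 1.3)
accent_dark = shade(seed, 0.6)  # (120, 72, 36)
```

A real implementation would weigh perceptual color spaces and contrast requirements rather than naive RGB averaging, but the flow — wallpaper in, palette out — is the same.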
Millions of people and businesses use Google’s suite of productivity and collaboration tools, but the company felt it would be better if they weren’t so isolated. Now with Smart Canvas you can have a video call as you work on a shared doc together and bring in information and content from your Drive and elsewhere. Looks complicated, but potentially convenient.
It’s a little too easy to stump AIs if you go off script, asking something in a way that seems normal to you but is totally incomprehensible to the language model. Google’s LaMDA is a new natural language processing technique that makes conversations with AI models more resilient to unusual or unexpected queries — more like talking to a real person and less like using a voice interface for a search function. Google demonstrated it with conversations with anthropomorphized versions of Pluto and a paper airplane. And yes, it was exactly as weird as it sounds.
One of the most surprising things at the keynote had to be Project Starline, a high-tech 3D video call setup that uses Google’s previous research and Lytro DNA to show realistic 3D avatars of people on both sides of the system. It’s still experimental but looks very promising.
Few people want to watch a movie on their smartwatch, but lots of people like to use it to track their steps, meditation, and other health-related practices. Wear OS is getting a bunch of Fitbit DNA infused, with integrated health tracking stuff and a lot of third party apps like Calm and Flo.
These two mobile giants have been fast friends in the phone world for years, but when it comes to wearables, they’ve remained rivals. In the face of Apple’s utter dominance in the smartwatch space, however, the two have put aside their differences and announced they’ll work on a “unified platform” so developers can make apps that work on both Tizen and Wear OS.
Apparently Google and Samsung realized that no one is going to buy foldable devices unless they do some really cool things, and that collaboration is the best way forward there. So the two companies will also be working together to improve how folding screens interact with Android.
The smart TV space is a competitive one, and after a few false starts Google has really made it happen with Android TV, which the company announced had reached 80 million monthly active devices — putting it, Roku, and Amazon (the latter two with around 50 million monthly active accounts) all in the same league. The company also showed off a powerful new phone-based remote app that will (among other things) make putting in passwords way better than using the d-pad on the clicker. Developers will be glad to hear there’s a new Google TV emulator and Firebase Test Lab will have Android TV support.
Well, assuming you have a really new Android device with a UWB chip in it. Google is working with BMW first, and most likely other automakers soon, on a new method for unlocking your car when you get near it, or exchanging basic commands, without a fob or Bluetooth. Why not Bluetooth, you ask? Well, Bluetooth is old. UWB is new.
Google and its sibling companies are both leaders in AI research and popular platforms for others to do their own AI work. But its machine learning development tools have been a bit scattershot — useful but disconnected. Vertex is a new development platform for enterprise AI that puts many of these tools in one place and integrates closely with optional services and standards.
Google does a lot of machine learning stuff. Like, a LOT a lot. So they are constantly working to make better, more efficient computing hardware to handle the massive processing load these AI systems create. TPUv4 is the latest, twice as fast as the old ones, and will soon be packaged into 4,096-strong pods. Why 4,096 and not an even 4,000? The same reason any other number exists in computing: powers of 2.
Google Photos is a great service, and the company is trying to leverage the huge library of shots most users have to find patterns like “selfies with the family on the couch” and “traveling with my lucky hat” as fun ways to dive back into the archives. Great! But it’s also taking two photos shot a second apart and having an AI hallucinate what comes between them, leading to a truly weird-looking form of motion that shoots deep, deep into the uncanny valley, from which hopefully it shall never emerge.
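To see why a learned interpolator produces such uncanny motion, it helps to contrast it with the naive alternative: a plain per-pixel cross-fade between the two shots. Google’s feature instead estimates motion and synthesizes new pixels, which is far more convincing and far weirder when it goes wrong. Below is a minimal sketch of that naive baseline only (toy grayscale pixel data, not anything resembling Google’s actual model):

```python
def blend_frames(frame_a, frame_b, t):
    """Naive in-between frame: a per-pixel linear cross-fade.

    This is NOT how learned interpolation works -- models like the one
    behind Google Photos' effect estimate motion and warp pixels along
    it -- but it shows the crude baseline such models improve on.
    t=0 returns frame_a unchanged; t=1 returns frame_b.
    """
    return [
        [int(round(a * (1 - t) + b * t)) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Two tiny 2x2 grayscale "photos" taken a moment apart
f0 = [[0, 100], [50, 200]]
f1 = [[100, 100], [150, 0]]
mid = blend_frames(f0, f1, 0.5)  # the synthesized in-between frame
print(mid)  # [[50, 100], [100, 100]]
```

Where the cross-fade simply averages values (so a moving object ghosts into two translucent copies), a learned model invents a plausible single object partway along its path, which is exactly where the uncanny-valley artifacts come from.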
Google’s “AI makes a hair appointment for you” service Duplex didn’t exactly set the world on fire, but the company has found a new way to apply it. If you forget your password, Duplex will automatically fill in your old password, pick a new one and let you copy it before submitting it to the site, all by interacting with the website’s normal reset interface. It’s only going to work on Twitter and a handful of other sites via Chrome for now, but hey, if it happens to you a lot, maybe it’ll save you some trouble.
The aged among our readers may remember Froogle, Google’s ill-fated shopping interface. Well, it’s back… kind of. The plan is to include lots of product information, from price to star rating, availability and other info, right in the Google interface when you search for something. It sucks up this information from retail sites, including whether you have something in your cart there. How all this benefits anyone more than Google is hard to imagine, but naturally they’re positioning it as wins all around. Especially for new partner Shopify. (Me, I use DuckDuckGo.)
A lot of developers have embraced Google’s Flutter cross-platform UI toolkit. The latest version, announced today, adds some safety settings, performance improvements, and workflow updates. There’s lots more coming, too.
Popular developer platform Firebase got a bunch of new and updated features as well. Remote Config gets a nice update allowing developers to customize the app experience to individual user types, and App Check provides a basic level of security against external threats. There’s plenty here for devs to chew on.
The beta for the next version of Google’s Android Studio environment is coming soon, and it’s called Arctic Fox. It’s got a brand new UI building toolkit called Jetpack Compose, and a bunch of accessibility testing built in to help developers make their apps more accessible to people with disabilities. Connecting to devices to test on them should be way easier now too. Oh, and there’s going to be a version of Android Studio for Apple Silicon.
Google is working on a video calling booth that uses 3D imagery on a 3D display to create a lifelike image of the people on both sides. While it’s still experimental, “Project Starline” builds on years of research and acquisitions, and could be the core of a more personal-feeling video meeting in the near future.
The system was only shown via video of unsuspecting participants, who were asked to enter a room with a heavily obscured screen and camera setup. Then the screen lit up with a video feed of a loved one, but in a way none of them expected:
“I could feel her and see her, it was like this 3D experience. It was like she was here.”
“I felt like I could really touch him!”
“It really, really felt like she and I were in the same room.”
CEO Sundar Pichai explained that this “experience” was made possible with high-resolution cameras and custom depth sensors, almost certainly related to Google’s research projects into essentially converting videos of people and locations into interactive 3D scenes.
The cameras and sensors — probably a dozen or more hidden around the display — capture the person from multiple angles and figure out their exact shape, creating a live 3D model of them. This model and all the color and lighting information is then (after a lot of compression and processing) sent to the other person’s setup, which shows it in convincing 3D. It even tracks their heads and bodies to adjust the image to their perspective.
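The “figure out their exact shape” step comes down to depth sensing plus standard camera geometry: each pixel of a depth image, combined with the camera’s calibration, back-projects to a point in 3D space, and many such point clouds from different angles are fused into one model. Here is a minimal sketch of that single-camera back-projection under an assumed pinhole model (the focal lengths and principal point are made-up values; this is generic computer-vision math, not Starline’s actual pipeline):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3D point cloud.

    Standard pinhole-camera geometry: pixel (u, v) with depth z maps to
    X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z.
    (Illustrative only -- a real capture rig fuses many such clouds from
    multiple calibrated cameras into one live model of the person.)
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # z == 0 means "no depth reading" for that pixel
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A toy 2x2 depth map in meters, with hypothetical intrinsics
depth = [[1.0, 2.0],
         [0.0, 1.5]]  # one pixel has no reading
cloud = depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(len(cloud))  # 3 valid 3D points
```

The receiving end then does the reverse: it projects the fused, compressed cloud back into a 2D image tuned to wherever the viewer’s head happens to be, which is why the system tracks head and body position.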
But 3D TVs have more or less fallen by the wayside; turns out no one wants to wear special glasses for hours at a time, and the quality on glasses-free 3D was generally pretty bad. So what’s making this special 3D image?
Pichai said “we have developed a breakthrough light field display,” probably with the help of the people and IP it scooped up from Lytro, the light field camera company that didn’t manage to get its own tech off the ground and dissolved in 2018.
Light field cameras and displays create and show 3D imagery using a variety of techniques that are very difficult to explain or show in 2D. The startup Looking Glass has made several that are extremely arresting to view in person, showing 3D models and photographic scenes that truly look like tiny holograms.
Whether Google’s approach is similar or different, the effect appears to be equally impressive, as the participants indicate. They’ve been testing this internally and are getting ready to send out units to partners in various industries (such as medicine) where the feeling of a person’s presence makes a big difference.
At this point Project Starline is still very much a prototype, and probably a ridiculously expensive one — so don’t expect to get one in your home any time soon. But it’s not wild to think that a consumer version of this light field setup may be available down the line. Google promises to share more later this year.
After skipping a year, Google is holding a keynote for its developer conference Google I/O. While it’s going to be an all-virtual event, there should be plenty of announcements, new products and new features for Google’s ecosystem.
The conference starts at 10 AM Pacific Time (1 PM on the East Coast, 6 PM in London, 7 PM in Paris) and you can watch the live stream right here on this page.
Rumor has it that Google should give us a comprehensive preview of Android 12, the next major release of Google’s operating system. There could also be some news when it comes to Google Assistant, Home/Nest devices, Wear OS and more.
Orbital imagery is in demand, and if you think having daily images of everywhere on Earth is going to be enough in a few years, you need a lesson in ambition. Alba Orbital is here to provide it with its intention to provide Earth observation at intervals of 15 minutes rather than hours or days — and it just raised $3.4 million to get its next set of satellites into orbit.
Alba attracted our attention at Y Combinator’s latest demo day; I was impressed with the startup’s accomplishment of already having six satellites in orbit, which is more than most companies with space ambitions ever get. But it’s only the start for the company, which will need hundreds more to begin to offer its planned high-frequency imagery.
The Scottish company has spent the last few years in prep and R&D, pursuing the goal, which some must have thought laughable, of creating a solar-powered Earth observation satellite that weighs in at less than one kilogram. The joke’s on the skeptics, however — Alba has launched a proof of concept and is ready to send the real thing up as well.
Little more than a flying camera with a minimum of storage, communication, power and movement, the sub-kilogram Unicorn-2 is about the size of a soda can, with paperback-size solar panel wings, and costs in the neighborhood of $10,000. It should be able to capture imagery at up to 10-meter resolution, good enough to see things like buildings, ships, crops, even planes.
“People thought we were idiots. Now they’re taking it seriously,” said Tom Walkinshaw, founder and CEO of Alba. “They can see it for what it is: a unique platform for capturing data sets.”
Indeed, although the idea of daily orbital imagery like Planet’s once seemed excessive, in some situations it’s quite clearly not enough.
“The California case is probably wildfires,” said Walkinshaw (and it always helps to have a California case). “Having an image once a day of a wildfire is a bit like having a chocolate teapot… not very useful. And natural disasters like hurricanes, flooding is a big one, transportation as well.”
Walkinshaw noted that the company was bootstrapped and profitable before taking on the task of launching dozens more satellites, something the seed round will enable.
“It gets these birds in the air, gets them finished and shipped out,” he said. “Then we just need to crank up the production rate.”
When I talked to Walkinshaw via video call, 10 or so completed satellites in their launch shells were sitting on a rack behind him in the clean room, and more are in the process of assembly. Aiding in the scaling effort is new investor James Park, founder and CEO of Fitbit — definitely someone who knows a little bit about bringing hardware to market.
Interestingly, the next batch to go to orbit (perhaps as soon as in a month or two, depending on the machinations of the launch provider) will be focusing on nighttime imagery, an area Walkinshaw suggested was undervalued. But as orbital thermal imaging startup Satellite Vu has shown, there’s immense appetite for things like energy and activity monitoring, and nighttime observation is a big part of that.
The seed round will get the next few batches of satellites into space, and after that Alba will be working on scaling manufacturing to produce hundreds more. Only once those start going up can it demonstrate the high-cadence imaging it is aiming for, though Alba already has customers lined up to buy the imagery it does collect in the meantime.
The round was led by Metaplanet Holdings, with participation by Y Combinator, Liquid2, Soma, Uncommon Denominator, Zillionize and numerous angels.
As for competition, Walkinshaw welcomes it, but feels secure that he and his company have more time and work invested in this class of satellite than anyone in the world — a major obstacle for anyone who wants to do battle. It’s more likely companies will, as Alba has done, pursue a distinct product complementary to those already on offer or in the works.
“Space is a good place to be right now,” he concluded.