FreshRSS

Today — January 21st, 2020 — Your RSS feeds

Apple TV+ scores Julia Louis-Dreyfus and Meryl Streep, announces release dates for new shows

By Sarah Perez

Apple has scored more big names for its newly launched streaming service, Apple TV+, including “Veep” and “Seinfeld” star Julia Louis-Dreyfus, as well as Meryl Streep, who is attached to an animated short film about Earth Day set to premiere on April 17. In addition, Apple has now announced several new series for Apple TV+, plus renewals and premiere dates for others.

The upcoming Earth Day film, titled “Here We Are: Notes for Living on Planet Earth,” will also star the voice talents of “Room” actor Jacob Tremblay as a 7-year-old child who learns about the planet, and Chris O’Dowd and Ruth Negga as his parents. Streep will provide the voiceover narration.

Meanwhile, Louis-Dreyfus hasn’t announced specific details of her projects. Apple says she’s inked an overall deal with Apple TV+ as both an executive producer and star — her first overall deal with a streaming service. Under the multi-year agreement, Louis-Dreyfus will create multiple new projects exclusively for Apple TV+.

Joked the actress: “I am thrilled about this new partnership with my friends at Apple. Also, many thanks and kudos to my representatives for structuring the deal in such a way that I am paid in AirPods.”

Apple has previously signed other overall deals with names like Alfonso Cuaron, Kerry Ehrin, Jon M. Chu, Justin Lin, Jason Katims, Lee Eisenberg and Oprah, as well as with studios A24 and Imagine Documentaries.

In addition to the big-name talent grabs, Apple also on Friday announced a new documentary series, “Dear…,” from Emmy and Peabody winner R.J. Cutler. Due out this spring, the series will profile internationally known leaders including Oprah Winfrey, Gloria Steinem, Spike Lee, Lin-Manuel Miranda, Yara Shahidi, Stevie Wonder, Aly Raisman, Misty Copeland, Big Bird (uh, what?) and others.

This is not Apple TV+’s first documentary. It’s currently airing the Peace Award winner “The Elephant Queen,” about a tribe of African elephants. And while not a documentary, per se, the service is also now featuring real life-inspired tales of immigrants in the U.S. in the Apple TV+ anthology series “Little America,” which has a documentary-like vibe. Other documentary series and films in the works include “Visible: Out on Television,” “Home,” “Beastie Boys Story” and “Dads.”

Newly announced “Visible…,” exec-produced by Ryan White, Jessica Hargrave, Wanda Sykes, and Wilson Cruz, focuses on the LGBTQ movement and its impact on television. Premiering on Valentine’s Day (Feb. 14), the series will also feature narration from Janet Mock, Margaret Cho, Asia Kate Dillon, Neil Patrick Harris, and Lena Waithe.

 

Another new show, “Central Park,” an animated musical comedy from Loren Bouchard (“Bob’s Burgers”) and executive producers Josh Gad (“Frozen”) and Nora Smith (“Bob’s Burgers”), will arrive this summer. The show features a family that lives in Central Park, the Tillermans, and includes a voice cast with the talents of Josh Gad, Leslie Odom Jr., Kristen Bell, Kathryn Hahn, Tituss Burgess, Daveed Diggs, and Stanley Tucci. The animation style has the distinct look of “Bob’s Burgers” as well.

Apple’s first original series from the U.K., “Trying,” will premiere on May 1st globally. This series stars Rafe Spall and Esther Smith, hails from BBC Studios, and was written by Andy Wolton. As the name hints, the story is about a couple — Jason and Nikki — who are trying to have a baby. But Apple describes the show’s larger theme as one about “growing up, settling down and finding someone to love.”

A new thriller, “Defending Jacob,” based on the 2012 NYT bestseller of the same name, will premiere April 24.

The limited series stars Chris Evans, Michelle Dockery, Jaeden Martell, Cherry Jones, Pablo Schreiber, Betty Gabriel, and Sakina Jaffrey, and tells of a shocking crime that rocks a small Massachusetts town. The story follows an Assistant District Attorney who is torn between duty to uphold justice and his love for his son. Academy Award winner J.K. Simmons guest stars.

Apple also announced its live-action comedy that follows a team of video game developers, “Mythic Quest: Raven’s Banquet,” has been renewed for a second season ahead of its global premiere date of Feb. 7.

The show was co-created by Rob McElhenney, Charlie Day and Megan Ganz, and also stars McElhenney as the fictional company’s creative director, Ian Grimm.

Other shows awarded a second season include “Little America,” “Dickinson,” “See,” “Servant,” “For All Mankind,” “The Morning Show,” and the soon-to-premiere “Home Before Dark.”

Despite not sharing any sort of viewership data — even with the shows’ stars — the renewals speak to Apple’s confidence in its original programming.

“Home Before Dark” is a dramatic mystery series featuring young investigative journalist Hilde Lysiak, and is exec-produced by Jon M. Chu. Based on the real-life kid reporter of the same name, the series takes Hilde’s story into fictional territory by telling a tale of a young girl who moves from Brooklyn to a small lakeside town, where she ends up unearthing a cold case that everyone in town, including her dad, has tried to bury. The real Lysiak, however, runs an online news operation, Orange Street News, which made headlines when the then-11-year-old girl scooped local news outlets by being the first to expose a murder in her hometown of Selinsgrove, PA.

Steven Spielberg’s “Amazing Stories” has also now been given a premiere date of March 6. The rebooted anthology series is run by Eddy Kitsis and Adam Horowitz (“Lost”), and features episode directors Chris Long (“The Americans,” “The Mentalist”), Mark Mylod (“Succession,” “Game of Thrones”), Michael Dinner (“Unbelievable,” “Sneaky Pete”), Susanna Fogel (“Utopia,” “Play By Play”) and Sylvain White (“Stomp The Yard,” “The Rookie”).

Also previously announced, Apple set a premiere date for the new documentary series “Home,” which will air on April 17. The series offers viewers a look inside some of the most innovative homes around the world.

Though only two months old, Apple TV+ has already landed its first Hollywood industry award, as “The Morning Show” star Jennifer Aniston snagged a SAG Award for best female actor in a drama. Co-star Billy Crudup also won a Critics’ Choice Award for best supporting actor.

“The Morning Show,” meanwhile, had been nominated for three Golden Globes, but didn’t win. However, the Globes largely snubbed streamers this year with Netflix earning only two wins, despite 34 nominations.

Google Cloud lands Lufthansa Group and Sabre as new customers

By Frederic Lardinois

Google’s strategy for bringing new customers to its cloud is to focus on the enterprise and specific verticals like healthcare, energy, financial services and retail, among others. Its healthcare efforts recently experienced a bit of a setback, with Epic now telling its customers that it is not moving forward with its plans to support Google Cloud. In return, though, Google got to announce two new customers in the travel business: Lufthansa Group, the world’s largest airline group by revenue, and Sabre, a company that provides backend services to airlines, hotels and travel aggregators.

For Sabre, Google Cloud is now the preferred cloud provider. Like a lot of companies in the travel (and especially the airline) industry, Sabre runs plenty of legacy systems and is currently in the process of modernizing its infrastructure. To do so, it has now entered a 10-year strategic partnership with Google “to improve operational agility while developing new services and creating a new marketplace for its airline, hospitality and travel agency customers.” The promise, here, too, is that these new technologies will allow the company to offer new travel tools for its customers.

When you hear about airline systems going down, it’s often Sabre’s fault, so just being able to avoid that would already bring a lot of value to its customers.

“At Google we build tools to help others, so a big part of our mission is helping other companies realize theirs. We’re so glad that Sabre has chosen to work with us to further their mission of building the future of travel,” said Google CEO Sundar Pichai. “Travelers seek convenience, choice and value. Our capabilities in AI and cloud computing will help Sabre deliver more of what consumers want.”

The same holds true for Google’s deal with Lufthansa Group, which includes German flag carrier Lufthansa itself, but also subsidiaries like Austrian, Swiss, Eurowings and Brussels Airlines, as well as a number of technical and logistics companies that provide services to various airlines.

“By combining Google Cloud’s technology with Lufthansa Group’s operational expertise, we are driving the digitization of our operation even further,” said Dr. Detlef Kayser, Member of the Executive Board of the Lufthansa Group. “This will enable us to identify possible flight irregularities even earlier and implement countermeasures at an early stage.”

Lufthansa Group has selected Google as a strategic partner to “optimize its operations performance.” A team from Google will work directly with Lufthansa to bring this project to life. The idea here is to use Google Cloud to build tools that help the company run its operations as smoothly as possible and to provide recommendations when things go awry due to bad weather, airspace congestion or a strike (which seems to happen rather regularly at Lufthansa these days).

Delta recently launched a similar platform to help its employees.

Canonical’s Anbox Cloud puts Android in the cloud

By Frederic Lardinois

Canonical, the company behind the popular Ubuntu Linux distribution, today announced the launch of Anbox Cloud, a new platform that allows enterprises to run Android in the cloud.

On Anbox Cloud, Android becomes the guest operating system that runs containerized applications. This opens up a range of use cases, ranging from bespoke enterprise apps to cloud gaming solutions.

The result is similar to what Google does with Android apps on Chrome OS, though the implementation is quite different and is based on the LXD container manager, as well as a number of Canonical projects like Juju and MAAS for provisioning the containers and automating the deployment. “LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines – depending on streaming quality and/or workload complexity,” the company points out in its announcement.
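To make the container-per-Android-instance model a bit more concrete, here is a minimal, hypothetical Python sketch using the pylxd client for LXD. It only illustrates launching an LXD instance from an image: the image alias “anbox-android” is invented, and real Anbox Cloud deployments are provisioned through Canonical’s own tooling (Juju and MAAS) rather than a script like this.

    from pylxd import Client

    # Connect to the local LXD daemon (assumes LXD is installed and initialized).
    client = Client()

    # Hypothetical image alias for an Android guest; the actual Anbox Cloud
    # images and profiles are distributed and managed by Canonical's tooling.
    config = {
        "name": "android-session-01",
        "source": {"type": "image", "alias": "anbox-android"},
    }

    # Create and start the container, then report its state.
    container = client.containers.create(config, wait=True)
    container.start(wait=True)
    print(container.name, container.status)  # e.g. "android-session-01 Running"

Each such container would run one Android system, which is what makes the density comparison against full virtual machines meaningful.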

Anbox itself, it’s worth noting, is an open-source project that came out of Canonical and the wider Ubuntu ecosystem. Launched by Canonical engineer Simon Fels in 2017, Anbox runs the full Android system in a container, which in turn allows you to run Android applications on any Linux-based platform.

What’s the point of all of this? Canonical argues that it allows enterprises to offload mobile workloads to the cloud and then stream those applications to their employees’ mobile devices. But Canonical is also betting on 5G to enable more use cases, less because of the additional bandwidth than because of the low latencies it enables.

“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, director of Product at Canonical, in today’s announcement. “Enterprises are now empowered to deliver high performance, high density computing to any device remotely, with reduced power consumption and in an economical manner.”

Outside of the enterprise, one of the use cases that Canonical seems to be focusing on is gaming and game streaming. A server in the cloud is generally more powerful than a smartphone, after all, though that gap is closing.

Canonical also cites app testing as another use case, given that the platform would allow developers to test apps on thousands of Android devices in parallel. Most developers, though, prefer to test their apps on real — not emulated — devices, given the fragmentation of the Android ecosystem.

Anbox Cloud can run in the public cloud, though Canonical is specifically partnering with edge computing specialist Packet to host it on the edge or on-premise. Silicon partners for the project are Ampere and Intel.

Facebook speeds up AI training by culling the weak

By Devin Coldewey

Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.

It’s part of the company’s new focus on “embodied AI,” meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things — responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I’ll leave to your own speculation, but the fact is they’ve recruited and funded serious researchers to look into this and related domains of AI work.

To create such “embodied” systems, you need to train them using a reasonable facsimile of the real world. One can’t expect an AI that’s never seen an actual hallway to know what walls and doors are. And given how slowly real robots actually move in real life, you can’t expect them to learn their lessons that way. That’s what led Facebook to create Habitat, a set of simulated real-world environments meant to be photorealistic enough that what an AI learns by navigating them could also be applied to the real world.

Such simulators, which are common in robotics and AI training, are also useful because, being simulators, you can run many instances of them at the same time — for simple ones, thousands simultaneously, each one with an agent in it attempting to solve a problem and eventually reporting back its findings to the central system that dispatched it.

Unfortunately, photorealistic 3D environments use a lot of computation compared to simpler virtual ones, meaning that researchers are limited to a handful of simultaneous instances, slowing learning to a comparative crawl.

The Facebook researchers, led by Dhruv Batra and Eric Wijmans, the former a professor and the latter a PhD student at Georgia Tech, found a way to speed up this process by an order of magnitude or more. And the result is an AI system that can navigate a 3D environment from a starting point to goal with a 99.9 percent success rate and few mistakes.

Simple navigation is foundational to a working “embodied AI” or robot, which is why the team chose to pursue it without adding any extra difficulties.

“It’s the first task. Forget the question answering, forget the context — can you just get from point A to point B? When the agent has a map this is easy, but with no map it’s an open problem,” said Batra. “Failing at navigation means whatever stack is built on top of it is going to come tumbling down.”

The problem, they found, was that the training systems were spending too much time waiting on slowpokes. Perhaps it’s unfair to call them that — these are AI agents that for whatever reason are simply unable to complete their task quickly.

“It’s not necessarily that they’re learning slowly,” explained Wijmans. “But if you’re simulating navigating a one bedroom apartment, it’s much easier to do that than navigate a ten bedroom mansion.”

The central system is designed to wait for all its dispatched agents to complete their virtual tasks and report back. If a single agent takes 10 times longer than the rest, that means there’s a huge amount of wasted time while the system sits around waiting so it can update its information and send out a new batch.

This little explanatory gif shows how when one agent gets stuck, it delays others learning from its experience.

The innovation of the Facebook team is to intelligently cut off these unfortunate laggards before they finish. After a certain amount of time in simulation, they’re done and whatever data they’ve collected gets added to the hoard.

“You have all these workers running, and they’re all doing their thing, and they all talk to each other,” said Wijmans. “One will tell the others, ‘okay, I’m almost done,’ and they’ll all report in on their progress. Any ones that see they’re lagging behind the rest will reduce the amount of work that they do before the big synchronization that happens.”

In this case you can see that each worker stops at the same time and shares simultaneously.

If a machine learning agent could feel bad, I’m sure it would at this point, and indeed that agent does get “punished” by the system in that it doesn’t get as much virtual “reinforcement” as the others. The anthropomorphic terms make this out to be more human than it is — essentially inefficient algorithms or ones placed in difficult circumstances get downgraded in importance. But their contributions are still valuable.

“We leverage all the experience that the workers accumulate, no matter how much, whether it’s a success or failure — we still learn from it,” Wijmans explained.

What this means is that there are no wasted cycles where some workers are waiting for others to finish. Bringing in more experience on the task at hand sooner means the next batch of slightly better workers goes out that much earlier, a self-reinforcing cycle that produces serious gains.
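To illustrate the pattern, here is a minimal Python sketch of synchronous rollout collection with straggler preemption. It is not Facebook’s DD-PPO implementation (every name and threshold here is invented), but it shows the core idea: once most workers have finished their rollouts, the laggards stop early and their partial experience still goes into the shared batch.

    import random
    import threading
    import time

    NUM_WORKERS = 8
    STEPS_PER_ROLLOUT = 100      # target environment steps per worker per batch
    PREEMPT_FRACTION = 0.6       # once 60% of workers are done, laggards cut themselves off

    finished_workers = 0         # workers that completed a full rollout
    lock = threading.Lock()
    experience = []              # pooled (worker_id, steps_collected) records

    def collect_rollout(worker_id: int) -> None:
        """One worker stepping its simulated environment; slow ones get preempted."""
        global finished_workers
        # Each worker's environment has a different per-step cost
        # (think one-bedroom apartment vs. ten-bedroom mansion).
        step_time = random.uniform(0.001, 0.01)
        steps = 0
        while steps < STEPS_PER_ROLLOUT:
            with lock:
                if finished_workers >= PREEMPT_FRACTION * NUM_WORKERS:
                    break        # most peers are done: stop early, keep partial data
            time.sleep(step_time)  # stand-in for one (expensive) simulator step
            steps += 1
        with lock:
            if steps == STEPS_PER_ROLLOUT:
                finished_workers += 1
            experience.append((worker_id, steps))  # partial experience is still learned from

    threads = [threading.Thread(target=collect_rollout, args=(i,)) for i in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    total_steps = sum(s for _, s in experience)
    print(f"Synchronizing a batch of {total_steps} steps: {sorted(experience)}")

The design choice is simply to trade a little experience from the slowest environments for much less idle time across the fleet, which is why the wasted-cycle problem described above goes away.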

In the experiments they ran, the researchers found that the system, catchily named Decentralized Distributed Proximal Policy Optimization or DD-PPO, appeared to scale almost ideally, with performance increasing nearly linearly with the computing power dedicated to the task. That is to say, increasing the computing power 10x resulted in nearly 10x the results. On the other hand, standard algorithms led to very limited scaling, where 10x or 100x the computing power only yielded a small boost, because of how these sophisticated simulators hamstring themselves.

These efficient methods let the Facebook researchers produce agents that could solve a point to point navigation task in a virtual environment within their allotted time with 99.9 percent reliability. They even demonstrated robustness to mistakes, finding a way to quickly recognize they’d taken a wrong turn and go back the other way.

The researchers speculated that the agents had learned to “exploit the structural regularities,” a phrase that in some circumstances means the AI figured out how to cheat. But Wijmans clarified that it’s more likely that the environments they used have some real-world layout rules.

“These are real houses that we digitized, so they’re learning things about how western style houses tend to be laid out,” he said. Just as you wouldn’t expect the kitchen to enter directly into a bedroom, the AI has learned to recognize other patterns and make other “assumptions.”

The next goal is to find a way to let these agents accomplish their task with fewer resources. Each agent had a virtual camera it navigated with that provided it ordinary and depth imagery, but also an infallible coordinate system to tell where it traveled and a compass that always pointed towards the goal. If only it were always so easy! Even with those resources, though, the success rate before this work was considerably lower, despite far more training time.

Habitat itself is also getting a fresh coat of paint with some interactivity and customizability.

Habitat as seen through a variety of virtualized vision systems.

“Before these improvements, Habitat was a static universe,” explained Wijmans. “The agent can move and bump into walls, but it can’t open a drawer or knock over a table. We built it this way because we wanted fast, large scale simulation — but if you want to solve tasks like ‘go pick up my laptop from my desk,’ you’d better be able to actually pick up that laptop.”

Now Habitat lets users add objects to rooms, apply forces to those objects, check for collisions, and so on. After all, there’s more to real life than disembodied gliding around a frictionless 3D construct.

The improvements should make Habitat a more robust platform for experimentation, and will also make it possible for agents trained in it to directly transfer their learning to the real world — something the team has already begun work on and will publish a paper on soon.

Roku expands to Brazil, launches Roku TV featuring Globoplay in partnership with AOC

By Sarah Perez

Roku announced this morning it’s expanding into Brazil, a sizable market that could have a notable impact on the streaming device maker’s advertising business. The company last summer was said to be considering a Brazil expansion, at a time when over 90% of its advertising business’s revenue came from the U.S. Since then, Roku expanded its TV licensing program to Europe, and at this year’s Consumer Electronics Show revealed plans to add new international Roku TV partners, including its first in the U.K., bringing its total global brand partners to 15.

In Brazil, Roku is entering the market by way of a partnership with AOC to launch the AOC Roku TV in the country. The TV will come in two models: a 32-inch HD TV with integrated wired and wireless connectivity (R$1.199,00), and a 43-inch FHD TV also with both wired and wireless connectivity (R$1.599,00).

Like other Roku TVs, the TV will also offer Roku’s personalized home screen, built-in search, a Roku remote with dedicated channel shortcut buttons, automatic software updates, and support for the Roku mobile app. The Roku Channel Store in Brazil will allow customers to choose from over 5,000 streaming channels for watching movies and TV.

One of the first market-specific channels will be local streaming service Globoplay, which will offer users live TV as well as on-demand movies and TV shows. It also will grab one of the coveted shortcut buttons on the Roku remote control.

“The partnership with Roku has a strategic importance for the development of the streaming market in Brazil. Globoplay content will allow Roku to have an excellent position in our market in the long run,” said Fernando Ramos, Executive Director of G2C Globo, in a statement. “Given the importance of the Roku platform in other countries, we believe Roku has a great opportunity in our country.”

Other services available at launch include the Apple TV app (Apple TV+), BabyFirst TV, sports streaming service DAZN, Deezer, Google Play, Happy Kids, HBO GO, Brazilian streaming service Looke, Netflix, Spotify, and YouTube.

“I’m delighted to bring Roku to Brazil, one of the largest streaming markets in the world,” said Anthony Wood, Founder and CEO of Roku, in a statement. “With the arrival of Roku, consumers in Brazil will now be able to enjoy their favorite TV programs and movies on the easy to use Roku platform. We want to bring streaming to everyone in Brazil.”

The expansion should have an impact in terms of Roku’s revenue — the majority of which today comes from its advertising business, not its device sales. In Q3 2019, Roku platform revenue, led by advertising, reached $179.3 million versus $81.6 million for device revenue. However, most of Roku’s focus to date has been on the North American market, where it’s now the No. 1 licensed TV operating system and No. 1 media platform in the U.S. by hours streamed.

Roku’s rival, Amazon Fire TV, meanwhile, has focused more on international markets. In addition to the U.S., Fire TV has been available in Canada, the U.K., Germany, Ireland, Austria, and India, and Amazon plans to launch more Fire TV Edition smart TVs in these markets and others (including Italy, Spain, and Mexico) in 2020.

The two are often very close in active user numbers, with Amazon Fire TV having just announced over 40 million active users — more than the 32.3 million Roku reported in Q3. (Roku will report Q4 results in February, when those numbers may be updated.)

The new AOC Roku TVs will be available starting January 22 in Casas Bahia, Ponto Frio and Extra, and in other stores as of early February.

 

Skylo raises $103 million to affordably connect the Internet of Things to satellite networks

By Darrell Etherington

One of the biggest opportunities in the new space economy lies in taking the connectivity made possible by ever-growing communications satellite constellations, and making that useful for things and companies here on Earth. Startup Skylo, which emerged from stealth today with a $103 million Series B funding announcement, is one of the players making that possible in an affordable way.

The funding brings Skylo’s total raised to $116 million, following a $14 million Series A. This new round was led by Softbank Group (which at this point carries a complicated set of connotations) and includes existing investors DCM and Eric Schmidt’s Innovation Endeavors. Skylo’s business is based on connecting Internet of Things (IoT) devices, including sensors, industrial equipment, logistics hardware and more, to satellite networks using the cellular-based Narrowband IoT protocol. Its network is already deployed on current geostationary satellites, too, meaning its customers can get up and running without waiting for any new satellites or constellations with dedicated technology to launch.

Already, Skylo has completed tests of its technology with commercial partners in real-world usage, including partners in private enterprise and government, across industries including fisheries, maritime logistics, automotive and more. The company’s main claim to advantage over other existing solutions is that it can offer connectivity for as little as $1 per seat, along with hardware that sells for under $100, which it says adds up to a cost savings of as much as 95 percent vs. other satellite IoT connectivity available on the market.

Its hardware, the Skylo Hub, is a satellite terminal that connects to its network on board geostationary satellites, acting as a “hot spot” to make that available to standard IoT sensors and devices. It’s roughly 8″ by 8″, can be powered internally via battery or plugged in, and is easy for customers to install on their own without any special expertise.

The company was founded in 2017 by CEO Parth Trivedi, CTO Dr. Andrew Nuttall and Chief Hub Architect Dr. Andrew Kalman. Trivedi is an MIT Aerospace and Astronautical engineering graduate; Nuttall has a Ph.D. in Aeronautics from Stanford, and Kalman is a Stanford professor who previously founded CubeSat component kit startup Pumpkin, Inc.

Capella Space reveals new satellite design for real-time control of high-resolution Earth imaging

By Darrell Etherington

Satellite and Earth observation startup Capella Space has unveiled a new design for its satellite technology, which improves upon its existing testbed hardware platform to deliver high-resolution imaging capable of providing detail at under 0.5 meters (1.6 feet). Its new satellite, code-named “Sequoia,” will also be able to provide real-time tasking, meaning Capella’s clients will be able to get imaging from these satellites of a desired area basically on demand.

Capella’s satellites are ‘synthetic aperture radar’ (SAR for short) imaging satellites, which means they’re able to provide 2D images of the Earth’s surface even through cloud cover, or when the area being imaged is on the night side of the planet. SAR imaging resolution is typically much coarser than the 0.5-meter range that Capella’s new design will enable – and it’s especially challenging to get that kind of performance from small satellites, which is what Sequoia will be.

The new satellite design is a “direct result of customer feedback,” Capella says, and includes advancements like an improved solar array for faster charging and quicker recycling; better thermals to allow it to image for longer stretches at a time; a much more agile targeting array that means it can switch targets much more quickly in response to customer needs; and a higher bandwidth downlink, meaning it can transfer more data per orbital pass than any other SAR system from a commercial company in its size class.

This upgrade led to Capella Space locking in contracts with major U.S. government clients, including the U.S. Air Force and the National Reconnaissance Office (NRO). And the tech is ready to fly – it’ll be incorporated into Capella’s next six commercial satellites, which are set to fly starting in March.

AI Can Do Great Things—If It Doesn't Burn the Planet

By Will Knight
The computing power required for AI landmarks, such as recognizing images and defeating humans at Go, increased 300,000-fold from 2012 to 2018. 

Worried About Privacy at Home? There's an AI for That

By Clive Thompson
How edge AI will provide devices with just enough smarts to get the job done without spilling all your secrets to the mothership.

The Secret History of Facial Recognition

By Shaun Raviv
Sixty years ago, a sharecropper’s son invented a technology to identify faces. Then the record of his role all but vanished. Who was Woody Bledsoe, and who was he working for?

Gartner: 2020 device shipments to grow 0.9% to 2.16B thanks to 5G, before 2 further years of decline

By Ingrid Lunden

The analysts at Gartner have published their annual global device forecast, and while 2020 looks like it may be partly sunny, get ready for more showers and poor weather ahead. The analysts predict that a bump from new 5G technology will lead to total shipments of 2.16 billion units — devices that include PCs, mobile handsets, watches, and all sizes of computing devices in between — working out to a rise of 0.9% compared to 2019.

That’s a modest reversal after what was a rough year for hardware makers who battled with multiple headwinds that included — for mobile handsets — a general slowdown in renewal cycles and high saturation of device ownership in key markets; and — in PCs — the wider trend of people simply buying fewer of these bigger machines as their smartphones get smarter (and bigger).

As a point of comparison, last year Gartner revised its 2019 numbers at least three times, starting from “flat shipments” and ending at nearly four percent decline. In the end, 2019 saw shipments of 2.15 billion units — the lowest number since 2010. All of it is a bigger story of decline. In 2005, there were between 2.4 billion and 2.5 billion devices shipped globally.

“2020 will witness a slight market recovery,” writes Ranjit Atwal, research senior director at Gartner. “Increased availability of 5G handsets will boost mobile phone replacements, which will lead global device shipments to return to growth in 2020.”

(Shipments, we should note, do not directly equal sales, but they are used as a marker of how many devices are ordered in the channel for future sales. Shipments precede sales figures: overestimating results in oversupply and overall slowdown.)

The idea that 5G will drive more device sales, however, is still up for debate. Some have argued that while carriers are going hell for leather in their promotion of 5G, the special 5G apps and services that would spur adoption of those devices — as opposed to using 5G to connect machines in an IoT play — are not yet apparent. That leaves 5G as more of an abstract concept than one leading the charge, especially for the mass consumer market and for (human) business users.

In 6 years of hearing pitches in Silicon Valley, I heard '5G' maybe once. That's not from ignorance – the utility network layer is not very important to innovation at the top of the stack.

— Benedict Evans (@benedictevans) January 20, 2020

Still, it may be that hardware might march on ahead regardless. Gartner predicts that 5G devices will account for 12% of all mobile phone shipments in 2020 as handset makers make their devices “5G ready,” with the proportion increasing to 43% by 2022. “From 2020, Gartner expects an increase in 5G phone adoption as prices decrease, 5G service coverage increases and users have better experiences with 5G phones,” writes Atwal. “The market will experience a further increase in 2023, when 5G handsets will account for over 50% of the mobile phones shipped.” That may in part be simply because handset makers are making their devices “5G ready” by default.

Drilling down into the numbers, Gartner believes that worldwide, phones will see a bump of 1.7% this year, up to 1.78 billion, before declining again in 2021 to 1.77 billion and then further in 2022 to 1.76 billion. Asia, in particular China, and other emerging markets will lead the charge.
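As a rough back-of-the-envelope reading of those two projections (a sketch using only the percentages and shipment totals quoted above, not Gartner’s own unit figures):

    # Approximate 5G handset volumes implied by the forecast quoted above:
    # 5G share of phone shipments multiplied by total phone shipments.
    phone_shipments = {2020: 1.78e9, 2022: 1.76e9}  # total phones, per the forecast
    five_g_share = {2020: 0.12, 2022: 0.43}         # 5G share of phone shipments

    for year in (2020, 2022):
        units = phone_shipments[year] * five_g_share[year]
        print(f"{year}: roughly {units / 1e6:.0f} million 5G handsets")
    # 2020: roughly 214 million 5G handsets
    # 2022: roughly 757 million 5G handsets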

Another analyst firm, Counterpoint, has been tracking market share for individual handset makers and notes that Samsung remained the world’s biggest handset maker going into Q4 2019 (final numbers on that quarter should be out in the coming weeks), with 21% of all shipments and slight increases over the year. The BBK group (which owns OPPO, Vivo, Realme, and OnePlus), however, is growing much faster and looks likely to pass Samsung, Huawei and Apple to become the world’s largest. Numbers overall were dragged down by declines for Apple, the world’s number-three handset maker, which saw a slump in its handset sales last year.

Although the market was generally lower across all devices, PC shipments actually saw some growth in 2019. That is set to turn down again this year, to 251 million units, before declining further to 247 million in 2021 and 242 million in 2022.

Part of that is due to slower migration trends — Windows 10 adoption was the primary driver for people switching up and buying new devices last year, but now that’s more or less finished. That will see slower purchasing among enterprise end users, although later adopters in the SME segment will finally make the change when support for Windows 7 finally ends this month (it’s been on the cards for years at this point). In any case, the upgrade cycle is changing because of how Windows is evolving.

“The PC market’s future is unpredictable because there will not be a Windows 11. Instead, Windows 10 will be upgraded systematically through regular updates,” writes Atwal. “As a result, peaks in PC hardware upgrade cycles driven by an entire Windows OS upgrade will end.”

Two trends that might impact shipments — or at least highlight other currents in the hardware market — should also be noted. The first is the role that Chromebooks might play in the PC market. These were one of the faster-growing categories last year, and this year we will see even more models rolled out, with what hardware makers hope will be even more of a boost in functionality to drive adoption. (Google and Intel’s collaboration is one example of how that will work: the two are working on a set of standards that will fit with chips made by Intel to produce what the companies believe are more efficient and compelling notebooks, with tablet-like touchscreens, better battery life, smaller and lighter form factors, and more.)

The second is whether or not smartwatches will make a significant dent into the overall device market. Q3 of last year saw growth of 42% to 14 million shipments globally. And while there have been a number of smartwatch hopefuls, one of the biggest successes has been the Apple Watch, whose growth outstripped that of the wider watch market, at 51%. Indeed, looking at the results of the last several quarters, Apple’s product category that includes Watch sales (wearables, home and accessories) even appears to be on track to outstrip another hardware category, Macs. Whether that will continue, and potentially see others joining in, will be an interesting area to “watch.”

Rocket Lab’s first launch of 2020 is a mission for the National Reconnaissance Office

By Darrell Etherington

Rocket Lab has announced its first mission for 2020 – a dedicated rocket launch on behalf of client the U.S. National Reconnaissance Office (NRO) with a launch window that opens on January 31. The Electron rocket Rocket Lab is using for this mission will take off from its Launch Complex 1 (LC-1) in New Zealand, and it’ll be the first mission Rocket Lab secured under a new contract the NRO is using that allows it to source launch providers quickly and at short notice.

This new Rapid Acquisition of a Small Rocket (RASR) contract model is pretty much ideal for Rocket Lab, since the whole company’s thesis is based around using small, affordable rockets that can be produced quickly thanks to carbon 3D printing used in the manufacturing process. Rocket Lab has already demonstrated the flexibility of its model by bumping a client to the top of the queue when another dropped out last year, and its ability to win an NRO mission under the RASR contract model is further proof that its aim of delivering responsive, timely rocket launch services for small payloads is hitting a market sweet spot.

The NRO is a U.S. government agency that’s in charge of developing, building, launching and operating intelligence satellites. It was originally established in 1961, but was only officially declassified and made public in 1992. Its mandate includes supporting the work of both the U.S. Intelligence Community, as well as the Department of Defense.

Increasingly, the defense industry is interested in small satellite operations, mainly because using smaller, more efficient and economical satellites means that you can respond to new needs in the field more quickly, and that you can also build resiliency into your observation and communication network through sheer volume. Traditional expensive, huge intelligence and military satellites carry giant price tags, have multi-year development timelines and offer sizeable targets to potential enemies without much in the way of redundancy. Small satellites, especially acting as part of larger constellations, mitigate pretty much all of these potential weaknesses.

One of the reasons that Rocket Lab opened its new Launch Complex 2 (LC-2) launch pad in Wallops Island, Virginia, is to better serve customers from the U.S. defense industry. Its first mission from that site, currently set to happen sometime this spring, is for the U.S. Air Force.

Diligent’s Vivian Chu and Labrador’s Mike Dooley will discuss assistive robotics at TC Sessions: Robotics+AI

By Brian Heater

Too often the world of robotics seems to be a solution in search of a problem. Assistive robotics, on the other hand, is one of the primary real-world needs that existing technology can seemingly address almost immediately.

The concept for the technology has been around for some time now and has caught on particularly well in places like Japan, where human help simply can’t keep up with the needs of an aging population. At TC Sessions: Robotics+AI at U.C. Berkeley on March 3, we’ll be speaking with a pair of founders developing offerings for precisely these needs.

Vivian Chu is the cofounder and CEO of Diligent Robotics. The company has developed the Moxi robot to assist with chores and other non-patient tasks, in order to allow caregivers more time to interact with patients. Prior to Diligent, Chu worked at both Google[X] and Honda Research Institute.

Mike Dooley is the cofounder and CEO of Labrador Systems. The Los Angeles-based company recently closed a $2 million seed round to develop assistive robots for the home. Dooley has worked at a number of robotics companies, including, most recently, a stint as the VP of Product and Business Development at iRobot.

Early Bird tickets are now on sale for $275, but you’d better hurry: prices go up by $100 in less than a month. Students can book a super-discounted ticket for just $50 right here.

Yesterday — January 20th, 2020 — Your RSS feeds

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

By Natasha Lomas

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

The leading suggestion is that it’s all about managing the level of risk, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot be applied.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there are no actual legal binds there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Laws that contained at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on — have been called for by some far-sighted regulators.

And a ban would be far harder for platform giants to simply bend to their will.

So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.

— Jonathan Senchyne (@jsench) January 16, 2020

Google takes on AWS and Azure in India with Airtel cloud deal

By Manish Singh

Google has inked a deal with India’s third-largest telecom operator as the American giant looks to grow its cloud customer base in the key overseas market that is increasingly emerging as a new cloud battleground for AWS and Microsoft.

Google Cloud announced on Monday that the new partnership, effective starting today, enables Airtel to offer G Suite to small and medium-sized businesses as part of the telco’s ICT portfolio.

Airtel, which has amassed over 325 million subscribers in India, said it currently serves 2,500 large businesses and over 500,000 small and medium-sized businesses and startups in the country. The companies did not share details of their financial arrangement.

In a statement, Thomas Kurian, chief executive of Google Cloud, said, “the combination of G Suite’s collaboration and productivity tools with Airtel’s digital business offerings will help accelerate digital innovations for thousands of Indian businesses.”

The move follows Reliance Jio, India’s largest telecom operator, striking a similar deal with Microsoft to sell cloud services to small businesses. The two announced a 10-year partnership to “serve millions of customers.”

AWS, which leads the cloud market, interestingly does not maintain any similar deals with a telecom operator — though it did in the past. Such carrier deals, which were very common a decade ago as tech giants looked to acquire new users in India, illustrate what phase of cloud adoption the nation is now in.

Nearly half a billion people in India came online last decade. And slowly, small businesses and merchants are also beginning to use digital tools, storage services, and accept online payments. According to a report by lobby group Nasscom, India’s cloud market is estimated to be worth more than $7 billion in three years.

Like in many other markets, Amazon, Microsoft, and Google are locked in an intense battle to win cloud customers in India. All of them offer near identical features and are often willing to pay out the credit a potential client still has remaining with a rival to convince them to switch, industry executives have told TechCrunch.

Before yesterday — Your RSS feeds

Shadows’ Dylan Flinn and Kombo’s Kevin Gould on the business of ‘virtual influencers’

By Eric Peckham

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 2 of 3: the business of virtual influencers

Today’s discussion focuses on virtual influencers: fictional characters that build and engage followings of real people over social media. To explore the topic, I spoke with two experienced entrepreneurs:

  • Dylan Flinn is CEO of Shadows, an LA-based animation studio that’s building a roster of interactive characters for social media audiences. Dylan started his career in VC, funding companies such as Robinhood, Patreon and Bustle, and also spent two years as an agent at CAA.
  • Kevin Gould is CEO of Kombo Ventures, a talent management and brand incubation firm that has guided the careers of top influencers like Jake Paul and SSSniperWolf. He is the co-founder of three direct-to-consumer brands — Insert Name Here, Wakeheart and Glamnetic — and is an angel investor in companies like Clutter, Beautycon and DraftKings.

Compound’s Mike Dempsey on virtual influencers and AI characters

By Eric Peckham

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 1 of 3: the investor perspective

In a series of three interviews, I’m exploring the startup opportunities in both of these spaces in greater depth. First, Michael Dempsey, a partner at VC firm Compound who has blogged extensively about digital characters, avatars and animation, offers his perspective as an investor hunting for startup opportunities within these spaces.

Apple's Latest Deal Shows How AI Is Moving Right Onto Devices

By Will Knight
The iPhone maker's purchase of startup Xnor.ai is the latest move toward a trend of computing on the "edge," rather than in the cloud. 

Artist Refik Anadol Turns Data Into Art, With Help From AI

By Tom Simonite
He sees pools of data as raw material for visualizations that he calls a new kind of “sculpture.”

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

By Natasha Lomas

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with 4 & 5: aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies about “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI“.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies — it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.
