Germany’s top soccer (football) league, Bundesliga, announced today it is partnering with AWS to use artificial intelligence to enhance the fan experience during games.
Andreas Heyden, executive vice president for digital sports at the Deutsche Fußball Liga (DFL), the entity that runs the Bundesliga, says that this could take many forms, depending on whether the fan is watching a broadcast of the game or interacting online.
“We try to use technology in a way to excite a fan more, to engage a fan more, to really take the fan experience to the next level, to show relevant stats at the relevant time through broadcasting, in apps and on the web to personalize the customer experience,” Heyden said.
This could involve delivering personalized content. “In times like this when attention spans are shrinking, when a user opens up the app the first message should be the most relevant message in that context in that time for the specific user,” he said.
It can also help provide advanced statistics to fans in real time, even going so far as to predict, at any particular moment in a game, the probability that a goal will be scored. Heyden thinks of it as telling a story with numbers, rather than reporting what happened after the fact.
“We want to, with the help of technology, tell stories that could not have been told without the technology. There’s no chance that a reporter could come up with a number for the probability of a shot [scoring in a given moment]. AWS can,” he said.
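Models like the one Heyden describes are usually framed as “expected goals” estimators: a statistical model scores each shot situation on features like distance and angle. As a rough illustration only — the features and coefficients below are invented, not the DFL/AWS model — a logistic model might look like this:

```python
import math

def goal_probability(distance_m: float, angle_deg: float,
                     defenders_in_path: int) -> float:
    """Toy logistic ("expected goals"-style) model for the probability
    that a shot scores.

    The coefficients are illustrative placeholders, not values from the
    Bundesliga/AWS system; a real model would be fit on historical shot data.
    """
    z = 2.0 - 0.15 * distance_m + 0.03 * angle_deg - 0.6 * defenders_in_path
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps z into (0, 1)
```

All else being equal, the sketch assigns shots taken closer to goal a higher scoring probability, which is the core intuition behind real-time stats of this kind.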
Werner Vogels, CTO at Amazon, says that using machine learning and other technologies on the AWS platform to enrich the experience of watching a game should help attract younger fans, regardless of the sport. “All of these kind of augmented customer fan experiences are crucial in engaging a whole new generation of fans,” Vogels told TechCrunch.
He adds that this kind of experience simply wasn’t possible until recently because the technology didn’t exist. “These things were impossible five or 10 years ago, mostly because now with all the machine learning software, as well as how the [pace of technology] has accelerated at such a [rate] at AWS, we’re now able to do these things in real time for sports fans.”
The Bundesliga is not just any football league. It is the second biggest in the world in terms of revenue and boasts the highest stadium attendance of any football league worldwide. Today’s announcement is an extension of an ongoing relationship between the DFL and AWS, which started in 2015 when Heyden helped move the league’s operations to the cloud on AWS.
Heyden says that it’s no coincidence he ended up using AWS instead of another cloud company. He has known Vogels (who also happens to be a huge soccer fan) for many years, and had been using AWS for more than a decade, well before he joined the DFL.
TechCrunch Sessions: Robotics+AI 2020 is gearing up to be one amazing show. This annual day-long event draws the brightest minds and makers from these two industries — 1,500 attendees last year alone. And if you really want to make 2020 a game-changing year, grab yourself an early-bird ticket and save $150 before prices go up after January 31.
Not convinced yet? Check out some agenda highlights featuring some of today’s leading minds in robotics and AI:
See the full agenda here.
If you’re a startup, nab one of the five demo tables left and showcase your company to new customers, press, and potential investors. Demo tables run $2,200 and come with four attendee tickets so you can divide and conquer the networking scene at the conference.
Students, get your super-reduced $50 ticket here, learn from some of the biggest names in the biz and maybe meet your future employer or land an internship.
Don’t forget, the early-bird ticket sale ends on January 31. After that, prices go up by $150. Purchase your tickets here and save an additional 18% when you book a group of four or more.
While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.
The deployment comes after a multi-year period of trials by the Met and police in South Wales.
The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.
“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.
It also claims cameras will be “clearly signposted”, adding that officers deployed to the operation “will hand out leaflets about the activity”.
“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”
The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.
In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.
“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.
London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.
The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.
The Met says its hope is that the AI-powered tech will help it tackle serious crime, including serious violence, gun and knife crime and child sexual exploitation, and “help protect the vulnerable”.
However, its phrasing is more than a little ironic, given that facial recognition systems can be prone to racial bias, for example, owing to factors such as bias in the datasets used to train AI algorithms.
So in fact there’s a risk that police use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.
Instead it takes pains to couch the technology as an “additional tool” to assist its officers.
“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.
While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.
A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.
On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.
UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.
An independent review of the Met’s earlier trials, commissioned by the force itself, also suggested such use would not meet key legal requirements.
“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met’s earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.
When the Met trialled #FacialRecognition tech, it commissioned an independent review of its use.
The Met failed to consider the human rights impact of the tech
Its use was unlikely to pass the key legal test of being "necessary in a democratic society"
— Liberty (@libertyhq) January 24, 2020
A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.
Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.
A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.
Uber Advanced Technologies Group will start mapping Washington, D.C., ahead of plans to begin testing its self-driving vehicles in the city this year.
Initially, there will be three Uber vehicles mapping the area, a company spokesperson said. These vehicles, which will be manually driven and have two trained employees inside, will collect sensor data using a top-mounted sensor wing equipped with cameras and a spinning lidar. The data will be used to build high-definition maps. The data will also be used for Uber’s virtual simulation and test track testing scenarios.
Uber intends to launch autonomous vehicles in Washington, D.C. before the end of 2020.
At least one other company is already testing self-driving cars in Washington, D.C. Ford announced in October 2018 plans to test its autonomous vehicles in Washington, D.C. Argo AI is developing the virtual driver system and high-definition maps designed for Ford’s self-driving vehicles.
Argo, which is backed by Ford and Volkswagen, started mapping the city in 2018. Testing was expected to begin in the first quarter of 2019.
Uber ATG has kept a low profile ever since one of its human-supervised test vehicles struck and killed a pedestrian in Tempe, Arizona, in March 2018. The company halted its entire autonomous vehicle operation immediately following the incident.
Nine months later, Uber ATG resumed on-road testing of its self-driving vehicles in Pittsburgh, following a Pennsylvania Department of Transportation decision to authorize the company to put its autonomous vehicles on public roads. The company hasn’t resumed testing in other markets such as San Francisco.
Uber is also collecting data and mapping in three other cities: Dallas, San Francisco and Toronto. In those cities, just as in Washington, D.C., Uber manually drives its test vehicles.
Uber spun out the self-driving car business in April 2019 after closing $1 billion in funding from Toyota, auto-parts maker Denso and SoftBank’s Vision Fund. The deal valued Uber ATG at $7.25 billion, at the time of the announcement. Under the deal, Toyota and Denso are providing $667 million, with the Vision Fund throwing in the remaining $333 million.
It’s one thing to develop a working machine learning model; it’s another to put it to work in an application. Cortex Labs is an early-stage startup with open source tooling designed to help data scientists take that last step.
The company’s founders were students at Berkeley when they observed that one of the problems around creating machine learning models was finding a way to deploy them. While there was a lot of open source tooling available, data scientists are not experts in infrastructure.
CEO Omer Spillinger says that infrastructure was something the four members of the founding team — himself, CTO David Eliahu, head of engineering Vishal Bollu and head of growth Caleb Kaiser — understood well.
What the four founders did was take a set of open source tools and combine them with AWS services to provide a way to deploy models more easily. “We take open source tools like TensorFlow, Kubernetes and Docker and we combine them with AWS services like CloudWatch, EKS (Amazon’s flavor of Kubernetes) and S3 to basically give one API for developers to deploy their models,” Spillinger explained.
He says that a data scientist starts by uploading an exported model file to S3 cloud storage. “Then we pull it, containerize it and deploy it on Kubernetes behind the scenes. We automatically scale the workload and automatically switch you to GPUs if it’s compute intensive. We stream logs and expose [the model] to the web. We help you manage security around that, stuff like that,” he said.
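Under the hood, “containerize it and deploy it on Kubernetes” amounts to generating Kubernetes objects that point a serving container at the uploaded model. The sketch below is a generic illustration of that pattern, not Cortex’s actual output — the image, names and environment variable are hypothetical:

```python
def model_deployment_manifest(name: str, s3_model_path: str,
                              image: str = "tensorflow/serving:2.8.0",
                              replicas: int = 1) -> dict:
    """Build a minimal Kubernetes Deployment spec that serves a model
    pulled from S3.

    Illustrative of what deploy tooling generates behind the scenes; the
    container image and env var name here are assumptions, not Cortex's API.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        # In practice an init container or sidecar would
                        # sync the model from S3 to local disk first.
                        "env": [{"name": "MODEL_PATH",
                                 "value": s3_model_path}],
                        "ports": [{"containerPort": 8501}],
                    }]
                },
            },
        },
    }
```

A tool in this space would serialize such a dict to YAML, apply it to the cluster and layer autoscaling, GPU scheduling and log streaming on top — the pieces Spillinger describes.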
While he acknowledges this is not unlike Amazon SageMaker, the company’s long-term goal is to support all of the major cloud platforms. SageMaker, of course, only works on the Amazon cloud, while Cortex will eventually work on any cloud. In fact, Spillinger says that the biggest feature request they’ve gotten to this point is support for Google Cloud. He says that it, along with support for Microsoft Azure, is on the road map.
The Cortex founders have been keeping their head above water while they wait for a commercial product with the help of an $888,888 seed round from Engineering Capital in 2018. If you’re wondering about that oddly specific number, it’s partly an inside joke — Spillinger’s birthday is August 8th — and partly a number arrived at to make the valuation work, he said.
For now, the company is offering the open source tools, and building a community of developers and data scientists. Eventually, it wants to monetize by building a cloud service for companies who don’t want to manage clusters — but that is down the road, Spillinger said.
TikTok, the fast-growing user-generated video app from China’s Bytedance, has been building a new music streaming service to compete against the likes of Spotify, Apple Music and Amazon Music. And today it’s announcing a deal that helps pave the way for a global launch of it. It has inked a licensing deal with Merlin, the global agency that represents tens of thousands of independent music labels and hundreds of thousands of artists, for music from those labels to be used legally on the TikTok platform anywhere that the app is available.
The news is significant because this is the first major music licensing deal signed by TikTok as part of its wider efforts in the music industry. That includes both its mainstay short-form videos — where music plays a key role (the app, before it was acquired by Bytedance, was even called ‘Musical.ly’) — as well as new music streaming services.
Specifically, a source close to TikTok has confirmed to TechCrunch that this Merlin deal covers its upcoming music subscription service Resso.
Resso was long rumoured and eventually spotted in the wild at the end of last year when Bytedance tested the app in India and Indonesia. Bytedance owns the Resso trademark, so it’s a good bet that it will make its way to more markets soon. (Possibly with features that differentiate this later entrant from others in the market? Recall Bytedance acquired an AI-based music startup called Jukedeck last year.)
“Independent artists and labels are such a crucial part of music creation and consumption on TikTok,” said Ole Obermann, global head of music for Bytedance and TikTok, in a statement. “We’re excited to partner with Merlin to bring their family of labels to the TikTok community. The breadth and diversity of the catalogue presents our users with an even larger canvas from which to create, while giving independent artists the opportunity to connect with TikTok’s diverse community.”
Music is a fundamental part of the TikTok experience, and this deal covers everything that’s there today — videos created by TikTok users, sponsored videos created for marketing — as well as whatever is coming up around the corner.
A music streaming app, which TikTok has reportedly been gearing up to launch for some time, is one way that the company could help generate revenue. Despite being one of the most popular apps of 2019, monetisation has largely eluded the company up to now.
One reason monetising hasn’t happened is the lack of licensing deals at the other end of the chain. As of December, TikTok had yet to sign any deals with the “majors” — Sony Music, Warner Music and Universal Music — and from what we understand Merlin is the first big deal of its kind for the company. However, there are signs that more such agreements may be coming soon. Obermann, who was hired away from Warner Music last year, in turn hired another former Warner colleague, Tracy Gardner, who now leads label licensing for the company. And just yesterday, the company opened an office in Los Angeles, the heart of the music industry.
The move to bring more licensed music usage to TikTok (and other Bytedance apps) is significant for other reasons, too.
On one hand, it’s about labels trying to evolve with the times, collecting revenues wherever audiences happen to be, whether that is in short-form user-generated video, in advertising that runs alongside that, or in a new music service capitalising on the new vogue for streamed media.
“This partnership with TikTok is very significant for us,” said Jeremy Sirota, CEO, Merlin, in a statement. “We are seeing a new generation of music services and a new era of music-related consumption, much of it driven by the global demand for independent music. Merlin members are increasingly using TikTok for their marketing campaigns, and today’s partnership ensures that they and their artists can also build new and incremental revenue streams.”
On the other hand, the deal is significant because it underscores how TikTok is increasingly working to legitimise itself in the wider tech and media marketplace.
While Bytedance’s acquisition of Musical.ly, the app it merged into TikTok, continues to face regulatory scrutiny, the company has been working on ways to assert its independence from China’s control, which has included many clarifications about where its content is hosted (not China! it says) and even a search for a new US-based CEO. On another front, more licensing deals should also help the company with the many legal and PR issues that have been hanging over it concerning how it pays out when music is used in its popular app.
After announcing a $550 million fundraise last August, UK AI-based health services startup Babylon Health is putting some of that money to use with its widest-ranging project to date. The company has inked a 10-year deal with the city of Wolverhampton in England to provide an integrated health app covering 300,000 people, the entire population of the city.
The financial terms of the deal are not being disclosed but Babylon confirmed that the NHS is not taking a stake in the startup as part of it. The plan is to start rolling out the first phase of the app by the end of this year.
Babylon Health is known for building AI-based platforms that help diagnose patients’ issues. Babylon’s services are provided as a complement to seeing actual clinicians — the idea being that the interactions and AI can speed up some of the work of getting people seen and into the system. Some of Babylon’s best known work to date has been a chatbot that it built for the NHS in the UK, and in addition to working with a number of private businesses on their employee healthcare services, it is also now in the process of rolling out services in 11 countries in Asia. (In August, Babylon said it was delivering 4,000 clinical consultations each day, or one patient interaction every 10 seconds; covering 4.3 million people worldwide; with more than 1.2 million digital consultations completed to date.)
Even with all these milestones passed — milestones that have helped catapult Babylon to a $2 billion valuation — its latest project will be its most ambitious to date: it will be the first time Babylon works on a project that combines both hospital and primary medical care into an all-in-one app.
“We are extremely proud of this exciting 10-year partnership with RWT which will benefit patients and the NHS as a whole,” said Ali Parsa, CEO and Founder of Babylon, in a statement. “We have over 1,000 AI experts, clinicians, engineers and scientists who will be helping to make Digital-First Integrated Care a reality and provide fast, effective, proactive care to patients. Together with RWT, we can demonstrate this works and help the NHS lead healthcare across the world.”
The plan is for Babylon and the Royal Wolverhampton NHS Trust — the local health authority and body that will oversee the work for the city’s population — to build an app that will not only provide remote diagnoses, but also live monitoring of patients with chronic conditions (using wearables and other monitoring apps) and the ability to connect people with doctors and others remotely.
Other services will include the ability to let patients access their own medical records and review their own consultations; book appointments; renew prescriptions; view a “digital twin” of their own state of health based on medical history and other details; and manage their rehab after a procedure, illness or injury.
The gap in the market that Babylon is tackling is the fact that many countries are seeing populations that are both growing bigger and generally living longer, and that is putting a strain not just on public health services, but also those that are managed completely or partly privately. This has been a particularly painful theme in Babylon’s home market, the UK, where healthcare is nationalised and is regularly facing budgetary and human capital shortages, but there is no infrastructure (or consumer finance) to supplement that for the majority of people.
The aim, however, goes beyond simply filling NHS gaps; it’s also about trying to build services that fit better with how people live, for example to provide them with certain services at home to save them from coming into, say, a hospital to be treated if the condition merits it.
“We know from our active engagement with patients of all ages and backgrounds that they are keen to use technology that will improve access and give them greater control of their own health, wellbeing and social inclusion,” said Trust Chief Executive David Loughton, CBE, in a statement. “For example, it should be normal for a patient with a long-term condition to take a blood-test at home, have the results fed into their app which alerts the specialist if they need an appointment. The patient chooses a time to meet, has the consultation through the app, works with their specialist to build a care plan, and the app encourages them to complete it whilst assessing the impact it’s having. This is our vision for properly joined-up and integrated care.”
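The home blood-test flow Loughton sketches boils down to comparing incoming readings against reference ranges and alerting the specialist when one falls outside. A minimal illustration — the test names and ranges below are hypothetical placeholders, not anything from Babylon’s system:

```python
def needs_specialist_alert(test_name: str, value: float,
                           reference_ranges: dict) -> bool:
    """Return True when a home test result falls outside its reference
    range, i.e. when the app should alert the specialist so an
    appointment can be offered.

    reference_ranges maps a test name to a (low, high) tuple. The names
    and thresholds callers pass in are purely illustrative here.
    """
    low, high = reference_ranges[test_name]
    return not (low <= value <= high)
```

For example, with a hypothetical range of `{"hba1c_percent": (4.0, 5.6)}`, a reading of 6.2 would trigger an alert while 5.0 would not. A real clinical system would of course layer per-patient thresholds, trend analysis and clinician review on top of a check like this.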
AI has become a major theme in the drive to improve healthcare and medicine overall, primarily in two areas: providing diagnostic and other services directly to patients, acting in roles that would otherwise be played by humans; and in research, acting as a “super brain” to help perform complex calculations in the quest for better drug discovery, disease pathology and other areas that would take humans far longer on their own.
Well aware of the strains on health systems, startups, investors and other stakeholders have jumped into using AI in the hopes of creating more efficiency and potentially better outcomes. But that doesn’t mean that all the outcomes have actually been better. Google’s DeepMind encountered a lot of controversy around how it handled patient data in its own NHS deals, leading to questions and investigations that have now stretched into years. And BenevolentAI — which has been working on drug discovery — found itself raising money last year in a round that devalued the loss-making company by half.
Paul Bate, Babylon’s MD of NHS services, noted in an interview that Babylon is mindful of patient privacy and consent, and notes that the service is opt-in and transparent in its data usage when engaging users. He declined to comment on how and when data will be retained by the NHS or by Babylon (or both) but said it would be made clear in the app when it is launched.
“It’s not a simple answer to say whether one body or another will keep it, but it will be transparent, both for us and the NHS, when it launches,” he added.
Google is giving an A.I. upgrade to its Collections feature — basically Google’s own take on Pinterest, but built into Google Search. Originally a way of organizing images, the Collections feature that launched in 2018 lets you save any type of search result — images, bookmarks or maps locations — into groups called “Collections” for later perusal. Starting today, Google will make suggestions about items you can add to Collections based on your Search history across specific activities like cooking, shopping or hobbies.
The idea here is that people often use Google for research but don’t remember to save web pages for easy retrieval. That leads users to dig through their Google Search history in an effort to find the lost page. Google believes that A.I. smarts can improve the process by helping users build reference collections — starting the process for them.
Here’s how it works. After you’ve visited pages on Google Search in the Google app or on the mobile web, Google will group together similar pages related to things like cooking, shopping and hobbies, then prompt you to save them to suggested Collections.
For example, after an evening of scouring the web for recipes, Google may share a suggested Collection with you titled “Dinner Party” which is auto-populated with relevant pages from your Search history. You can uncheck any recipes that don’t belong and rename the collection from “Dinner Party” to something else of your choosing, if you want. You then tap the “Create” button to turn this selection from your Search history into a Collection.
These Collections can be found later in the Collections tab in the Google app or through the Google.com side menu on the mobile web. There is an option to turn off this feature in Settings, but it’s enabled by default.
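The grouping step can be pictured as a simple topical clustering of recently visited page titles. Google’s actual system is far more sophisticated; the keyword lists below are invented purely to illustrate the idea:

```python
from collections import defaultdict

# Hypothetical topic vocabularies -- a stand-in for whatever learned
# representations Google actually uses to group pages by activity.
TOPIC_KEYWORDS = {
    "Cooking": {"recipe", "dinner", "bake", "ingredients"},
    "Shopping": {"buy", "price", "review", "sale"},
}

def suggest_collections(page_titles: list) -> dict:
    """Group visited page titles into suggested collections by naive
    keyword overlap. Illustrative only; a production system would use
    learned embeddings and the user's full activity, not word matching."""
    groups = defaultdict(list)
    for title in page_titles:
        words = set(title.lower().split())
        for topic, keywords in TOPIC_KEYWORDS.items():
            if words & keywords:  # any shared keyword assigns the topic
                groups[topic].append(title)
                break
    return dict(groups)
```

Fed a night’s browsing of recipe pages, a sketch like this would surface a “Cooking” group — the rough analogue of Google’s auto-populated “Dinner Party” suggestion.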
The Pinterest-like feature aims to keep Google users from venturing off Google sites to other places where they can save and organize things they’re interested in — whether that’s a list of recipes they want to add to a pinboard on Pinterest or a list of clothing they want to add to a wish list on Amazon. In particular, keeping e-commerce shoppers from leaving Google for Amazon is something the company is heavily focused on these days. The company recently rolled out a big revamp of its Google Shopping vertical and just this month launched a way to shop directly from search results.
The issue with sites like Pinterest is that they’re capturing shoppers at an earlier stage in the buying process — during the information-gathering and inspiration-seeking research stage, that is. By saving links to Pinterest’s pinboards, shoppers ready to make a purchase are bypassing Google (and its advertisers) to check out directly with retailers.
Meanwhile, Google is simultaneously losing traffic to Amazon, which now surpasses Google for product searches. Even Instagram, of all places, has become a rival, as it’s now a place to shop. The app’s Shopping feature is funneling users right from its visual ads to a checkout page in the app. PayPal, catching wind of this trend, recently spent $4 billion to buy Honey in order to capture shoppers earlier in their journey.
For users, Google Collections is just about encouraging you to put your searches into groups for later access. But for Google, it’s also about getting people to shop on Google and stay on Google, no matter what they’re researching. Suggested Collections may lure you in as an easy way to organize recipes, but ultimately this feature will be about getting users to develop a habit of saving their searches to Google — and particularly their product searches.
Once you have a Collection set up, Google can point you to other related items, including websites, images, and more. Most importantly, this will serve as a new way to get users to perform more product searches, too, as it can send users to other product pages without the user having to type in an explicit search query.
The update also comes with an often-requested collaboration feature, which means you can now share a collection with others for either viewing or editing.
Sharing and related content suggestions are live worldwide.
The A.I.-powered suggested collections are live in the U.S. for English users starting today and will reach more markets in time.
Farming is one of the oldest professions, but today those amber waves of grain (and soy) are a test bed for sophisticated robotic solutions to problems farmers have had for millennia. Learn about the cutting edge (sometimes literally) of agricultural robots at TC Sessions: Robotics+AI on March 3 with the founders of Traptic, Pyka and FarmWise.
You may remember Traptic and its co-founder and CEO Lewis Anderson from Disrupt SF 2019, where the company was a finalist in the Startup Battlefield. Traptic has developed a robotic berry picker that identifies ripe strawberries and plucks them off the plants with a gentle grip. It could be the beginning of a new automated era for the fruit industry, which is decades behind grains and other crops when it comes to machine-based harvesting.
FarmWise has a job that’s equally delicate yet involves rough treatment of the plants — weeding. Its towering machine trundles along rows of crops, using computer vision to locate and remove invasive plants, working 24/7, 365 days a year. CEO Sebastian Boyer will speak to the difficulty of this task and how he plans to evolve the machines to become “doctors” for crops, monitoring health and spontaneously removing pests like aphids.
Pyka’s robot is considerably less earthbound than those: an autonomous, all-electric crop-spraying aircraft — with wings! This is a much different challenge from the more stable farming and spraying drones like those of DroneSeed and SkyX, but the choice gives the craft more power and range, hugely important for today’s vast fields. Co-founder Michael Norcia can speak to that scale and his company’s methods of meeting it.
These three companies and founders are at the very frontier of what’s possible at the intersection of agriculture and technology, so expect a fruitful conversation.
$150 early-bird savings end on February 14! Book your $275 Early-Bird Ticket today and put that extra money in your pocket.
Students, grab your super-discounted $50 tickets right here. You might just meet your future employer — or land an internship — at this event.
Startups, we only have five demo tables left for the event. Book your $2,200 demo table here and get in front of some of today’s leading names in the biz. Each table comes with four tickets to attend the show.
IT operations teams collect tons of data across a number of monitoring and logging tools — way too much for any team of humans to keep up with. That’s why startups like Loom Systems have turned to AI to help sort through it, finding issues and patterns in the data that would be challenging or impossible for humans to spot. Applying AI to operations data in this manner has become known as AIOps in industry parlance, and today ServiceNow announced it is acquiring Loom Systems to bring those capabilities in house.
ServiceNow is first and foremost a company trying to digitize the service process, however that manifests itself, and IT service operations is a big part of that. Companies can monitor their systems, wait until a problem happens and then try to track down the cause and fix it — or they can use the power of artificial intelligence to find potential dangers to system health and neutralize them before they become major problems. That’s what an AIOps product like Loom’s can bring to the table.
Jeff Hausman, vice president and general manager of IT Operations Management at ServiceNow, sees Loom’s strengths merging with ServiceNow’s existing tooling to help keep IT systems running. “We will leverage Loom Systems’ log analytics capabilities to help customers analyze data, automate remediation and reduce L1 incidents,” he told TechCrunch.
Loom co-founder and CEO Gabby Menachem, not surprisingly, sees a similar value proposition. “By joining forces, we have the unique opportunity to bring together our AI innovations and ServiceNow’s AIOps capabilities to help customers prevent and fix IT issues before they become problems,” he said in a statement.
Loom has raised $16 million since it launched in 2015, according to PitchBook data. Its most recent round, for $10 million, was in November 2019. Today’s deal is expected to close by the end of this quarter.
Google’s strategy for bringing new customers to its cloud is to focus on the enterprise and specific verticals like healthcare, energy, financial services and retail, among others. Its healthcare efforts recently experienced a bit of a setback, with Epic telling its customers that it is not moving forward with its plans to support Google Cloud. In return, though, Google got to announce two new customers in the travel business: Lufthansa Group, the world’s largest airline group by revenue, and Sabre, a company that provides backend services to airlines, hotels and travel aggregators.
For Sabre, Google Cloud is now the preferred cloud provider. Like a lot of companies in the travel (and especially the airline) industry, Sabre runs plenty of legacy systems and is currently in the process of modernizing its infrastructure. To do so, it has now entered a 10-year strategic partnership with Google “to improve operational agility while developing new services and creating a new marketplace for its airline, hospitality and travel agency customers.” The promise here, too, is that these new technologies will allow the company to offer new travel tools for its customers.
When you hear about airline systems going down, it’s often Sabre’s fault, so just being able to avoid that would already bring a lot of value to its customers.
“At Google we build tools to help others, so a big part of our mission is helping other companies realize theirs. We’re so glad that Sabre has chosen to work with us to further their mission of building the future of travel,” said Google CEO Sundar Pichai. “Travelers seek convenience, choice and value. Our capabilities in AI and cloud computing will help Sabre deliver more of what consumers want.”
The same holds true for Google’s deal with Lufthansa Group, which includes German flag carrier Lufthansa itself, but also subsidiaries like Austrian, Swiss, Eurowings and Brussels Airlines, as well as a number of technical and logistics companies that provide services to various airlines.
“By combining Google Cloud’s technology with Lufthansa Group’s operational expertise, we are driving the digitization of our operation even further,” said Dr. Detlef Kayser, Member of the Executive Board of the Lufthansa Group. “This will enable us to identify possible flight irregularities even earlier and implement countermeasures at an early stage.”
Lufthansa Group has selected Google as a strategic partner to “optimize its operations performance.” A team from Google will work directly with Lufthansa to bring this project to life. The idea here is to use Google Cloud to build tools that help the company run its operations as smoothly as possible and to provide recommendations when things go awry due to bad weather, airspace congestion or a strike (which seems to happen rather regularly at Lufthansa these days).
Delta recently launched a similar platform to help its employees.
Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.
It’s part of the company’s new focus on “embodied AI,” meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things — responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I’ll leave to your own speculation, but the fact is they’ve recruited and funded serious researchers to look into this and related domains of AI work.
To create such “embodied” systems, you need to train them using a reasonable facsimile of the real world. One can’t expect an AI that’s never seen an actual hallway to know what walls and doors are. And given how slowly real robots move in real life, you can’t expect them to learn their lessons there. That’s what led Facebook to create Habitat, a set of simulated real-world environments meant to be photorealistic enough that what an AI learns by navigating them could also be applied to the real world.
Such simulators, which are common in robotics and AI training, are also useful because, being simulators, you can run many instances of them at the same time — for simple ones, thousands simultaneously, each one with an agent in it attempting to solve a problem and eventually reporting back its findings to the central system that dispatched it.
Unfortunately, photorealistic 3D environments use a lot of computation compared to simpler virtual ones, meaning that researchers are limited to a handful of simultaneous instances, slowing learning to a comparative crawl.
The Facebook researchers, led by Dhruv Batra and Eric Wijmans, the former a professor and the latter a PhD student at Georgia Tech, found a way to speed up this process by an order of magnitude or more. And the result is an AI system that can navigate a 3D environment from a starting point to a goal with a 99.9 percent success rate and few mistakes.
Simple navigation is foundational to a working “embodied AI” or robot, which is why the team chose to pursue it without adding any extra difficulties.
“It’s the first task. Forget the question answering, forget the context — can you just get from point A to point B? When the agent has a map this is easy, but with no map it’s an open problem,” said Batra. “Failing at navigation means whatever stack is built on top of it is going to come tumbling down.”
The problem, they found, was that the training systems were spending too much time waiting on slowpokes. Perhaps it’s unfair to call them that — these are AI agents that for whatever reason are simply unable to complete their task quickly.
“It’s not necessarily that they’re learning slowly,” explained Wijmans. “But if you’re simulating navigating a one-bedroom apartment, it’s much easier to do that than navigate a ten-bedroom mansion.”
The central system is designed to wait for all its dispatched agents to complete their virtual tasks and report back. If a single agent takes 10 times longer than the rest, that means there’s a huge amount of wasted time while the system sits around waiting so it can update its information and send out a new batch.
This little explanatory gif shows how when one agent gets stuck, it delays others learning from its experience.
The innovation of the Facebook team is to intelligently cut off these unfortunate laggards before they finish. After a certain amount of time in simulation, they’re done and whatever data they’ve collected gets added to the hoard.
“You have all these workers running, and they’re all doing their thing, and they all talk to each other,” said Wijmans. “One will tell the others, ‘okay, I’m almost done,’ and they’ll all report in on their progress. Any ones that see they’re lagging behind the rest will reduce the amount of work that they do before the big synchronization that happens.”
In this case you can see that each worker stops at the same time and shares simultaneously.
If a machine learning agent could feel bad, I’m sure it would at this point, and indeed that agent does get “punished” by the system in that it doesn’t get as much virtual “reinforcement” as the others. The anthropomorphic terms make this out to be more human than it is — essentially inefficient algorithms or ones placed in difficult circumstances get downgraded in importance. But their contributions are still valuable.
“We leverage all the experience that the workers accumulate, no matter how much, whether it’s a success or failure — we still learn from it,” Wijman explained.
What this means is that there are no wasted cycles where some workers are waiting for others to finish. Bringing in more experience on the task at hand sooner means the next batch of slightly better workers goes out that much earlier, a self-reinforcing cycle that produces serious gains.
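The article doesn’t publish DD-PPO’s internals, so the following is only a toy sketch of the straggler-cutoff idea under assumed parameters (the 60 percent preemption threshold, rollout lengths and difficulty numbers are all invented). Workers step through rollouts of very different lengths; once most have finished, the laggards stop where they are, and their partial experience is still kept for the learning update:

```python
NUM_WORKERS = 8
MAX_STEPS = 100          # nominal rollout length per worker
PREEMPT_FRACTION = 0.6   # once 60% of workers finish, cut the rest off (assumed threshold)

def simulate_rollout(env_difficulty):
    """Yield one experience step at a time; harder environments need more steps."""
    steps_needed = int(MAX_STEPS * env_difficulty)
    for _ in range(steps_needed):
        yield ("obs", "action", "reward")  # placeholder experience tuple

def synchronized_batch(difficulties):
    """Collect experience from all workers, preempting stragglers.

    Each worker advances one step per round; when enough workers have
    finished their rollout, the laggards stop early and their partial
    experience is still added to the batch.
    """
    rollouts = [simulate_rollout(d) for d in difficulties]
    experience = [[] for _ in rollouts]
    done = [False] * len(rollouts)

    while not all(done):
        for i, rollout in enumerate(rollouts):
            if done[i]:
                continue
            try:
                experience[i].append(next(rollout))
            except StopIteration:
                done[i] = True
        # Preemption: if most workers are finished, truncate the rest
        # instead of letting the whole batch wait on them.
        if sum(done) >= PREEMPT_FRACTION * len(rollouts):
            done = [True] * len(rollouts)

    return experience

# Workers in "one-bedroom apartments" finish fast; the "mansion" worker lags.
difficulties = [0.2, 0.3, 0.3, 0.4, 0.4, 0.5, 0.5, 2.0]
batch = synchronized_batch(difficulties)
print([len(steps) for steps in batch])  # → [20, 30, 30, 40, 40, 41, 41, 41]
```

Note how the last worker, which would have run for 200 steps on its own, is cut off at the same point as the rest instead of stalling the entire batch — its 41 steps of experience are kept anyway, which is the whole trick.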
In the experiments they ran, the researchers found that the system, catchily named Decentralized Distributed Proximal Policy Optimization or DD-PPO, appeared to scale almost ideally, with performance increasing nearly linearly to more computing power dedicated to the task. That is to say, increasing the computing power 10x resulted in nearly 10x the results. On the other hand, standard algorithms led to very limited scaling, where 10x or 100x the computing power only results in a small boost to results because of how these sophisticated simulators hamstring themselves.
These efficient methods let the Facebook researchers produce agents that could solve a point-to-point navigation task in a virtual environment within their allotted time with 99.9 percent reliability. They even demonstrated robustness to mistakes, finding a way to quickly recognize they’d taken a wrong turn and go back the other way.
The researchers speculated that the agents had learned to “exploit the structural regularities,” a phrase that in some circumstances means the AI figured out how to cheat. But Wijmans clarified that it’s more likely that the environments they used have some real-world layout rules.
“These are real houses that we digitized, so they’re learning things about how western style houses tend to be laid out,” he said. Just as you wouldn’t expect the kitchen to enter directly into a bedroom, the AI has learned to recognize other patterns and make other “assumptions.”
The next goal is to find a way to let these agents accomplish their task with fewer resources. Each agent had a virtual camera it navigated with that provided it ordinary and depth imagery, but also an infallible coordinate system to tell where it traveled and a compass that always pointed towards the goal. If only it were always so easy! But until this experiment, even with those resources and far more training time, the success rate was considerably lower.
Habitat itself is also getting a fresh coat of paint with some interactivity and customizability.
Habitat as seen through a variety of virtualized vision systems.
“Before these improvements, Habitat was a static universe,” explained Wijmans. “The agent can move and bump into walls, but it can’t open a drawer or knock over a table. We built it this way because we wanted fast, large scale simulation — but if you want to solve tasks like ‘go pick up my laptop from my desk,’ you’d better be able to actually pick up that laptop.”
Habitat now lets users add objects to rooms, apply forces to those objects, check for collisions, and so on. After all, there’s more to real life than disembodied gliding around a frictionless 3D construct.
The improvements should make Habitat a more robust platform for experimentation, and will also make it possible for agents trained in it to directly transfer their learning to the real world — something the team has already begun work on and will publish a paper on soon.
Too often the world of robotics seems to be a solution in search of a problem. Assistive robotics, on the other hand, is among the primary real-world tasks existing technology can seemingly address almost immediately.
The concept for the technology has been around for some time now and has caught on particularly well in places like Japan, where human help simply can’t keep up with the needs of an aging population. At TC Sessions: Robotics+AI at U.C. Berkeley on March 3, we’ll be speaking with a pair of founders developing offerings for precisely these needs.
Vivian Chu is the cofounder and CEO of Diligent Robotics. The company has developed the Moxi robot to help assist with chores and other non-patient tasks, in order to allow caregivers more time to interact with patients. Prior to Diligent, Chu worked at both Google[X] and Honda Research Institute.
Mike Dooley is the cofounder and CEO of Labrador Systems. The Los Angeles-based company recently closed a $2 million seed round to develop assistive robots for the home. Dooley has worked at a number of robotics companies, including, most recently, a stint as the VP of Product and Business Development at iRobot.
Alphabet and Google CEO Sundar Pichai is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.
In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.
Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.
It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.
“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).
“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”
For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)
Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.
Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.
The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.
Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)
The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.
It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)
Last week a leaked draft of the Commission proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.
The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.
While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.
In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.
For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.
“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.
The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.
Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.
You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.
But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”.
And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.
What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot apply.
Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal binds there.)
At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.
But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.
Some far-sighted regulators have called for laws that contain at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.
And a ban would be far harder for platform giants to simply bend to their will.
So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.
— Jonathan Senchyne (@jsench) January 16, 2020
In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.
Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:
Today’s discussion focuses on virtual influencers: fictional characters that build and engage followings of real people over social media. To explore the topic, I spoke with two experienced entrepreneurs:
In a series of three interviews, I’m exploring the startup opportunities in both of these spaces in greater depth. First, Michael Dempsey, a partner at VC firm Compound who has blogged extensively about digital characters, avatars and animation, offers his perspective as an investor hunting for startup opportunities within these spaces.