The Catalyst Fund has gained $15 million in new support from JP Morgan and UK Aid and will back 30 fintech startups in Africa, Asia, and Latin America over the next three years.
The Boston-based accelerator provides mentorship and non-equity funding to early-stage tech ventures focused on driving financial inclusion in emerging and frontier markets.
That means connecting people who may not have access to basic financial services — like a bank account, credit or lending options — to those products.
Catalyst Fund will choose an annual cohort of 10 fintech startups in five designated countries: Kenya, Nigeria, South Africa, India and Mexico. Those selected will receive grant funding and go through a six-month accelerator program. Details of the program and the application process are available from Catalyst Fund.
“We’re offering grants of up to $100,000 to early-stage companies, plus venture building support…and really…putting these companies on a path to product market fit,” Catalyst Fund Director Maelis Carraro told TechCrunch.
Program participants gain exposure to the fund’s investor networks and investor advisory committee, which includes Accion and 500 Startups. With the $15 million, Catalyst Fund will also make some additions to its network of global partners that support the accelerator program. Names are forthcoming, but Carraro was able to disclose that India’s Yes Bank and the University of Cambridge are among them.
Catalyst Fund has already accelerated 25 startups through its program. Companies such as African payments venture ChipperCash and SokoWatch — an East African B2B e-commerce startup for informal retailers — have gone on to raise seven-figure rounds and expand to new markets.
Those are the kinds of business moves Catalyst Fund aims to spur with its program. The accelerator was founded in 2016, backed by JP Morgan and the Bill & Melinda Gates Foundation.
Catalyst Fund is now supported and managed by Rockefeller Philanthropy Advisors and global tech consulting firm BFA.
African fintech startups have dominated the accelerator’s portfolio, comprising 56% of its companies through 2019.
That trend continued with Catalyst Fund’s most recent cohort, where five of six fintech ventures — Pesakit, Kwara, Cowrywise, Meerkat and Spoon — are African and one, agtech credit startup Farmart, operates in India.
Africa is a draw because the continent demonstrates some of the greatest need for Catalyst Fund’s financial inclusion mission.
Roughly 66% of Sub-Saharan Africa’s 1 billion people don’t have a bank account, according to World Bank data.
Figures like these have led to the bulk of Africa’s VC funding going to the thousands of fintech startups attempting to scale finance solutions on the continent.
Digital finance in Africa has also caught the attention of notable outside names. Twitter/Square CEO Jack Dorsey recently took an interest in Africa’s cryptocurrency potential, and Wall Street giant Goldman Sachs has invested in fintech-related startups on the continent.
This raises the question of JP Morgan’s interests vis-à-vis Catalyst Fund and Africa’s financial sector.
For now, JP Morgan doesn’t have plans to invest directly in African startups and is taking a long view in its support of the accelerator, according to Colleen Briggs, JP Morgan’s Head of Community Innovation.
“We find financial health and financial inclusion is a…cornerstone for inclusive growth…For us if you care about a stable economy, you have to start with financial inclusion,” said Briggs, who also oversees the Catalyst Fund.
This take aligns with JP Morgan’s 2019 announcement of a $125 million, five-year philanthropic commitment to improve financial health in the U.S. and globally.
More recently, JP Morgan Chase posted some of the strongest financial results on Wall Street, with Q4 profits of $2.9 billion. It will be worth watching whether the company shifts any of its income-generating prowess to business and venture funding activities in Catalyst Fund markets such as Nigeria, India and Mexico.
Epsagon, an Israeli startup that wants to help monitor modern development environments like serverless and containers, announced a $16 million Series A today.
U.S. Venture Partners (USVP), a new investor, led the round. Previous investors Lightspeed Venture Partners and StageOne Ventures also participated. Today’s investment brings the total raised to $20 million, according to the company.
CEO and co-founder Nitzan Shapira says that the company has been expanding its product offerings over the last year, moving beyond its serverless roots to give deeper insights into a number of forms of modern development.
“So we spoke around May when we launched our platform for microservices in the cloud, and that includes containers, serverless and really any kind of workload to build microservices apps. Since then we have had several significant announcements,” Shapira told TechCrunch.
For starters, the company announced support for tracing and metrics for Kubernetes workloads, including native Kubernetes along with managed Kubernetes services like AWS EKS and Google GKE. “A few months ago, we announced our Kubernetes integration. So, if you’re running any Kubernetes workload, you can integrate with Epsagon in one click, and from there you get all the metrics out of the box, then you can set up tracing in a matter of minutes. So that opens up a very big number of use cases for us,” he said.
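For readers unfamiliar with this style of instrumentation, the sketch below shows how decorator-based tracing typically works in Python: a wrapper records a span (a name plus a duration) for every call, and an agent ships those spans to a backend. This is a generic illustration of the concept, not Epsagon’s actual API; all names here are invented for the example.

```python
import functools
import time

# Generic illustration of decorator-based tracing; NOT Epsagon's actual API.
SPANS = []  # a real agent batches spans and ships them to a backend


def traced(name):
    """Wrap a function so every call records a (name, duration) span."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                SPANS.append({"name": name,
                              "duration_s": time.perf_counter() - start})
        return wrapper
    return decorator


@traced("handle_request")
def handle_request(payload):
    return {"ok": True, "echo": payload}


handle_request({"user": 1})
print(SPANS[0]["name"])  # handle_request
```

A production agent adds distributed context propagation on top of this, which is what makes the traces useful across microservices.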
The company also announced support for AWS AppSync, Amazon’s managed GraphQL service. “We are the only provider today to introduce tracing for AppSync and that’s [an area] where people really struggle with the monitoring and troubleshooting of it,” he said.
The company hopes to use the money from today’s investment to expand the product offering further, with support for Microsoft Azure and Google Cloud Platform in the coming year. Shapira also wants to expand the automation of some tasks that have to be configured manually today.
“Our intention is to make the product as automated as possible, so the user will get an amazing experience in a matter of minutes, including advanced monitoring, identifying different problems and troubleshooting,” he said.
Shapira says the company has around 25 employees today, and plans to double headcount in the next year.
NextNav LLC has raised $120 million in equity and debt to commercially deploy an indoor-positioning system that can pinpoint a device’s location — including what floor it’s on — without GPS.
The company has developed what it calls a Metropolitan Beacon System, which can find the location of devices like smartphones, drones, IoT products or even self-driving vehicles in indoor and urban areas where GPS or other satellite location signals cannot be reliably received. Anyone trying to use their phone to hail an Uber or Lyft in the Loop area of Chicago has likely experienced spotty GPS signals.
The MBS infrastructure is essentially bolted onto cellular towers. The positioning system uses a cellular signal rather than the line-of-sight satellite signals GPS relies on. The system focuses on determining the “altitude” of a device, CEO and co-founder Ganesh Pattabiraman told TechCrunch.
GPS can provide the horizontal position of a smartphone or IoT device, and Wi-Fi and Bluetooth can step in to provide that horizontal positioning indoors. NextNav says its MBS adds a vertical, or “Z,” dimension to the positioning system. This means the MBS can determine the floor level of a device in a multi-story building to within less than 3 meters.
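As a toy illustration of what a “Z dimension” buys you, the sketch below maps a vertical position estimate to a floor number, assuming a uniform 3-meter floor height. NextNav’s actual approach and error model are far more sophisticated; every name and number here is invented for the example.

```python
def estimate_floor(device_altitude_m: float,
                   ground_altitude_m: float,
                   floor_height_m: float = 3.0) -> int:
    """Map a vertical position estimate to a floor number.

    Toy model only: assumes every floor is `floor_height_m` tall. A real
    system fuses calibrated sensor data and reports an error bound instead.
    """
    height_above_ground = device_altitude_m - ground_altitude_m
    return max(0, round(height_above_ground / floor_height_m))


# A device 15 m above street level lands on roughly the 5th floor.
print(estimate_floor(device_altitude_m=27.0, ground_altitude_m=12.0))  # 5
```

The point of the 3-meter accuracy figure is exactly this: it is tight enough to resolve adjacent floors, which a horizontal-only fix cannot do.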
It’s the kind of system that can provide emergency services with critical information such as the number of people located on a particular floor. It’s this specific use case that NextNav is betting on. Last year, the Federal Communications Commission issued new 911 emergency requirements for wireless carriers that mandate the ability to determine the vertical position of devices to help responders find people in multi-story buildings.
Today, the MBS operates in the Bay Area and Washington, D.C. The company plans to use this new injection of capital to expand its network to the 50 biggest markets in the U.S., in part to take advantage of the new FCC requirement.
The technology has other applications. For instance, this so-called Z dimension could come in handy for locating drones. Last year, NASA said it will use NextNav’s MBS network as part of its City Environment for Range Testing of Autonomous Integrated Navigation facilities at its Langley Research Center in Hampton, Virginia.
The round was led by funds managed by affiliates of Fortress Investment Group. Existing investors Columbia Capital, Future Fund, Telcom Ventures, funds managed by Goldman Sachs Asset Management, NEA and Oak Investment Partners also participated.
XM Satellite Radio founder Gary Parsons is executive chairman of the Sunnyvale, Calif.-based company.
New Orleans declared a state of emergency and shut down its computers after a cybersecurity event, making it the latest in a string of city and state governments to be attacked by hackers.
Suspicious activity was spotted around 5 a.m. Friday. By 8 a.m., there was an uptick in that activity, which included evidence of phishing attempts and ransomware, Kim LaGrue, the city’s head of IT, said in a press conference. Once the city confirmed it was under attack, servers and computers were shut down.
“While ransomware was detected, there are no requests made to the city of New Orleans at this time, but that is very much a part of our investigation,” New Orleans Mayor LaToya Cantrell said during a press conference.
Numerous local and state governments have been plagued by ransomware, file-encrypting malware that demands money for the decryption key. Pensacola, Florida and Jackson County, Georgia are just two examples from the near-constant stream of ransomware attacks over the past year. Louisiana’s state government was attacked in November, prompting officials to deactivate government websites and other digital services and causing the governor to declare a state of emergency. It was the state’s second declaration related to a ransomware attack in less than six months.
Governments and local authorities are particularly vulnerable: they’re often underfunded and under-resourced, and unable to protect their systems from some of the major threats.
New Orleans, it appears, was somewhat prepared, which officials said was the result of training and the city’s ability to operate without internet access. The investigation is in its early stages, but for now it appears that city employees didn’t interact with or provide credentials or any information to possible attackers, according to officials.
“If there is a positive about being a city that has been touched by disasters and essentially been brought down to zero in the past, it is that our plans and activity from a public safety perspective reflect the fact that we can operate without internet, without city networking,” said Collin Arnold, director of Homeland Security, adding that they’ve gone back to pen and paper for now.
Police, fire and EMS are prepared to work outside of the city’s internet network. Emergency communications are not affected by the cybersecurity incident, according to city officials. However, other services such as scheduling building inspections are being handled manually.
New Orleans’s Real-Time Crime Center does work off the city network. However, the cameras throughout the city record independently, so all of those cameras are still recording regardless of connectivity to the city’s network, Arnold added.
A declaration of a state of emergency has been filed with the Civil District Court in connection with today’s cyber security event.
— The City Of New Orleans (@CityOfNOLA) December 13, 2019
Federal, state and local officials are now involved in an investigation into the security incident.
Google Cloud today announced Transfer Service, a new service for enterprises that want to move their data from on-premises systems to the cloud. This new managed service is meant for large-scale transfers on the scale of billions of files and petabytes of data. It complements similar services from Google that let you ship data to its data centers via a hardware appliance and FedEx, or automate data transfers from SaaS applications to Google’s BigQuery service.
Transfer Service handles all of the hard work of validating your data’s integrity as it moves to the cloud. The agent automatically handles failures and uses as much available bandwidth as it can to reduce transfer times.
To do this, all you have to do is install an agent on your on-premises servers, select the directories you want to copy and let the service do its job. You can then monitor and manage your transfer jobs from the Google Cloud console.
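The integrity validation such a service performs boils down to comparing checksums on both sides of the transfer. Here is a minimal sketch of the idea in Python, using streaming SHA-256 so large files never need to fit in memory; this illustrates the concept, not Google’s actual implementation.

```python
import hashlib
from pathlib import Path


def file_checksum(path, algo="sha256", chunk_size=1 << 20):
    """Stream a file through a hash in 1 MiB chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_transfer(source_path, dest_path):
    """A transfer is considered intact if both digests match."""
    return file_checksum(source_path) == file_checksum(dest_path)
```

In practice the destination digest would be computed by the cloud side and compared against the agent’s local digest, so corrupted or truncated uploads can be detected and retried automatically.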
The obvious use case for this is archiving and disaster recovery. But Google is also targeting this at companies that are looking to lift and shift workloads (and their attached data), as well as analytics and machine learning use cases.
As with most of Google Cloud’s recent product launches, the focus here is squarely on enterprise customers. Google wants to make it easier for them to move their workloads to its cloud, and for most workloads that involves moving lots of data as well.
Hulu today is launching a new kind of ad experience that allows brands to specifically target binge-watchers — that is, viewers who are watching multiple episodes of a favorite program over a long stretch of time. These “binge watch ads” utilize machine learning techniques to predict when a viewer has begun to binge watch a show, then serve up contextually relevant ads that acknowledge a binge is underway. This culminates when the viewer reaches the third episode, at which point they’re informed the next episode is ad-free or presented with a personalized offer from the brand partner.
The binge watch ad concept was first announced at Hulu’s annual NewFronts presentation in May, where it introduced its new shows, features and ad formats to advertisers. The company regularly experiments with new advertising formats designed to better cater to a streaming audience in a less obtrusive way. For example, Hulu already offers “pause ads,” which appear only when the viewer presses the pause button.
Hulu says it made sense to target binge watchers because binging is now such a common way for people to watch their favorite shows. Today, 75% of U.S. consumers say they binge watch, and on Hulu specifically, nearly 50% of ad-supported viewing hours are spent during binge watch sessions. Hulu defines a “binge” as a viewer watching three or more episodes of a series at a given time.
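Hulu’s stated definition — three or more episodes of a series in a session — is simple enough to express directly. The hypothetical sketch below encodes just that definition (the real trigger is an ML prediction, and all names here are invented for illustration):

```python
BINGE_THRESHOLD = 3  # Hulu's stated definition: three or more episodes


def is_binge(session_log, series_id, threshold=BINGE_THRESHOLD):
    """Return True once `series_id` hits `threshold` distinct episodes.

    `session_log` is a list of (series_id, episode_id) pairs for one
    viewing session. Hulu's real system predicts a binge with machine
    learning; this only encodes the definition from the announcement.
    """
    episodes = {ep for sid, ep in session_log if sid == series_id}
    return len(episodes) >= threshold


session = [("show_a", 1), ("show_a", 2), ("show_b", 1), ("show_a", 3)]
print(is_binge(session, "show_a"))  # True: third episode reached
print(is_binge(session, "show_b"))  # False
```

The ML component earns its keep by predicting the binge *before* the third episode starts, so the contextual ads can begin earlier in the session.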
The debut advertisers to capitalize on the new binge watch ad format include Kellogg’s, Maker’s Mark, and Georgia-Pacific, by way of Hulu’s exclusive launch agency partner, Publicis Media.
Kellogg’s will promote Cheez-It Snap’d snacks during its binge ads, while Georgia-Pacific will tout its Sparkle paper towels. Maker’s Mark, of course, will promote its bourbon.
The brands say they were interested in the new format because it gives them a way to reach and reward the consumer during a marathon entertainment session, and because it’s a better fit with how today’s consumers watch TV. Thanks to the rise of ad-free subscription video services like Netflix, viewers are less receptive to disruptive advertising that interrupts their viewing. In fact, they can even sour on a brand when its ad plays repeatedly throughout a viewing session.
Offering brands the ability to sponsor an episode, ad-free, instead creates more positive sentiments among viewers.
Hulu’s focus on developing new ad formats that better fit how today’s consumers watch TV may give it an advantage over rivals. Its ad-supported product is now one of many options for streaming TV — and one that goes up against a number of free services, including The Roku Channel, Amazon’s IMDb TV, Sinclair’s Stirr, Viacom’s Pluto TV, Tubi, YouTube, Vudu’s Movies on Us (Walmart), Plex and others.
The global industry potential of artificial intelligence is well-documented, yet the vision of this AI future is uncertain.
AI and automation trends are generating significant debate among economists and governments, particularly around employment impact and uncertain social outcomes. The mainstream attention is warranted. According to PwC, AI “could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined.”
AI is at a crossroads, and its long-term outlook is still hotly debated. Despite social media giants, automotive companies and numerous other industries investing hundreds of billions of dollars in AI, many automation technologies are not yet directly generating revenue and instead are forecast to become profitable in the coming decades. This creates additional uncertainty about AI’s true market potential. The realistic potential value of AI is unknown, yet, as the technology advances, the ultimate impact could be of great consequence to virtually every economy.
There are many reasons to view AI’s future through an optimistic lens, however: chatbots provide significant evidence of AI’s positive impact on both business growth and employment markets. Today, chatbots are increasingly capable of mimicking human interactions and conversations to assist business-to-business, business-to-consumer, business-to-government, advertising audiences and other diverse groups. The evolution of the cognitive computer science behind conversational chatbots is perhaps one of the best examples of AI technologies driving revenue. Further, chatbot technology shows some of the greatest promise for augmenting, rather than replacing, human workers.
Chatbots are delivering real revenue today for some of the world’s leading financial services (Bank of America), retail (Levi’s) and technology (Zendesk) companies. We’re seeing more consumers taking the next step in a transaction or even making a purchase decision based on conversations with chatbots. Beyond driving sales, chatbots have numerous applications for a wide range of organizations. Nonprofits, NGOs and even political campaigns find value in deploying chatbots to help handle the influx of inquiries from stakeholders and relevant audiences.
Rather than these chatbots replacing human workers, organizations are finding chatbots to be a helpful and value-creating opportunity that frees employees to focus on more strategic tasks. Apple’s Siri, Amazon Alexa and Microsoft Cortana aren’t replacing executive assistants today, but these technologies are all capable of supporting the executive assistant function in the workplace.
Gartner predicts AI augmentation, defined as a “human-centered partnership model of people and AI working together to enhance cognitive performance,” could generate $2.9 trillion of business value by 2021. Many industries see potential for chatbots to augment functions like sales, customer support and IT, enabling workers to create value in more strategic ways. Bain & Company finds chatbots to be among the most notable examples of artificial intelligence and automation in practice: “Companies use AI applications to understand industry trends, manage their workforce, address problems, power chatbots and personalize content to enable self-service.”
Clearly, the implications of scaled, human-like engagement are stunning in their capacity to carry out tasks. A chatbot’s ability to simultaneously hold tens of thousands of conversations — pulling from many millions of data points — is comparable to what a human customer service rep could accomplish in more than 1,000 years of nonstop work. Scaling customer service via AI allows service professionals to focus on big picture and more complex issues, and it provides rich data on customer interactions. We anticipate seeing more companies look to build better customer service experiences through chatbots, as Google and Salesforce announced in April.
From our research and work with leading global companies, it’s clear that enterprises are finding that chatbots create tremendous value while supporting both employment and long-term business growth today. Ultimately, chatbots are on track to showcase some of the most optimistic examples of AI augmentation. Consider three examples:
Google Cloud today announced the launch of its new E2 family of compute instances. These new instances, which are meant for general-purpose workloads, offer a significant cost benefit, with savings of around 31% compared to the current N1 general-purpose instances.
The E2 family runs on standard Intel and AMD chips, but as Google notes, it also uses a custom CPU scheduler “that dynamically maps virtual CPU and memory to physical CPU and memory to maximize utilization.” In addition, the new system is smarter about where it places VMs, with the added flexibility to move them to other hosts as necessary. Google built this scheduler “with significantly better latency guarantees and co-scheduling behavior than Linux’s default scheduler.” It promises sub-microsecond wake-up latencies and faster context switching.
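To make the “dynamic mapping to maximize utilization” idea concrete, here is a toy best-fit placement of VM vCPU requests onto physical hosts. Google’s E2 scheduler is proprietary and rebalances continuously at a much finer grain; this sketch, with invented names, only shows why tighter packing raises utilization.

```python
def place_vms(vm_vcpus, host_free_cpus):
    """Toy best-fit placement: each VM goes to the host with the least
    remaining capacity that still fits it, so hosts are packed tightly.

    `vm_vcpus` maps VM name -> vCPUs requested; `host_free_cpus` maps
    host name -> free physical CPUs (mutated as VMs are placed).
    """
    placement = {}
    for vm, need in vm_vcpus.items():
        candidates = [h for h, free in host_free_cpus.items() if free >= need]
        if not candidates:
            raise RuntimeError(f"no host has {need} free CPUs for {vm}")
        best = min(candidates, key=lambda h: host_free_cpus[h])
        host_free_cpus[best] -= need
        placement[vm] = best
    return placement


hosts = {"host1": 4, "host2": 8}
print(place_vms({"vm_a": 4, "vm_b": 2}, hosts))
# {'vm_a': 'host1', 'vm_b': 'host2'}
```

Packing work onto fewer physical CPUs, and migrating VMs when the packing drifts, is what frees up the capacity behind the discount.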
That gives Google efficiency gains it then passes on to users in the form of these savings. Chances are, we will see similar updates to Google’s other instance families over time.
It’s interesting to note that Google is clearly willing to pit this offering against those of its competitors. “Unlike comparable options from other cloud providers, E2 VMs can sustain high CPU load without artificial throttling or complicated pricing,” the company writes in today’s announcement. “This performance is the result of years of investment in the Compute Engine virtualization stack and dynamic resource management capabilities.” It’ll be interesting to see some benchmarks that pit the E2 family against similar offerings from AWS and Azure.
As usual, Google offers a set of predefined instance configurations, ranging from 2 vCPUs with 8 GB of memory to 16 vCPUs and 128 GB of memory. For very small workloads, Google Cloud is also launching a set of E2-based instances that are similar to the existing f1-micro and g1-small machine types. These feature 2 vCPUs, 1 to 4 GB of RAM and a baseline CPU performance that ranges from the equivalent of 0.125 vCPUs to 0.5 vCPUs.
BMW today announced that it is finally bringing Android Auto to its vehicles, starting in July 2020. With that, it will join Apple’s CarPlay in the company’s vehicles.
The first live demo of Android Auto in a BMW will happen at CES 2020 next month. After that, it will become available as an update to drivers in 20 countries whose cars feature BMW OS 7.0. BMW will only support Android Auto over a wireless connection, though, which somewhat limits its compatibility.
Only two years ago, the company said that it wasn’t interested in supporting Android Auto. At the time, Dieter May, who was then the senior VP for Digital Services and Business Model, explicitly told me that the company wanted to focus on its first-party apps in order to retain full control over the in-car interface and that he wasn’t interested in seeing Android Auto in BMWs. May has since left the company, though it’s also worth noting that Android Auto itself has become significantly more polished over the course of the last two years.
“The Google Assistant on Android Auto makes it easy to get directions, keep in touch and stay productive. Many of our customers have pointed out the importance to them of having Android Auto inside a BMW for using a number of familiar Android smartphone features safely without being distracted from the road, in addition to BMW’s own functions and services,” said Peter Henrich, senior vice president of product management at BMW, in today’s announcement.
With this, BMW will also finally offer support for the Google Assistant after early bets on Alexa, Cortana and the BMW Assistant (which itself is built on top of Microsoft’s AI stack). The company has long said it wants to offer support for all popular digital assistants. For the Google Assistant, the only way to make that work, at least for the time being, is Android Auto.
In BMWs, Android Auto will see integrations into the car’s digital cockpit, in addition to BMW’s Info Display and the heads-up display (for directions). That’s a pretty deep integration, which goes beyond what most car manufacturers feature today.
“We are excited to work with BMW to bring wireless Android Auto to their customers worldwide next year,” said Patrick Brady, vice president of engineering at Google. “The seamless connection from Android smartphones to BMW vehicles allows customers to hit the road faster while maintaining access to all of their favorite apps and services in a safer experience.”
Cloud Foundry, the open-source platform-as-a-service that, with the help of lots of commercial backers, is currently in use by the majority of Fortune 500 companies, launched well before containers, and especially the Kubernetes orchestrator, were a thing. Instead, the project built its own container service, but the rise of Kubernetes obviously created a lot of interest in using it for managing Cloud Foundry’s container implementation. To do so, the organization launched Project Eirini last year; today, it’s officially launching version 1.0, which means it’s ready for production usage.
Eirini/Kubernetes doesn’t replace the old architecture. Instead, for the foreseeable future, the two will operate side by side, with operators deciding which one to use.
The team working on this project shipped a first technical preview earlier this year, and a number of commercial vendors have since started to build their own products around it, shipping it as a beta.
“It’s one of the things where I think Cloud Foundry sometimes comes at things from a different angle,” IBM’s Julz Friedman told me. “Because it’s not about having a piece of technology that other people can build on in order to build a platform. We’re shipping the end thing that people use. So 1.0 for us — we have to have a thing that ticks all those boxes.”
He also noted that Diego, Cloud Foundry’s existing container management system, had been battle-tested over the years and had always been designed to be scalable to run massive multi-tenant clusters.
“If you look at people doing similar things with Kubernetes at the moment,” said Friedman, “they tend to run lots of Kubernetes clusters to scale to that kind of level. And Kubernetes, although it’s going to get there, right now, there are challenges around multi-tenancy, and super big multi-tenant scale.”
But even without being able to get to this massive scale, Friedman argues that you can already get a lot of value even out of a small Kubernetes cluster. Most companies don’t need to run enormous clusters, after all, and they still get the value of Cloud Foundry with the power of Kubernetes underneath it (all without having to write YAML files for their applications).
As Cloud Foundry CTO Chip Childers also noted, once the transition to Eirini gets to the point where the Cloud Foundry community can start applying less effort to its old container engine, those resources can go back to fulfilling the project’s overall mission, which is about providing the best possible developer experience for enterprise developers.
“We’re in this phase in the industry where Kubernetes is the new infrastructure and [Cloud Foundry] has a very battle-tested developer experience around it,” said Childers. “But there’s also really interesting ideas that are out there that are coming from our community, so one of the things that I’ve suggested to the community writ large is, let’s use this time as an opportunity to not just evolve what we have, but also make sure that we’re paying attention to new workflows, new models, and figure out what’s going to provide benefit to that enterprise developer that we’re so focused on — and bring those types of capabilities in.”
Those new capabilities may be around technologies like functions and serverless, for example, though Friedman at least is more focused on Eirini 1.1 for the time being, which will include closing the gaps with what’s currently available in Cloud Foundry’s old scheduler, like Docker image support and support for the Cloud Foundry v3 API.
At its Cloud Next event in London, Google today announced a number of product updates around its managed Anthos platform, as well as Apigee and its Cloud Code tools for building modern applications that can then be deployed to Google Cloud or any Kubernetes cluster.
Anthos is one of the most important recent launches for Google, as it expands the company’s reach outside of Google Cloud and into its customers’ data centers and, increasingly, edge deployments. At today’s event, the company announced that it is taking Anthos Migrate out of beta and into general availability. The overall idea behind Migrate is that it allows enterprises to take their existing, VM-based workloads and convert them into containers. Those machines could come from on-prem environments, AWS, Azure or Google’s Compute Engine, and — once converted — can then run in Anthos GKE, the Kubernetes service that’s part of the platform.
“That really helps customers think about a leapfrog strategy, where they can maintain the existing VMs but benefit from the operational model of Kubernetes,” Google VP of product management Jennifer Lin told me. “So even though you may not get all of the benefits of a cloud-native container day one, what you do get is consistency in the operational paradigm.”
As for Anthos itself, Lin tells me that Google is seeing some good momentum. The company is highlighting a number of customers at today’s event, including Germany’s Kaeser Kompressoren and Turkey’s Denizbank.
Lin noted that a lot of financial institutions are interested in Anthos. “A lot of the need to do data-driven applications, that’s where Kubernetes has really hit that sweet spot because now you have a number of distributed datasets and you need to put a web or mobile front end on [them],” she explained. “You can’t do it as a monolithic app, you really do need to tap into a number of datasets — you need to do real-time analytics and then present it through a web or mobile front end. This really is a sweet spot for us.”
Also new today is the general availability of Cloud Code, Google’s set of extensions for IDEs like Visual Studio Code and IntelliJ that helps developers build, deploy and debug their cloud-native applications more quickly. The idea here, of course, is to remove friction from building containers and deploying them to Kubernetes.
In addition, Apigee hybrid is now generally available. This tool makes it easier for developers and operators to manage their APIs across hybrid and multi-cloud environments, a challenge that is becoming increasingly common for enterprises. It lets teams deploy Apigee’s API runtimes in hybrid environments while still getting the benefits of Apigee’s monitoring and analytics tools in the cloud. Apigee hybrid, of course, can also be deployed to Anthos.
Before the hyperclouds, there were Linode, Mediatemple, HostGator and seemingly a million other hosting services that let you rent affordable virtual private servers for your development needs. And while we don’t talk about them all that much these days, with maybe the exception of Digital Ocean, which disrupted that market a few years ago thanks to its low prices, these services are still doing quite well and are working to adapt their offerings to today’s developers. Unsurprisingly, that often means adding support for containers, which is exactly what Linode is doing with the beta launch of its Linode Kubernetes Engine (LKE) this week.
Like similar services, 16-year-old Linode argues that its offering will help more developers adopt containers, even if they are not experts in managing this kind of infrastructure.
“With the launch of Linode Kubernetes Engine, we’ve democratized Kubernetes for developers, regardless of their resources or expertise,” said Linode CEO and Founder Christopher Aker. “By automating the configuration, node provisioning and management of Kubernetes clusters, we’ve made it faster and easier to ship modern applications. And with realtime autoscaling, free master services, and our intuitive cloud manager interface and open API, developers can bypass the complexities of traditional container management and focus on innovating.”
The service is, of course, integrated with the rest of Linode’s tools, which these days include block and object storage, for example, as well as load balancing, in addition to the usual server options. There’s also support for autoscaling and while advanced users can use tools like Helm charts, Terraform and Rancher, there’s also one-click app support for deploying often-used applications.
Linode’s service is entering a market that already features plenty of other players. But it’s also a growing market with room for lots of different tools that cater to a variety of needs. Tools like Kubernetes now allow companies like Linode to reach beyond their current customer base and offer businesses a platform that allows them to easily develop and test new services on one platform and then put them into production somewhere else — or, of course, put them into production on Linode, too.
Mubi, a 12-year-old on-demand movie streaming and rental service, has arrived in India. Like other streaming giants such as Netflix, Amazon Prime Video, Apple TV+ and Disney’s Hotstar, Mubi is offering its service at a slightly lower price in the key overseas entertainment market.
The London-headquartered firm is offering a three-month subscription in India at Rs 199 (about $2.80), after which it will charge $7 a month or $67 a year (this way, the monthly cost works out to about $5.60). This is substantially lower than the £9.99 monthly subscription fee it charges subscribers in the U.K., and the $10.99 it charges in the U.S.
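The effective monthly cost of the annual plan is simple arithmetic; a quick worked check in Python, using the dollar figures reported above:

```python
# Quick check of Mubi's reported India pricing: $7/month vs. $67/year.
monthly_usd = 7.0
annual_usd = 67.0

effective_monthly = annual_usd / 12          # what the annual plan costs per month
annual_saving = monthly_usd * 12 - annual_usd  # saved vs. paying month to month

print(f"annual plan per month: ${effective_monthly:.2f}")  # $5.58
print(f"saved vs. paying monthly: ${annual_saving:.2f}")   # $17.00
```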
Perhaps the lesser-known streaming service among all the usual names, Mubi has earned a name for itself with a selection of critically acclaimed movies. Unlike other services, Mubi’s catalog is deliberately small: at any moment, it offers only 30 recent and vintage movies. One new title arrives every day and another vanishes at the same time, so no movie stays on the platform longer than 30 days.
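The one-in, one-out rotation described above behaves like a fixed-length queue. A minimal model of that rotation (purely illustrative, not Mubi’s actual system):

```python
from collections import deque

# A catalog capped at 30 titles: each day one film arrives and, once the
# cap is reached, the oldest film drops off, so nothing stays past 30 days.
catalog = deque(maxlen=30)

for day in range(1, 61):          # simulate 60 days of rotation
    catalog.append(f"film-{day}")

print(len(catalog))   # 30
print(catalog[0])     # film-31 (the oldest title still showing)
```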
Mubi, founded in 2007, started out with the ambition of becoming what Netflix is today. But it became apparent that the company couldn’t afford to offer thousands of titles to users, founder and chief executive Efe Cakarel told The New York Times in an interview two years ago.
“In the beginning, we wanted to be like Netflix, but the unit economies of an ‘all-you-can-eat’ site is very capital-intensive,” Cakarel told the Times. “The question becomes, how do you create a compelling experience? If you can’t get 10,000 titles, how about a limited selection?”
Mubi has amassed 9 million subscribers, the company said. (Cakarel will be speaking at Disrupt Berlin next month.)
In an interview last month, Cakarel said most streaming platforms are today focused on the biggest TV series. “But Mubi focuses on finding gems, often going back decades, that very few people know of. We are giving distribution to such films. You may not like a film, but it is there for a reason,” he said.
In India, Mubi has additionally launched a dedicated channel (the first time it has done so for any market) showcasing local movies. Customers in India also have access to the global feed, meaning they can watch 60 titles in a month, a spokesperson said. Additionally, as in other markets, Mubi offers subscribers in India a rental service, allowing them to pick any movie from a selection of a few dozen for $3.50.
For its India business, the company has appointed film producer and Academy Award winner Guneet Monga (known for titles such as Gangs of Wasseypur, The Lunchbox and Masaan) as its content advisor. It also maintains a partnership with Times Bridge, the venture arm of Indian internet services and content conglomerate Times Internet.
“Monga has the sensibility for great cinema. The kind of films she produces, the kind of films she champions are the type of films more people should see. I cannot be more fortunate that she sees our vision in India,” Cakarel said in an interview.
A still from Indian movie “Duvidha”
In a statement, Monga said, “I’m thrilled we have launched a dedicated channel for Indian cinema as it means that film lovers can now watch amazing films like Salaam Bombay and Andaz Apna Apna, alongside globally renowned gems like Moonlight.”
The company has secured deals with local distributors FilmKaravan, NFDC, PVR Pictures, Shemaroo, and Ultra to populate titles in the India section every day. Some of the upcoming titles include Kamal Swaroop’s cult film Om Dar-B-Dar, Kanu Behl’s Binnu Ka Sapna, which premiered at Clermont-Ferrand International Short Film Festival this year, and ghost film Duvidha from Indian art-house master Mani Kaul.
Mubi Go, a service available in the U.K. and Ireland that gives subscribers in those markets a ticket to one movie each week at a local theatre, is not available to customers in India.
NASA has added five companies to the list of vendors that are cleared to bid on contracts for the agency’s Commercial Lunar Payload Services (CLPS) program. This list, which already includes nine companies from a previous selection process, now adds SpaceX, Blue Origin, Ceres Robotics, Sierra Nevada Corporation and Tyvak Nano-Satellite Systems. All of these companies can now place bids on NASA payload delivery to the lunar surface.
This basically means that these companies (which join Astrobotic Technology, Deep Space Systems, Draper Laboratory, Firefly Aerospace, Intuitive Machines, Lockheed Martin Space, Masten Space Systems, Moon Express and OrbitBeyond) can build and fly lunar landers in service of NASA missions. They’ll compete with one another for these contracts, which will involve lunar surface deliveries of resources and supplies to support NASA’s Artemis program missions, the first major goal of which is to return humans to the surface of the Moon by 2024.
These providers are specifically chosen to support delivery of heavier payloads, including “rovers, power sources, science experiments” and more, like the NASA VIPER (Volatiles Investigating Polar Exploration Rover), which is hunting water on the Moon. All of these will be used both to establish a permanent presence on the lunar surface for astronauts to live and work from, and to carry out key research needed to make getting and staying there a viable reality.
NASA has chosen to contract out rides to the Moon instead of running its own as a way to gain cost and speed advantages, and it hopes that these providers will be able to also ferry commercial payloads on the same rides as its own equipment to further defray the overall price tag. The companies will bid on these contracts, worth up to $2.6 billion through November 2028 in total, and NASA will select a vendor for each based on cost, technical feasibility and when they can make it happen.
Blue Origin founder Jeff Bezos announced at this year’s International Astronautical Congress that the company would partner with Draper, as well as Lockheed Martin and Northrop Grumman, on an end-to-end lunar landing system. SpaceX, meanwhile, revealed that it will be targeting a lunar landing of its next spacecraft, the Starship, as early as 2022 in an effort to help set the stage for the 2024-targeted Artemis landing.
The changes to contractual terms will apply globally and to all of Microsoft’s commercial customers — whether public or private sector entities, or large or small businesses, the company said today.
The new contractual provisions will be offered to all public sector and enterprise customers at the beginning of 2020, it adds.
In October Europe’s data protection supervisor warned that preliminary results of an investigation into contractual terms for Microsoft’s cloud services had raised serious concerns about compliance with EU data protection rules and the role of the tech giant as a data processor for EU institutions.
Writing on its EU Policy blog, Julie Brill, Microsoft’s corporate VP for global privacy and regulatory affairs and chief privacy officer, announces the update to privacy provisions in the Online Services Terms (OST) of its commercial cloud contracts — saying it’s making the changes as a result of “feedback we’ve heard from our customers”.
“The changes we are making will provide more transparency for our customers over data processing in the Microsoft cloud,” she writes.
She also says the changes reflect those Microsoft developed in consultation with the Dutch Ministry of Justice and Security — which comprised both amended contractual terms and technical safeguards and settings — after the latter carried out risk assessments of Microsoft’s OST earlier this year and also raised concerns.
Specifically, Microsoft is accepting greater data protection responsibilities for additional processing involved in providing enterprise services, such as account management and financial reporting, per Brill:
Through the OST update we are announcing today we will increase our data protection responsibilities for a subset of processing that Microsoft engages in when we provide enterprise services. In the OST update, we will clarify that Microsoft assumes the role of data controller when we process data for specified administrative and operational purposes incident to providing the cloud services covered by this contractual framework, such as Azure, Office 365, Dynamics and Intune. This subset of data processing serves administrative or operational purposes such as account management; financial reporting; combatting cyberattacks on any Microsoft product or service; and complying with our legal obligations.
Microsoft currently designates itself as a data processor, rather than a data controller, for these administrative and operational functions that can be linked to the provision of commercial cloud services, such as its Azure platform.
But under Europe’s General Data Protection Regulation (GDPR) a data controller has the widest obligations around handling personal data — with responsibility under Article 5 for the lawfulness, fairness and security of the data being processed — and therefore also greater legal risk should it fail to meet that standard.
So, from a regulatory point of view, Microsoft’s current commercial contract structure poses a risk for EU institutions of user data ending up being processed under a lower standard of legal protection than is merited.
The announced switch from data processor to data controller for these administrative and operational purposes should raise the level of legal protection around customer data when Microsoft processes it for those purposes.
For provision of the core cloud services themselves, Microsoft says it will remain the data processor, as it will for improving and addressing bugs or other issues related to the service, ensuring security of the services, and keeping the services up to date.
In August a conference organized jointly by the EU’s data protection supervisor and the Dutch Ministry brought together EU customers of cloud giants to work on a joint response to regulatory risks related to cloud software provision.
Earlier this year the Dutch Ministry obtained contractual changes and technical safeguards and settings in the amended contracts it agreed with Microsoft.
“The only substantive differences in the updated terms [that will roll out globally for all commercial cloud customers] relate to customer-specific changes requested by the Dutch MOJ, which had to be adapted for the broader global customer base,” Brill writes now.
Microsoft’s blog post also points to other global privacy-related changes it says were made following feedback from the Dutch MOJ and others — including a roll out of new privacy tools across major services; specific changes to Office 365 ProPlus; and increased transparency regarding use of diagnostic data.
E-commerce continues to gain momentum — a trend we’ll see played out in the next two months of holiday shopping — and with that comes more consolidation. Today, Elavon, the payments company that is a subsidiary of US Bancorp, announced that it will acquire Sage Pay, one of the bigger payment processors in the UK and Ireland serving small and medium businesses.
Sage Pay’s owner Sage Group said the deal is being done for £232 million in cash (or $300 million at today’s currency rates).
Elavon is active in 10 countries and says it’s the fourth-largest merchant acquirer in Europe, competing against the likes of Global Payments, Vantiv, FIS, Ingenico, Verifone, Stripe, Chase, MasterCard and Visa. The deal is still subject to regulatory approval (both by the Federal Reserve in the US and the Central Bank of Ireland), and if all proceeds, the deal is expected to close in Q2 of 2020.
The acquisition points to a bigger trend underway in e-commerce. The market is very fragmented, not just in terms of the companies who sell goods online but also (and perhaps especially) in terms of the companies that manage the complexities at the back end.
In keeping with that, Sage Pay has a lot of competitors in its specific area of taking and managing the payments process for online retailers and others taking transactions online or via mobile apps. They include some of the same competitors as Elavon’s: newer entrants like Stripe, Adyen, and PayPal (all of which have extensive businesses covering many countries and are each larger than Sage, valued in the billions rather than hundreds of millions of dollars), but also smaller operations like GoCardless as well as more established companies like WorldPay.
This deal is a mark of the consolidation that’s been taking place to gain better economies of scale in a market where individual transactions generally generate incremental revenues.
Sage Pay, in that context, was a relatively small player. Its 2018 revenues were £41 million, and it is profitable, with an operating profit of £15 million; Sage said it expects “to report a statutory profit on disposal of approximately £180 million on completion.”
The deal comes on the heels of Sage Group — which is publicly traded — confirming reports in September that it was looking for strategic alternatives for the payments business. Sage Group for the last couple of years has been divesting payments and banking assets to focus more on accounting, people and payroll software, which it sells through a SaaS model.
“Our vision of becoming a great SaaS company for customers and colleagues alike means we will continue to focus on serving small and medium sized customers with subscription software solutions for Accounting & Financials and People & Payroll,” said Steve Hare, Sage’s CEO, in a statement. “Payments and banking services remain an integral part of Sage’s value proposition and we will deliver them through our growing network of partnerships, including Elavon.”
Elavon, as the consolidator here, was itself acquired by US Bancorp way back in 2001 for $2.1 billion. Currently it is active in 10 countries, but in that same vein of consolidation to improve economies of scale on the technical side, and to aggregate more incremental transactions on the financial side, Elavon’s main objective is to increase its overall share of the e-commerce market in Europe, specifically by expanding with Sage Pay further into the UK and Ireland.
“We are a customer-focused company that is helping businesses succeed in a global marketplace that is changing rapidly,” said Hannah Fitzsimons, president and general manager of Elavon Merchant Services, Europe. “This acquisition brings tremendous talent and leading technology to Elavon, which can be leveraged across the European market.”
Africa-focused fintech startup OPay has raised a $120 million Series B round backed by Chinese investors.
Located in Lagos and founded by consumer internet company Opera, OPay will use the funds to scale in Nigeria and expand its payments product to Kenya, Ghana and South Africa — Opera’s CFO Frode Jacobsen confirmed to TechCrunch.
OPay’s $120 million round comes after the startup raised $50 million in June. It also follows Visa’s $200 million investment in Nigerian fintech company Interswitch and a $40 million raise by Lagos-based payments startup PalmPay — led by China’s Transsion.
There are a couple of quick takeaways. Nigeria has become the epicenter for fintech VC and expansion in Africa. And Chinese investors have made an unmistakable pivot to African tech.
Opera’s activity on the continent represents both trends. The Norway-based, majority Chinese-owned company founded OPay in 2018 on the popularity of its web browser in Africa.
Opera’s web browser has ranked No. 2 in usage in Africa, after Chrome, for the last four years.
The company has built a hefty suite of internet-based commercial products in Nigeria around OPay’s financial utility. These include motorcycle ride-hail app ORide, OFood delivery service and OLeads SME marketing and advertising vertical.
“OPay will facilitate the people in Nigeria, Ghana, South Africa, Kenya and other African countries with the best fintech ecosystem. We see ourselves as a key contributor to…helping local businesses…thrive from…digital business models,” Opera CEO and OPay Chairman Yahui Zhou, said in a statement.
Opera CFO Frode Jacobsen shed additional light on how OPay will deploy the $120 million across Opera’s Africa network. OPay looks to capture volume around bill payments and airtime purchases, but not necessarily as priority. “That’s not something you do every day. We want to focus our services on things that have high-frequency usage,” said Jacobsen.
Those include transportation services, food services and other types of daily activities, he explained. Jacobsen also noted OPay will use the $120 million to enter more countries in Africa than those disclosed.
Since its Series A raise, OPay in Nigeria has scaled to 140,000 active agents and $10 million in daily transaction volume, according to company stats.
Beyond standing out as another huge funding round, OPay’s $120 million VC raise has significance for Africa’s tech ecosystem on multiple levels.
It marks 2019 as the year Chinese investors went all in on the continent’s startup scene. OPay, PalmPay and East African trucking logistics company Lori Systems have raised a combined $240 million from 15 different Chinese actors in a span of months.
OPay’s funding and expansion plans are also a harbinger of fierce, cross-border fintech competition in Africa’s digital finance space. Parallel events to watch for include Interswitch’s imminent IPO, e-commerce venture Jumia’s shift to digital finance and WhatsApp’s likely entry into African payments.
The continent’s 1.2 billion people represent the largest share of the world’s unbanked and underbanked population — which makes fintech Africa’s most promising digital sector. But it’s becoming a notably crowded sector, where startup attrition and failure will certainly come into play.
And not to be overlooked is how OPay’s capital raise moves Opera toward becoming a multi-service commercial internet platform in Africa.
This places OPay and its Opera-supported suite of products on a competitive footing with other ride-hail, food delivery and payments startups across the continent. That means inevitable competition between Opera and Africa’s largest multi-service internet company, Jumia.
Hello and welcome back to TechCrunch’s China Roundup, a digest of recent events shaping the Chinese tech landscape and what they mean to people in the rest of the world. The earnings season is here. This week, long-time archrivals in the Chinese internet battlefield — Alibaba and Tencent — made some big revelations about their future. First off, let’s look at Alibaba’s long-awaited secondary listing and annual shopping bonanza.
It’s that time of year. On November 11, Alibaba announced it generated $38.4 billion worth of gross merchandise value during the annual Singles’ Day shopping festival, otherwise known as Double 11. It smashed the record and grabbed local headlines again, but the event means little beyond a big publicity win for the company and a showcase of the art of drumming up sales.
GMV is often used interchangeably with sales in e-commerce. That’s problematic because the number takes into account all transactions, including refunded items, and it’s by no means reflective of a company’s actual revenue. There are numerous ways to juice the figure, too, as I wrote last year. Presales began days in advance, incentives were doled out to spur last-minute orders and no refunds could be processed until November 12.
Don’t be fooled by the big numbers (yes, $38B GMV is BIG), the major growth times are over for Alibaba’s Singles’ Day
Today it functions as a massive marketing/user-acquisition event with generous subsidies — in other words: loss-making not profitable pic.twitter.com/S4Wzmudgkz
— Jon Russell (@jonrussell) November 12, 2019
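The gap between GMV and what a marketplace actually earns is easy to illustrate with toy numbers (everything below is hypothetical, not Alibaba’s figures):

```python
# Hypothetical marketplace orders: GMV counts all of them, refunds included.
orders = [
    {"value": 120.0, "refunded": False},
    {"value": 80.0,  "refunded": True},   # refunded, but still boosts GMV
    {"value": 200.0, "refunded": False},
]
take_rate = 0.05  # assumed platform commission, purely for illustration

gmv = sum(o["value"] for o in orders)
net_sales = sum(o["value"] for o in orders if not o["refunded"])
platform_revenue = net_sales * take_rate

print(gmv, net_sales, platform_revenue)  # 400.0 320.0 16.0
```

The headline figure (GMV) is 25% higher than net sales here, and the platform’s own revenue is a small slice of either, which is why GMV records say little about profitability.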
Even Jiang Fan, the boss of Alibaba’s e-commerce business and the youngest among Alibaba’s 38 most important decision-makers, downplayed the number: “I never worry about transaction volumes. Numbers don’t matter. What’s most important is making Singles’ Day fun and turning it into a real festival.”
Indeed, Alibaba put together another year of what’s equivalent to the Super Bowl halftime show. Taylor Swift and other international big names graced the stage as the evening gala was live-streamed and watched by millions across the globe.
.@taylorswift13 performing at the 11.11 Global Shopping Festival Countdown Gala last night in Shanghai. The gala was produced by Youku, Alibaba’s video streaming platform. For more coverage on 11.11, check out our dedicated #Double11 page: https://t.co/VeupwMr5WT pic.twitter.com/suLvCd4Y3m
Alibaba is going ahead with its secondary listing in Hong Kong on the heels of reports that it could delay the sale due to ongoing political unrest in the city. The company is cash-rich, but listing closer to its customers can potentially ease some of the pressure arising from a new era of volatile U.S.-China relations.
Alibaba is issuing 500 million new shares with an additional over-allotment option of 75 million shares for international underwriters, it said in a company blog. Reports have put the size of its offering between $10 billion and $15 billion, down from the earlier rumored $20 billion.
The giant has long expressed its intention to come home. In 2014, the e-commerce behemoth missed out on Hong Kong because the local exchange didn’t allow dual-class structures, a type of shareholding arrangement common among technology companies that grants different voting rights to different classes of stock. The company instead went public in New York, raising $25 billion in the largest initial public offering in history.
“When Alibaba Group went public in 2014, we missed out on Hong Kong with regret. Hong Kong is one of the world’s most important financial centers. Over the last few years, there have been many encouraging reforms in Hong Kong’s capital market. During this time of ongoing change, we continue to believe that the future of Hong Kong remains bright. We hope we can contribute, in our small way, and participate in the future of Hong Kong,” said chairman and chief executive Daniel Zhang in a statement.
Missing out on Alibaba had also been a source of remorse for the Stock Exchange of Hong Kong. Charles Li, chief executive of the HKEX, admitted that losing Alibaba to New York had compelled the bourse to reform. The HKEX has since added dual-class shares and attracted Chinese tech upstarts such as smartphone maker Xiaomi and local services platform Meituan Dianping.
Content and social networks have been the major revenue drivers for Tencent since its early years, but new initiatives are starting to gain ground. In the third quarter ended September 30, Tencent’s “fintech and business services” unit, which includes its payments and cloud services, became the firm’s second-largest sales avenue trailing the long-time cash cow of value-added services, essentially virtual items sold in games and social networks.
Payments, in particular, accounted for much of the quarterly growth thanks to increased daily active consumers and number of transactions per user. That’s good news for the company, which said back in 2016 that financial services would be its new focus (in Chinese) alongside content and social. The need to diversify became more salient in recent times as Tencent faces stricter government controls over the gaming sector and intense rivalry from ByteDance, the new darling of advertisers and owner of TikTok and Douyin.
Tencent also broke out revenue for cloud services for the first time. The unit grew 80% year-on-year to rake in 4.7 billion yuan ($670 million) and received a great push as the company pivoted to serve more industrial players and enterprises. Alibaba’s cloud business still leads the Chinese market by a huge margin, with revenue topping $1.3 billion during the September quarter.
Luckin Coffee, the Chinese startup that began as a Starbucks challenger, is starting to look more like a convenience store chain with delivery capabilities as it continues to increase store density (a combination of seated cafes, pickup stands and delivery kitchens) and widen its product offerings to include a growing snack selection. Though the company’s bottom line remained in the red for the quarter, store-level operating profit swung to $26.1 million from a loss in the prior-year quarter. Some 30 million customers have purchased from Luckin, up 413.4% from 6 million a year ago.
Minecraft is on the brink of 300 million registered users in China, its local publisher Netease announced at an event this week. That’s a lot of players, but not totally unreasonable given the game is free-to-play in the country with in-game purchases, so users can easily own multiple accounts. Outside China, the game has sold over 180 million paid copies, according to gaming analyst Daniel Ahmed from Niko Partners.
Xiaomi founder Lei Jun is returning a huge favor by backing a long-time friend. Xpeng Motors, the Chinese electric vehicle startup financed by Alibaba and Foxconn, has received $400 million from a group of backers that weren’t identified, except for Xiaomi, which became its strategic investor. The marriage would allow Xpeng cars to tap Xiaomi’s growing ecosystem of smart devices, but the relationship dates back further. Lei was an early investor in UCWeb, a browser company founded by Xpeng chief executive He Xiaopeng and acquired by Alibaba in 2014. A day after Xiaomi began trading in Hong Kong in mid-2018, He wrote on his WeChat feed that he had bought $100 million worth of Xiaomi shares (in Chinese) in support of his old friend.
One of the bigger trends in enterprise software has been the emergence of startups building tools to make the benefits of artificial intelligence more accessible to non-tech companies. Today, one that has built a platform to apply the power of machine learning and natural language processing to masses of unstructured documents has closed a round of funding as it finds strong demand for its approach.
Eigen Technologies, a London-based startup whose machine learning engine helps banks and other businesses that need to extract information and insights from large and complex documents like contracts, is today announcing that it has raised $37 million in funding, a Series B that values the company at around $150 million – $180 million.
Eigen today works primarily in the financial sector — its offices are smack in the middle of The City, London’s financial center — but the plan is to use the funding to continue expanding the scope of the platform to cover other verticals such as insurance and healthcare. Both are big areas that deal in large, wordy documentation that is often inconsistent in how it’s presented, full of essential fine print, typically a strain on an organisation’s resources to handle correctly, and often a disaster if it is not.
The focus up to now on banks and other financial businesses has gained a lot of traction. Eigen says its customer base now includes 25% of the world’s G-SIB institutions (that is, the world’s biggest banks), along with others that work closely with them, like Allen & Overy and Deloitte. Since June 2018 (when it closed its Series A round), Eigen has seen recurring revenues grow sixfold while headcount — mostly data scientists and engineers — has doubled. While Eigen doesn’t disclose specific financials, you can see the growth trajectory that contributed to the company’s valuation.
The basic idea behind Eigen is that it focuses on what co-founder and CEO Lewis Liu describes as “small data”. The company has devised a way to “teach” an AI to read a specific kind of document — say, a loan contract — by looking at just a couple of examples and training on them. The whole process is relatively easy for a non-technical person: you figure out what you want to look for and analyse, find the examples using basic search in two or three documents, and create the template, which can then be used across hundreds or thousands of documents of the same kind (in this case, a loan contract).
Eigen’s work is notable for two reasons. First, training a machine learning system typically requires hundreds, thousands, even tens of thousands of examples to “teach” it before it can make decisions that you hope will mimic those of a human. Eigen requires just a couple of examples (hence the “small data” approach).
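To make the idea concrete, here is a deliberately simplified sketch of few-shot field extraction. This is purely illustrative and not Eigen’s actual technology: it learns the fixed text surrounding a labeled span from two examples, whereas a real system would generalise far more robustly.

```python
import re

# Two labeled examples: (document text, the field value we want extracted).
examples = [
    ("The loan principal is USD 250,000 payable in full.", "250,000"),
    ("The loan principal is USD 1,500,000 payable in full.", "1,500,000"),
]

def common_prefix(strings):
    # Longest prefix shared by all strings.
    s1, s2 = min(strings), max(strings)
    for i, ch in enumerate(s1):
        if ch != s2[i]:
            return s1[:i]
    return s1

def common_suffix(strings):
    # Longest suffix shared by all strings (prefix of the reversed strings).
    return common_prefix([s[::-1] for s in strings])[::-1]

def learn_pattern(examples):
    # Collect the text before and after the labeled span in each example,
    # then use the shared context as anchors for a regex template.
    prefixes, suffixes = [], []
    for text, span in examples:
        i = text.index(span)
        prefixes.append(text[:i])
        suffixes.append(text[i + len(span):])
    anchor_before = common_suffix(prefixes)
    anchor_after = common_prefix(suffixes)
    return re.compile(re.escape(anchor_before) + r"(.+?)" + re.escape(anchor_after))

pattern = learn_pattern(examples)
match = pattern.search("The loan principal is USD 975,000 payable in full.")
print(match.group(1))  # 975,000
```

Two examples suffice here only because the toy documents share identical boilerplate; the point is the workflow (label a couple of spans, derive a template, apply it at scale), not the technique itself.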
Second, an industry like finance has many pieces of sensitive data (either because it’s personal data, or because it’s proprietary to a company and its business), so companies are wary of working with AI vendors that want to “anonymise” and ingest that data. Companies simply don’t want to do that. Eigen’s system essentially works only on what a company provides, and that data stays with the company.
Eigen was founded in 2014 by Dr. Lewis Z. Liu (CEO) and Jonathan Feuer (a managing partner at CVC Capital Partners who is the company’s chairman), but its earliest origins go back 15 years earlier, when Liu — a first-generation immigrant who grew up in the US — was working as a “data entry monkey” (his words) at a tire manufacturing plant in New Jersey, where he lived, ahead of starting university at Harvard.
A natural computing whizz who found himself building his own games when his parents refused to buy him a games console, he figured out that the many pages of printouts that he was reading and re-entering into a different computing system could be sped up with a computer program linking up the two. “I put myself out of a job,” he joked.
His educational life epitomises the kind of lateral thinking that often produces the most interesting ideas. Liu went on to Harvard to study not computer science, but physics and art. Doing a double major required a thesis merging the two disciplines, and Liu built “electrodynamic equations that composed graphical structures on the fly” — basically generating art using algorithms — which he then turned into a “Turing test” to see whether people could distinguish actual artwork from his program’s output. Distil this, and Liu was still thinking about patterns in analog material that could be re-created using math.
Then came years at McKinsey in London (how he arrived on these shores) during the financial crisis, where people either intentionally or mistakenly overlooking crucial text-based data produced stark and catastrophic results. “I would say the problem that we eventually started to solve for at Eigen became more tangible,” Liu said.
Then came a physics PhD at Oxford where Liu worked on X-ray lasers that could be used to bring down the complexity and cost of making microchips, cancer treatments and other applications.
While Eigen doesn’t actually use lasers, some of the mathematical equations that Liu came up with for these have also become a part of Eigen’s approach.
“The whole idea [for my PhD] was, ‘how do we make this cheaper and more scalable?'” he said. “We built a new class of X-ray laser apparatus, and we realised the same equations could be used in pattern matching algorithms, specifically around sequential patterns. And out of that, and my existing corporate relationships, that’s how Eigen started.”
Five years on, Eigen has added a lot more into the platform beyond what came from Liu’s original ideas. There are more data scientists and engineers building the engine around the basic idea, and customising it to work with more sectors beyond finance.
There are a number of AI companies building tools for non-technical business end-users, and one of the areas that comes close to what Eigen is doing is robotic process automation, or RPA. Liu notes that while this is an important area, it’s more about reading forms more readily and providing insights from them. Eigen’s focus is more on unstructured data, and on the ability to parse it quickly and securely using just a few samples.
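Eigen’s actual methods aren’t public, but the general idea of extracting a field from unstructured text after seeing only a handful of labeled samples can be illustrated with a toy sketch. Everything below (function names, the regex heuristic, the sample contracts) is hypothetical and vastly simplified, not Eigen’s implementation:

```python
import re

def learn_pattern(examples):
    """From a few (document, answer) pairs, collect the words that
    immediately precede each labeled answer and build a regex
    that generalises to unseen documents."""
    prefixes = set()
    for doc, answer in examples:
        idx = doc.find(answer)
        # take the two words just before the labeled answer span
        before = doc[:idx].split()[-2:]
        prefixes.add(" ".join(before))
    alternation = "|".join(re.escape(p) for p in prefixes)
    return re.compile(r"(?:%s)\s+([\w,. ]+?)(?:\.|$)" % alternation)

# two labeled snippets stand in for an analyst's annotations
samples = [
    ("The termination date is 1 March 2021.", "1 March 2021"),
    ("Note that the termination date is 30 June 2022.", "30 June 2022"),
]
pattern = learn_pattern(samples)

# apply the learned pattern to a document the system has never seen
match = pattern.search("As agreed, the termination date is 15 May 2023.")
print(match.group(1))  # -> 15 May 2023
```

A real system would of course use learned language models rather than brittle surface patterns, but the workflow is the same: a domain expert labels a few samples, and the system generalises from them without the data ever leaving the company.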
Liu points to companies like IBM (with Watson) as general competitors, while startups like Luminance take a similar approach to Eigen’s, addressing the issue of parsing unstructured data in a specific sector (in Luminance’s case, currently, the legal profession).
Stephen Nundy, a partner and the CTO of Lakestar, said that he first came into contact with Eigen when he was at Goldman Sachs, where he was a managing director overseeing technology, and the bank engaged it for work.
“To see what these guys can deliver, it’s to be applauded,” he said. “They’re not just picking out names and addresses. We’re talking deep, semantic understanding. Other vendors are trying to be everything to everybody, but Eigen has found market fit in financial services use cases, and it stands up against the competition. You can see when a winner is breaking away from the pack and it’s a great signal for the future.”
Google is the latest big tech company to make a move into banking and personal financial services: The company is gearing up to offer checking accounts to consumers, as first reported by The Wall Street Journal, starting as early as next year. Google is calling the project “Cache,” and it’ll partner with banks and credit unions to offer the checking accounts, with the banks handling all financial and compliance activities related to the accounts.
Google’s Caesar Sengupta spoke to the WSJ about the new initiative, and made clear that Google will seek to put its financial institution partners much more front-and-center for its customers than other tech companies have perhaps done with their financial products. Apple works with Goldman Sachs on its Apple Card credit product, for instance, but the credit card is presented primarily as an Apple product.
So why even bother getting into this game if it’s leaving a lot of the actual banking to traditional financial institutions? Well, Google obviously stands to gain a lot of valuable information and insight on customer behavior with access to their checking accounts, which for many people offer a good picture of overall day-to-day financial life. Google says it also intends to offer product advantages for both consumers and banks, including things like loyalty programs, on top of the basic financial services. It’s also still considering whether or not it’ll charge service fees, per Sengupta — not doing so would definitely be an advantage over most existing checking accounts available.
Google already offers Google Pay, and its Google Wallet product has hosted some features beyond simple payments tracking, including the ability to send money between individuals. Meanwhile, rivals including Apple have also introduced payment products, and Apple of course recently expanded into the credit market with Apple Card. Facebook also introduced its own digital payment product earlier this week, and earlier this year announced its intent to build its own digital currency called ‘Libra’ along with partners.
The initial financial partners that Google is working with include Citigroup and Stanford Federal Credit Union, and their motivation, per the WSJ piece, appears to be attracting younger, more digital-savvy customers who are increasingly looking to handle more of their lives through online tools. Per Sengupta’s comments, they’ll also benefit from Google’s ability to work with large sets of data and turn them into value-add products, but the Google exec also said the tech company doesn’t use Google Pay data for advertising, nor does it share that data with advertisers. Still, convincing people to give Google access to this potentially sensitive area of their lives might be an uphill battle, especially given the current political and social climate around big tech.