Hello and welcome back to TechCrunch’s China Roundup, a digest of recent events shaping the Chinese tech landscape and what they mean for people in the rest of the world. Last week, we looked at how Alibaba and Tencent fared in the last quarter; the talk in Silicon Valley and Beijing this week is about Y Combinator’s sudden retreat from China. Later, we’ll also check in on the country’s enduring food delivery war.
The storied Silicon Valley accelerator Y Combinator announced the closure of its China unit just a little over a year after it entered the country. In a vague statement posted on its official blog, the organization said the decision came amid a change in leadership. Sam Altman, its former president who hired legendary artificial intelligence scientist Lu Qi to initiate the China operation, recently left his high-profile role to join research outfit OpenAI. With that, YC has since refocused its energy on supporting “local and international startups from our headquarters in Silicon Valley.”
What went unsaid is the insurmountable challenge that multinationals face when trying to win in a wildly different market. Lu Qi, who held management roles at Baidu and Microsoft before joining YC, was clearly aware of the obstacles when he said in an interview (in Chinese) in May that “multinational corporations in China have almost been wiped out. They almost never successfully land in China.” The prescription, he believes, is to build a local team that’s given full autonomy over decisions about products, operations, and the business.
A former executive at an American company’s China branch, who asked to remain anonymous, argued that Lu Qi’s one-man effort was never going to break the curse that multinationals face in China. “All I can say is: Lu has taken a detour. Going independent is the best decision. When it comes to whether Chinese startups are suited for mentorship, or whether incubators bring value to China, these are separate questions.”
What’s curious is that YC China seemed to have been given a meaningful level of freedom before the split. “Thanks to Sam Altman and the U.S. team, who agreed with my view and supported with much preparation, YC China is not only able to enjoy key resources from YC U.S. but can also operate at a completely independent capacity,” Lu said in the May interview.
Moving on, the old YC China team will join Lu Qi to fund new companies under a newly minted program, MiraclePlus, YC China announced via a WeChat post (in Chinese). The initiative has set up its own fund, legal entity and operational team. The deep ties that Lu has fostered with YC will continue to benefit his new portfolio, which will receive “support” from the YC headquarters, though neither party elaborated on what that means.
The food delivery war in China is still dragging on two years after the major consolidation that left the market with two major players. Meituan, the local services company backed by Tencent, has steadily expanded its share at the expense of Alibaba-owned Ele.me. According to third-party data (in Chinese) from Trustdata, Meituan accounted for 65.1% of China’s overall food delivery orders in the second quarter, up from just under 60% a year ago. Ele.me, on the other hand, has shed nearly 10 percentage points of the market, slumping to 27.4% from 36% a year earlier.
In terms of monetization, Meituan generated 15.6 billion yuan ($2.2 billion) in revenue from its food delivery segment in the quarter ended September 30. That dwarfs Ele.me, which racked up 6.8 billion yuan ($970 million) during the same period. Both are growing north of 30% year-over-year.
This may not be all that surprising given Alibaba has arguably more pressing battles to fight. The e-commerce leader has been consumed by the rise of Pinduoduo, which has launched an assault on China’s low-tier cities with its ultra-cheap products and social-driven online shopping experience. Meituan, on the other hand, is fixated on beefing up its main turf of on-demand neighborhood services after divesting its costly bike-sharing endeavor.
When both contestants have the capital to burn through — as they have demonstrated through heavily subsidizing customers and restaurants — the race comes down to which has greater control of user traffic. Meituan holds a competitive edge thanks to its merger with Dianping, a leading restaurant review app akin to Yelp, back in 2015. Dianping today operates as a standalone brand but its food app is deeply integrated with Meituan’s delivery services. For example, hundreds of millions of users are able to place Meituan-powered food delivery orders straight from Dianping.
Alibaba and Meituan were on friendlier terms just a few years ago. In 2011, the e-commerce giant participated in Meituan’s $50 million Series B financing. Before long, though, the two clashed over control of the company. Alibaba is known to keep a heavy hand on its portfolio companies, taking majority stakes and reshuffling management with its own executives. That’s because Alibaba believes that “only when you operate can you generate synergies and really create exponential value,” as vice chairman Joe Tsai said in an interview. “Whereas if you just make a financial investment, you’re counting an internal rate of return. You’re not creating real value.”
Ele.me lived through that transformation. As of September, Alibaba had reportedly (in Chinese) finished replacing Ele.me’s management with its own appointees. Ele.me’s founder Zhang Xuhao left the company with billions of yuan in cash and joined a venture capital firm (in Chinese).
Meituan’s founder Wang Xing was more intent on independence. In a later financing round, he refused to accept Alibaba’s condition that portfolio companies eschew Tencent investments, a strategy the giant uses to hobble its archrival. That soured the partnership, and Alibaba has since gradually offloaded its Meituan shares, though it held onto a small stake, according to Wang in 2017, “to create trouble” for Meituan going forward.
Deep learning is all the rage these days in enterprise circles, and it isn’t hard to understand why. Whether it is optimizing ad spend, finding new drugs to cure cancer, or just offering better, more intelligent products to customers, machine learning — and particularly deep learning models — has the potential to massively improve a range of products and applications.
The key word, though, is ‘potential.’ While oodles of words have been sprayed across enterprise conferences over the last few years about deep learning, huge roadblocks remain to making these techniques widely available. Deep learning models are highly networked, with dense graphs of nodes that don’t “fit” well with the traditional ways computers process information. Plus, holding all of the information required for a deep learning model can take petabytes of storage and racks upon racks of processors.
There are lots of approaches underway right now to solve this next-generation compute problem, and Cerebras has to be among the most interesting.
As we talked about in August with the announcement of the company’s “Wafer Scale Engine” — the world’s largest silicon chip according to the company — Cerebras’ theory is that the way forward for deep learning is to essentially just get the entire machine learning model to fit on one massive chip. And so the company aimed to go big — really big.
Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, along with its first customer: Argonne National Laboratory.
The CS-1 is a “complete solution” product designed to be added to a data center to handle AI workflows. It includes the Wafer Scale Engine (or WSE, i.e. the actual processing core) plus all the cooling, networking, storage, and other equipment required to operate and integrate the processor into the data center. It’s 26.25 inches tall (15 rack units), and includes 400,000 processing cores, 18 gigabytes of on-chip memory, 9 petabytes per second of on-die memory bandwidth, 12 gigabit ethernet connections to move data in and out of the CS-1 system, and sucks just 20 kilowatts of power.
A cross-section look at the CS-1. Photo via Cerebras
Cerebras claims that the CS-1 delivers the performance of more than 1,000 leading GPUs combined — a claim that TechCrunch hasn’t verified, although we are intently waiting for industry-standard benchmarks in the coming months when testers get their hands on these units.
In addition to the hardware itself, Cerebras also announced the release of a comprehensive software platform that allows developers to use popular ML libraries like TensorFlow and PyTorch to integrate their AI workflows with the CS-1 system.
In designing the system, CEO and co-founder Andrew Feldman said that “we’ve talked to more than 100 customers over the past year and a bit” in order to determine the needs for a new AI system and the software layer that should go on top of it. “What we’ve learned over the years is that you want to meet the software community where they are rather than asking them to move to you.”
I asked Feldman why the company was rebuilding so much of the hardware to power their system, rather than using already existing components. “If you were to build a Ferrari engine and put it in a Toyota, you cannot make a race car,” Feldman analogized. “Putting fast chips in Dell or [other] servers does not make fast compute. What it does is it moves the bottleneck.” Feldman explained that the CS-1 was meant to take the underlying WSE chip and give it the infrastructure required to allow it to perform to its full capability.
A diagram of the Cerebras CS-1 cooling system. Photo via Cerebras.
That infrastructure includes a high-performance water cooling system to keep this massive chip and platform operating at the right temperatures. I asked Feldman why Cerebras chose water, given that water cooling has traditionally been complicated in the data center. He said, “We looked at other technologies — freon. We looked at immersive solutions, we looked at phase-change solutions. And what we found was that water is extraordinary at moving heat.”
A side view of the CS-1 with its water and air cooling systems visible. Photo via Cerebras.
Why, then, make such a massive chip, which, as we discussed back in August, has huge engineering requirements compared to smaller chips that get better yields from wafers? Feldman said that “it massively reduces communication time by using locality.”
In computer science, locality means placing data and compute close together, say within the same cloud region, to minimize delays and processing friction. By having a chip that can theoretically host an entire ML model on it, there’s no need for data to flow through multiple storage clusters or ethernet cables; everything the chip needs to work with is available almost immediately.
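To make the locality argument concrete, here’s a rough back-of-envelope sketch. The 9 petabytes-per-second on-die figure comes from the CS-1 spec list above; the 100-gigabit network link is an illustrative assumption for a typical data center fabric, not a number Cerebras has published here.

```python
# Back-of-envelope: time to move a 1 TB working set at different bandwidths.
# On-die bandwidth (9 PB/s) is from the CS-1 spec list above; the 100 Gb/s
# network link is an assumed figure for a conventional data center fabric.

TERABYTE = 1e12  # bytes

on_die_bandwidth = 9e15        # 9 petabytes/second, on-die (CS-1 spec)
network_bandwidth = 100e9 / 8  # assumed 100 gigabit/s link -> 12.5 GB/s

working_set = 1 * TERABYTE

on_die_seconds = working_set / on_die_bandwidth    # ~0.1 milliseconds
network_seconds = working_set / network_bandwidth  # ~80 seconds

print(f"on-die:  {on_die_seconds * 1e6:.1f} microseconds")
print(f"network: {network_seconds:.1f} seconds")
```

Under these assumptions the gap is five to six orders of magnitude, which is the intuition behind keeping the whole model on one piece of silicon.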
According to a statement from Cerebras and Argonne National Laboratory, Cerebras is helping to power research in “cancer, traumatic brain injury and many other areas important to society today” at the lab. Feldman said that “It was very satisfying that right away customers were using this for things that are important and not for 17-year-old girls to find each other on Instagram or some shit like that.”
(Of course, one hopes that cancer research pays as well as influencer marketing when it comes to the value of deep learning models).
Cerebras itself has grown rapidly, reaching 181 engineers today, according to the company. Feldman says the company is heads down on customer sales and additional product development.
It has certainly been a busy time for startups in the next-generation artificial intelligence workflow space. Graphcore just announced this weekend that its chips are being installed in Microsoft’s Azure cloud, while I covered the funding of NUVIA, a startup led by former lead chip designers from Apple who hope to apply their mobile backgrounds to the extreme power demands these AI chips place on data centers.
Expect ever more announcements and activity in this space as deep learning continues to find new adherents in the enterprise.
Over the past few years, gig economy companies and their treatment of their labor force have become a hot-button issue for public and private sector debate.
At our recent annual Disrupt event in San Francisco, we dug into how founders, companies and the broader community can play a positive role in the gig economy, with help from Derecka Mehrens, an executive director at Working Partnerships USA and co-founder of Silicon Valley Rising — an advocacy campaign focused on fighting for tech worker rights and creating an inclusive tech economy — and Amanda de Cadenet, founder of Girlgaze, a platform that connects advertisers with a network of 200,000 female-identifying and non-binary creatives.
Derecka and Amanda dove deep into where incumbent gig companies have fallen short, what they’re doing to right the ship, whether VC and hyper-growth mentalities fit into a sustainable gig economy, as well as thoughts on Uber’s new ‘Uber Works’ platform and CA AB-5. The following has been lightly edited for length and clarity.
Arman Tabatabai: What was the original promise and value proposition of the gig economy? What went wrong?
Derecka Mehrens: The gig economy exists in a larger context, which is one in which neoliberalism is failing, trickle-down economics is proven wrong, and every day working people aren’t surviving and are looking for something more.
And so you have a situation in which the system we put together to create employment, to create our communities, to build our housing, to give us jobs is dysfunctional. And within that, folks are going to come up with disruptive solutions to pieces of it with a promise in mind to solve a problem. But without a larger solution, that will end up, in our view, exacerbating existing inequalities.
Shuttle startup Via and the city of Cupertino are launching an on-demand public transportation network, the latest example of municipalities trying out alternatives to traditional buses.
The aim is for these on-demand shuttles, which will start with six vans branded with the city of Cupertino logo, to provide more efficient connections to CalTrain and increase access to public transit across the city.
The on-demand shuttle service, which begins October 29, will eventually grow to 10 vehicles, including a wheelchair-accessible one. Avis Budget Group, another partner in the service, will manage the fleet and maintain the vehicles.
In Cupertino, residents and commuters can use the Via app or a phone reservation system to hail a shuttle. The network will span the entire 11-square-mile city with a satellite zone surrounding the Sunnyvale CalTrain station for commuters, Via said Monday. Cupertino Mayor Steven Scharf views the Via on-demand service as the next generation of “what public transportation can be, allowing us to increase mobility while taking a step toward our larger goal of reducing traffic congestion.”
The service, which will run from 6 a.m. to 8 p.m. weekdays and 9 a.m. to 5 p.m. Saturdays, will cost $5 a ride. Users can buy weekly and monthly passes for $17 and $60, respectively.
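For riders weighing the pass options, the break-even math follows directly from the fares quoted above; a quick sketch:

```python
import math

# Fares from Via's Cupertino pricing
single_fare = 5.00    # dollars per ride
weekly_pass = 17.00   # dollars per week
monthly_pass = 60.00  # dollars per month

# Number of rides after which each pass beats paying per ride
weekly_break_even = math.ceil(weekly_pass / single_fare)    # 4 rides per week
monthly_break_even = math.ceil(monthly_pass / single_fare)  # 12 rides per month

print(f"weekly pass pays off at {weekly_break_even} rides")
print(f"monthly pass pays off at {monthly_break_even} rides")
```

In other words, a twice-a-day weekday commuter comes out well ahead with the monthly pass.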
Via has two sides to its business. The company operates consumer-facing shuttles in Chicago, Washington, D.C. and New York.
Via also partners with cities and transportation authorities, giving clients access to its platform to deploy their own shuttles. The city of Cupertino, home to Apple, Seagate Technology and numerous other software and tech-related companies, is one example of this. Austin’s Capital Metropolitan Transportation Authority also uses the Via platform to power the city’s Pickup service. And Via’s platform is used by Arriva Bus UK, a Deutsche Bahn company, for a first- and last-mile service connecting commuters to a high-speed train station in Kent, U.K.
In January, Via announced it was partnering with Los Angeles as part of a pilot program that will give people rides to three busy public transit stations. Via claims it now has more than 80 launched and pending deployments in over 20 countries, providing more than 60 million rides to date.
While city leaders appear increasingly open to experimenting with on-demand shuttles, success in this niche business isn’t guaranteed. For instance, Chariot, which was acquired by Ford, shut down its operations in San Francisco, New York and the UK in early 2019.
Brexit has taken over discourse in the UK and beyond. In the UK alone, it is mentioned over 500 million times a day, in 92 million conversations — and for good reason. While the UK has yet to leave the EU, the impact of Brexit has already rippled through industries all over the world. The UK’s technology sector is no exception. While innovation endures in the midst of Brexit, data reveals that innovative companies are losing the ability to attract people from all over the world and are suffering from a substantial talent leak.
It is no secret that the UK was already experiencing a talent shortage, even without the added pressure created by today’s political landscape. Technology is developing rapidly and demand for tech workers continues to outpace supply, creating a fiercely competitive hiring landscape.
The shortage of available tech talent has already created a deficit that could cost the UK £141 billion in GDP growth by 2028, stifling innovation. Now, with Brexit threatening the UK’s cosmopolitan tech landscape — and the economy at large — we may soon see international tech talent moving elsewhere; in fact, 60% of London businesses think they’ll lose access to tech talent once the UK leaves the EU.
So, how can UK-based companies proactively attract and retain top tech talent to prevent a Brexit brain drain? UK businesses must ensure that their hiring funnels are a top priority and focus on understanding what matters most to tech talent beyond salary, so that they don’t lose out to US tech hubs.