Hello and welcome back to TechCrunch’s China roundup, a digest of recent events shaping the Chinese tech landscape and what they mean to people in the rest of the world.
This week, the gaming industry again became a target of Beijing, which imposed arguably the world’s strictest limits on underage players. On the other hand, China’s tech titans are hastily answering Beijing’s call for them to take on more social responsibilities and take a break from unfettered expansion.
China dropped a bombshell on the country’s young gamers. As of September 1, users under the age of 18 are limited to a single hour of online gaming, between 8 and 9 p.m. on Fridays, Saturdays and Sundays.
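As an illustration, the stated window reduces to a trivial check: a minor may play only when it is Friday, Saturday or Sunday and the clock reads the 8 p.m. hour. Here is a minimal Python sketch of that rule (the function and constant names are my own invention; the real enforcement is done server-side by publishers against verified national IDs):

```python
from datetime import datetime

# Playtime rule as described: minors may only play 8-9 p.m.
# on Fridays, Saturdays and Sundays.
ALLOWED_WEEKDAYS = {4, 5, 6}  # Monday is 0, so Fri=4, Sat=5, Sun=6

def minor_may_play(now: datetime) -> bool:
    """Return True if the timestamp falls inside the permitted window."""
    return now.weekday() in ALLOWED_WEEKDAYS and now.hour == 20

# Friday Sept 3, 2021 at 8:30 p.m. is allowed; 9:30 p.m. is not.
print(minor_may_play(datetime(2021, 9, 3, 20, 30)))  # True
print(minor_may_play(datetime(2021, 9, 3, 21, 30)))  # False
```

In practice the check is tied to real-name registration rather than the device clock, which is exactly why the workarounds discussed below exist.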
The stringent rule adds to already tightening gaming policies for minors, as the government blames video games for causing myopia, as well as deteriorating mental and physical health. Remember China recently announced a suite of restrictions on after-school tutoring? The joke going around is that working parents will have an even harder time keeping their kids occupied.
A few aspects of the new regulation are worth unpacking. For one, the new rule was instituted by the National Press and Publication Administration (NPPA), the regulatory body that approves gaming titles in China and that in 2019 froze the approval process for nine months, sending shares of gaming companies such as Tencent tumbling.
It’s curious that the directive on playtime came from the NPPA, which reviews gaming content and issues publishing licenses. Like other industries in China, video games are subject to regulations by multiple authorities: NPPA; the Cyberspace Administration of China (CAC), the country’s top internet watchdog; and the Ministry of Industry and Information Technology, which oversees the country’s industrial standards and telecommunications infrastructure.
As analysts have long observed, the mighty CAC, which sits under the Central Cyberspace Affairs Commission chaired by President Xi Jinping, has run into “bureaucratic struggles” with other ministries unwilling to relinquish power. This may well be the case for regulating the lucrative gaming industry.
For Tencent and other major gaming companies, the impact of the new rule on their balance sheet may be trifling. Following the news, several listed Chinese gaming firms, including NetEase and 37 Games, hurried to announce that underage players made up less than 1% of their gaming revenues.
Tencent saw the change coming and disclosed in its Q2 earnings that “under-16-year-olds accounted for only 2.6% of its China-based grossing receipts for games and under-12-year-olds accounted for just 0.3%.”
These numbers may not reflect the reality, as minors have long found ways around gaming restrictions, such as using an adult’s ID for user registration (just as the previous generation borrowed IDs from adult friends to sneak into internet cafes). Tencent and other gaming firms have vowed to clamp down on these workarounds, forcing kids to seek even more sophisticated tricks, including using VPNs to access foreign versions of gaming titles. The cat and mouse game continues.
While China curtails the power of its tech behemoths, it has also pressured them to take on more social responsibilities, which include respecting workers’ rights in the gig economy.
Last week, the Supreme People’s Court of China declared the “996” schedule, working 9 a.m. to 9 p.m. six days a week, illegal. The declaration followed years of worker resistance against the tech industry’s burnout culture, which has manifested in actions like a GitHub project listing companies practicing “996.”
Meanwhile, hardworking and compliant employees have often been cited as a competitive advantage of China’s tech industry. It’s in part why some Silicon Valley companies, especially those run by people familiar with China, often set up branches in the country to tap its pool of tech talent.
The days when overwork was glorified and tolerated seem to be drawing to a close. Both ByteDance and its short video rival Kuaishou recently scrapped their weekend overtime policies.
Similarly, Meituan announced that it will introduce compulsory break time for its food delivery riders. The on-demand services giant has been slammed for “inhumane” algorithms that force riders into brutal hours or dangerous driving.
In groundbreaking moves, ride-hailing giant Didi and Alibaba’s e-commerce rival JD.com have set up unions for their staff, though it’s still unclear what tangible impact the organizations will have on safeguarding employee rights.
Tencent and Alibaba have also acted. On August 17, President Xi Jinping delivered a speech calling for “common prosperity,” which caught widespread attention from the country’s ultra-rich:
“As China marches towards its second centenary goal, the focus of promoting people’s well-being should be put on boosting common prosperity to strengthen the foundation for the Party’s long-term governance.”
This week, both Tencent and Alibaba pledged to invest 100 billion yuan ($15.5 billion) in support of “common prosperity.” The purposes of their funds are similar and align neatly with Beijing’s national development goals, from growing the rural economy to improving the healthcare system.
The FBI has warned that the Chinese government is using both in-person and digital techniques to intimidate, silence and harass U.S.-based Uyghur Muslims.
The Chinese government has long been accused of human rights abuses over its treatment of the Uyghur population and other mostly Muslim ethnic groups in China’s Xinjiang region. More than a million Uyghurs have been detained in internment camps, according to a United Nations human rights committee, and many other Uyghurs have been targeted and hacked by state-backed cyberattacks. China has repeatedly denied the claims.
In recent months, the Chinese government has become increasingly aggressive in its efforts to shut down foreign critics, including those based in the United States and other Western democracies. These efforts have now caught the attention of the FBI.
In an unclassified bulletin, the FBI warned that officials are using transnational repression — a term that refers to foreign government transgression of national borders through physical and digital means to intimidate or silence members of diaspora and exile communities — in an attempt to compel compliance from U.S.-based Uyghurs and other Chinese refugees and dissidents, including Tibetans, Falun Gong members, and Taiwan and Hong Kong activists.
“Threatened consequences for non-compliance routinely include detainment of a U.S.-based person’s family or friends in China, seizure of China-based assets, sustained digital and in-person harassment, Chinese government attempts to force repatriation, computer hacking and digital attacks, and false representation online,” the FBI bulletin warns.
The bulletin was reported by video surveillance news site IPVM.
The FBI highlighted four instances of U.S.-based individuals facing harassment. In one case from June, the Chinese government imprisoned dozens of family members of six U.S.-based Uyghur journalists in retaliation for their continued reporting for Radio Free Asia, the U.S. government-funded news service, on China’s repression of Uyghurs. The bulletin said that between 2019 and March 2021, Chinese officials used WeChat to call and text a U.S.-based Uyghur to discourage her from publicly discussing Uyghur mistreatment. Members of this person’s family were later detained in Xinjiang detention camps.
“The Chinese government continues to conduct this activity, even as the U.S. government has sanctioned Chinese officials and increased public and diplomatic messaging to counter China’s human rights and democratic abuses in Xinjiang over the past year,” the FBI states. “This transnational repression activity violates US laws and individual rights.”
The FBI has urged U.S. law enforcement personnel, as well as members of the public, to report any suspected incidents of Chinese government harassment.
Summer is still technically in session, but a snowball is slowly developing in the world of apps, and specifically the world of in-app payments. A report in Reuters today says that the Competition Commission of India, the country’s antitrust regulator, will soon be looking at a case filed against Apple over how it mandates that app developers use Apple’s own in-app payment system — thereby giving Apple a cut of those payments — when publishers charge users for subscriptions and other items in their apps.
The non-profit behind the suit, “Together We Fight Society”, told Reuters in a statement that it was representing consumer and startup interests in its complaint.
The move would be the latest in what has become a string of challenges from national regulators against app store operators — specifically Apple but also others like Google and WeChat — over how they wield their positions to enforce market practices that critics have argued are anti-competitive. Other countries that have in recent weeks reached settlements, passed laws, or are about to introduce laws include Japan, South Korea, Australia, the U.S. and the European Union.
And in India specifically, the regulator is currently working through a similar investigation as it relates to in-app payments in Android apps, which Google mandates use its proprietary payment system. Google and Android dominate the Indian smartphone market, with the operating system active on 98% of the 520 million devices in use in the country as of the end of 2020.
It will be interesting to watch whether more countries wade in as a result of these developments. Ultimately, to avoid further and deeper regulatory scrutiny, app store operators could be forced to adopt new, more flexible universal policies.
In the meantime, we are seeing changes happen on a country-by-country basis.
Just yesterday, Apple reached a settlement in Japan that will let publishers of “reader” apps (those for using or consuming media like books and news, music, files in the cloud and more) redirect users to external sites as an alternative to Apple’s proprietary in-app payment provision. Although it’s not as seamless as paying within the app, such redirecting was previously not allowed, and through it publishers can avoid Apple’s cut.
South Korean legislators earlier this week approved a measure that will make it illegal for Apple and Google to make a commission by forcing developers to use their proprietary payment systems.
And last week, Apple also made some concessions in the U.S. around allowing alternative forms of payment, though they were relatively indirect: app publishers can now refer to alternative, direct payment options in apps, but not actually offer them. (Not yet, at least.)
Some developers and consumers have argued for years that Apple’s strict policies should be opened up. Apple, however, has long said in its defense that it mandates certain developer policies to build better overall user experiences and for reasons of security. But as app technology has evolved and consumer habits have changed, critics believe that this position needs to be reconsidered.
One factor in Apple’s defense in India specifically might be the company’s position in the market. Android absolutely dominates India when it comes to smartphones and mobile services, with Apple actually a very small part of the ecosystem.
As of the end of 2020, it accounted for just 2% of the 520 million smartphones in use in the country, according to figures from Counterpoint Research quoted by Reuters. That figure had doubled in the last five years, but it’s a long way from a majority, or even significant minority.
The complaint in India has yet to be formally filed, but Reuters notes that the wording leans on the fact that anti-competitive practices in payment systems make it less viable for many publishers to exist at all, since the economics simply do not add up:
“The existence of the 30% commission means that some app developers will never make it to the market,” Reuters noted from the filing. “This could also result in consumer harm.”
In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.
The age appropriate design code came into force on September 2 last year; however, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for reaching compliance to give organizations time to adapt their services.
But from today it expects the standards of the code to be met.
Services covered by the code include connected toys, games and edtech, but also online retail and for-profit online services such as social media and video-sharing platforms that have a strong pull for minors.
Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy-hostile defaults).
The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.
Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.
The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.
The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.
The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.
“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”
It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronics Communications Regulation].”
In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”
“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”
“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.
The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.
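That “whichever is higher” cap is simple arithmetic, as a quick hedged sketch shows (the function name, constant name and example turnover figure are my own; real penalties are set case by case, with this figure serving only as the statutory upper bound):

```python
# GDPR maximum-fine rule as cited above: the greater of a fixed sum
# (£17.5M under UK GDPR) and 4% of annual worldwide turnover.
UK_FIXED_CAP = 17_500_000  # £17.5M

def max_gdpr_fine(annual_worldwide_turnover: float) -> float:
    """Return the statutory maximum fine: whichever bound is higher."""
    return max(UK_FIXED_CAP, 0.04 * annual_worldwide_turnover)

# A firm turning over £1B faces a cap of 4% = £40M, not £17.5M.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
```

For smaller firms the fixed £17.5M figure dominates; the percentage prong only bites once turnover exceeds £437.5M.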
The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps, or worse.
In recent months there have been signs that some major platforms have been paying heed to the ICO’s compliance deadline, with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.
In July, Instagram said it would default teens to private accounts — doing so for under-18s in certain countries, which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video-sharing platform, YouTube.
A few days later, TikTok also said it would add more privacy protections for teens, though it had already made earlier changes tightening privacy defaults for under-18s.
Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool that scans photo uploads to iCloud, and an opt-in parental safety feature that lets iCloud Family account users turn on alerts when minors view explicit images in its Messages app.
The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.
And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.
The code also dovetails with incoming UK legislation that is set to apply a ‘duty of care’ on platforms to take a broad-brush, safety-first stance toward users, also with a big focus on kids (and there it’s being broadly targeted to cover all children, rather than just kids under 13 as with the US’ COPPA, for example).
In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”
“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.
And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls, with the clear caveat that such tools must “respect the child’s privacy and best interests”).
The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.
Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.
Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn, providing further steerage to organizations in scope of the code on how to tackle that tricky piece. It’s still not clear how hard a requirement the ICO will back, with Bonner suggesting it could involve “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.
An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.
But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the Open Rights Group has warned).
The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.”
At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.
For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.
That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.
So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less, in the name of keeping them ‘safe’. That is quite a contradiction of the data minimization push in the design code.
The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content, pro-suicide postings, cyberbullying or CSAM.
The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.
Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.
Complying with the ICO’s design standards may therefore actually be the easy bit.
The biggest news of the week again comes from Beijing’s ongoing effort to dampen the influence of the country’s tech giants. Regulators are now going after the exploitative use of algorithm-powered user recommendations. We also saw a few major acquisitions this week. Xiaomi is acquiring an autonomous vehicle startup called DeepMotion, and ByteDance is said to be buying virtual reality hardware startup Pico.
Beijing has unveiled the draft of a sweeping regulation to rein in how tech companies operating in China utilize algorithms, the engine of virtually all lucrative tech businesses today from short videos and news aggregation to ride-hailing, food delivery and e-commerce. My colleague Manish Singh wrote an overview of the policy, and here’s a closer look at the 30-point document proposed by China’s top cyberspace watchdog.
Beijing is clearly wary that purely machine-recommended content can stray from values propagated by the Communist Party and even harm national interests. In its mind, algorithms should strictly align with the interests of the nation:
Algorithmic recommendations should uphold mainstream values… and should not be used for endangering national security (Point 6).
Regulators want more transparency on companies’ algorithmic black boxes and are making them accountable for the consequences of their programming codes. For example:
Service providers should be responsible for the security of algorithms, create a system for… the review of published information, algorithmic mechanisms, security oversight… enact and publish relevant rules for algorithmic recommendations (Point 7).
Service providers… should not create algorithmic models that entice users into addiction, high-value consumption, or other behavior that disrupts public order (Point 8).
The government is also clamping down on discriminative algorithms and putting some autonomy back in the hands of consumers:
Service providers… should not use illegal or harmful information as user interests to recommend content or create sexist or biased user tags (Point 10).
Service providers should inform users of the logic, purpose, and mechanisms of the algorithms in use (Point 14).
Service providers… should allow users to turn off algorithmic features (Point 15).
The regulators don’t want internet giants to influence public thinking or opinions. Though not laid out in the document, censorship control will no doubt remain in the hands of the authorities.
Service providers should not… use algorithms to censor information, make excessive recommendations, manipulate rankings or search results that lead to preferential treatment and unfair competition, influence online opinions, or shun regulatory oversight (Point 13).
Like many other aspects of the tech business, certain algorithms must obtain government approval. Tech firms must also hand over their algorithms to the police in the event of an investigation.
Service providers should file with the government if their recommendation algorithms can affect public opinions or mobilize civilians (Point 20).
Service providers… should keep a record of their recommendation algorithms for at least six months and provide them to law enforcement departments for investigation purposes (Point 23).
If passed, the rules will shake up the fundamental business logic of Chinese tech companies that rely on algorithms to make money. Programmers will need to pore over the rules and be able to explain their code to regulators. The proposed law seems to go even beyond the scope of the European Union’s data rules, but how the Chinese rules will be enforced remains to be seen.
In Xiaomi’s latest earnings call, the smartphone maker said it will acquire DeepMotion, a Beijing-based autonomous driving startup, to aid its autonomous driving endeavor. The deal will cost Xiaomi about $77.3 million, and “a lot of that will be in terms of stock” and “a lot of these payments will be deferred until certain milestones are hit,” said Xiaomi president Wang Xiang on the call.
Xiaomi’s founder Lei Jun earlier hinted at the firm’s plan to enter the crowded space. On July 28, Lei announced on Weibo, China’s Twitter equivalent, that the company is recruiting 500 autonomous driving experts across China.
Automation has become a selling point for China’s new generation of electric vehicle makers, often with companies conflating advanced driver-assistance systems (ADAS) with Level 4 autonomous driving. Such overstatements in marketing material mislead consumers and make one question the real technical capability of these nascent EV players.
Xiaomi has similarly unveiled plans to manufacture electric cars through a separate car-making subsidiary. The ADAS capabilities brought by DeepMotion are naturally a nice complement to Xiaomi’s future cars. As Wang explained:
We believe that there’s a lot of synergies with [DeepMotion’s ADAS] technology with our EV initiatives. So I think it tells you a couple of points. Number one is, we will roll out EV business. And I said in our prepared remarks, we’ve been very focused on hiring the right team for the EV business at this point in time, formulating our strategy, formulating our product strategy, et cetera, et cetera. But at the same time, we are not afraid to apply it and integrate other teams if we find that those will help us accelerate our plan right.
It’s noteworthy that DeepMotion, founded by Microsoft veterans, specializes in perception technologies and high-precision mapping, which puts it in the vision-driven autonomous driving camp. A number of major Chinese EV makers rely on consumer-grade lidar to automate their cars.
ByteDance is said to be buying Beijing-based VR hardware maker Pico for 5 billion yuan ($770 million), according to Chinese VR news site Vrtuoluo. ByteDance could not be immediately reached for comment.
Advanced VR headsets are often expensive due to the cost of high-end processors. Experts observe that most VR hardware makers are yet to enter the mass consumer market. They are hemorrhaging cash and living off generous venture money and corporate deals.
ByteDance might be buying a money-losing business, but Pico, one of the major VR makers in China, provides a fast track for the TikTok parent to enter VR manufacturing. As the world’s largest short video distributor and an aggressive newcomer to video games, ByteDance has no shortage of creative talent. We will see how it works on producing virtual content if the Pico deal goes through.
This is the second post in a series on the Facebook monopoly. The first post explored how the U.S. Federal Trade Commission should define the Facebook monopoly. I am inspired by Cloudflare’s recent post explaining the impact of Amazon’s monopoly in its industry.
Perhaps it was a competitive tactic, but I genuinely believe it was more a patriotic duty: offering guideposts for legislators and regulators on a complex issue. My generation has watched with a combination of sadness and trepidation as legislators who barely use email question the leading technologists of our time about products that have long pervaded our lives in ways we don’t yet understand.
I, personally, and my company both stand to gain little from this — but as a participant in the latest generation of social media upstarts, and as an American concerned for the future of our democracy, I feel a duty to try.
Mark Zuckerberg has reached his Key Largo moment.
In May 1972, executives of the era’s preeminent technology company — AT&T — met at a secret retreat in Key Largo, Florida. Their company was in crisis.
At the time, Ma Bell’s breathtaking monopoly consisted of a holy trinity: Western Electric (the vast majority of phones and cables used for American telephony), the lucrative long distance service (for both personal and business use) and local telephone service, which the company subsidized in exchange for its monopoly.
Over the next decade, all three government branches — legislators, regulators and the courts — parried with AT&T’s lawyers as the press piled on, battering the company’s reputation in the process. By 1982, a consent decree forced AT&T’s dismantling. The biggest company on earth withered to 30% of its book value and seven independent “Baby Bell” regional operating companies. AT&T’s brand would live on, but the business as the world knew it was dead.
Mark Zuckerberg is, undoubtedly, the greatest technologist of our time. For over 17 years, he has outgunned, outsmarted and outperformed like no software entrepreneur before him. Earlier this month, the U.S. Federal Trade Commission refiled its sweeping antitrust case against Facebook.
Its own holy trinity of Facebook Blue, Instagram and WhatsApp is under attack. All three government branches — legislators, regulators and the courts — are gaining steam in their fight, and the press is piling on, battering the company’s reputation in the process. Facebook, the AT&T of our time, is at the brink. For so long, Zuckerberg has told us all to move fast and break things. It’s time for him to break Facebook.
If Facebook does exist to “make the world more open and connected, and not just to build a company,” as Zuckerberg wrote in the 2012 IPO prospectus, he will spin off Instagram and WhatsApp now so that they have a fighting chance. It would be the ultimate Zuckerbergian chess move. Zuckerberg would lose voting control and thus power over all three entities, but in his action he would successfully scatter the opposition. The rationale is simple:
I write this as an admirer; I genuinely believe much of the criticism Zuckerberg has received is unfair. Facebook faces Sisyphean tasks. The FTC will not let Zuckerberg sneeze without an investigation, and the company has failed to innovate.
Given no chance to acquire new technology and talent, how can Facebook survive over the long term? In 2006, Terry Semel of Yahoo offered $1 billion to buy Facebook. Zuckerberg reportedly remarked, “I just don’t know if I want to work for Terry Semel.” Even if the FTC were to allow it, this generation of founders will not sell to Facebook. Unfair or not, Mark Zuckerberg has become Terry Semel.
It is not a matter of if; it is a matter of when.
In a speech on the floor of Congress in 1890, Senator John Sherman, the founding father of the modern American antitrust movement, famously said, “If we will not endure a king as a political power, we should not endure a king over the production, transportation and sale of any of the necessities of life. If we would not submit to an emperor, we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity.”
This is the sentiment driving the building resistance to Facebook’s monopoly, and it shows no sign of abating. Zuckerberg has proudly called Facebook the fifth estate. In the U.S., we only have four estates.
All three branches of the federal government are heating up their pursuit. In the Senate, an unusual bipartisan coalition is emerging, with Senators Amy Klobuchar (D-MN), Mark Warner (D-VA), Elizabeth Warren (D-MA) and Josh Hawley (R-MO) each waging a war from multiple fronts.
In the House, Speaker Nancy Pelosi (D-CA) has called Facebook “part of the problem.” Lina Khan’s FTC is likewise only getting started, with unequivocal support from a White House that feels burned by Facebook’s disingenuous lobbying. The Department of Justice will join, too, aided by state attorneys general. And the courts will continue to turn the wheels of justice, slowly but surely.
In the wake of Facebook co-founder Chris Hughes’ scathing 2019 New York Times op-ed, Zuckerberg said that Facebook’s immense size allows it to spend more on trust and safety than Twitter makes in revenue.
“If what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference,” Zuckerberg said.
This could be true, but it does not prove that the concentration of such power in one man’s hands is consistent with U.S. public policy. And the centralized operations could be rebuilt easily in standalone entities.
Time and time again, whether on Holocaust denial, election propaganda or vaccine misinformation, Zuckerberg has struggled to make quick judgments when presented with the information his trust and safety team uncovers. And even before a decision is made, the structure of the team disincentivizes it from even measuring anything that could harm Facebook’s brand. This is inherently inconsistent with U.S. democracy. The New York Times’ army of reporters will not stop uncovering scandal after scandal, contradicting Zuckerberg’s narrative. The writing is on the wall.
Facebook Blue, Instagram and WhatsApp all face existential threats. Pressure from the government will stifle Facebook’s efforts to right the ship.
For so long, Facebook has dominated the social media industry. But if you ask Chinese technology executives about Facebook today, they quote Tencent founder Pony Ma: “When a giant falls, his corpse will still be warm for a while.”
Facebook’s recent demise begins with its brand. The endless, cascading scandals of the last decade have irreparably harmed its image. Younger users refuse to adopt the flagship Facebook Blue. The company’s internal polling on two key metrics — good for the world (GFW) and cares about users (CAU) — shows Facebook’s reputation is in tatters. Talent is fleeing, too; Instacart alone recently poached 55 Facebook executives.
In 2012 and 2014, Instagram and WhatsApp were real dangers. Facebook extinguished both through acquisition. Yet today they represent the company’s two most promising, underutilized assets. They are the underinvested telephone networks of our time.
Weeks ago, Instagram head Adam Mosseri announced that the company no longer considers itself a photo-sharing app. Instead, its focus is entertainment. In other words, as the media widely reported, Instagram is changing to compete with TikTok.
TikTok’s strength represents an existential threat. U.S. children 4 to 15 already spend over 80 minutes a day on ByteDance’s TikTok, and it’s just getting started. The demographics are quickly expanding way beyond teenagers, as social products always have. For Instagram, it could be too little too late — as a part of Facebook, Instagram cannot acquire the technology and retain the talent it needs to compete with TikTok.
Imagine Instagram acquisitions of Squarespace to bolster its e-commerce offerings, or Etsy to create a meaningful marketplace. As a part of Facebook, Instagram is strategically adrift.
Likewise, a standalone WhatsApp could easily be a $100 billion market cap company. WhatsApp has a proud legacy of robust security offerings, but its brand has been tarnished by associations with Facebook. Discord’s rise represents a substantial threat, and WhatsApp has failed to innovate to account for this generation’s desire for community-driven messaging. Snapchat, too, is in many ways a potential WhatsApp killer; its young users use photography and video as a messaging medium. Facebook’s top augmented reality talents are leaving for Snapchat.
With 2 billion monthly active users, WhatsApp could be a privacy-focused alternative to Facebook Blue, and it would logically introduce expanded profiles, photo-sharing capabilities and other features that would strengthen its offerings. Inside Facebook, WhatsApp has suffered from underinvestment as a potential threat to Facebook Blue and Messenger. Shareholders have suffered for it.
Beyond Instagram and WhatsApp, Facebook Blue itself is struggling. Q2’s earnings may have skyrocketed, but the increase in revenue hid a troubling sign: Ad prices increased by 47%, while the number of ads delivered grew by just 6%. This means Facebook is struggling to find new places to run its ads. Why? The core social graph of Facebook is too old.
I fondly remember the day Facebook came to my high school; I have thousands of friends on the platform. I do not use Facebook anymore — not for political reasons, but because my friends have left. A decade ago, hundreds of people wished me happy birthday every year. This year it was 24, half of whom are over the age of 50. And I’m 32 years old. Teen girls run the social world, and many of them don’t even have Facebook on their phones.
Zuckerberg’s newfound push into the metaverse has been well covered, but the question remains: Why wouldn’t a Facebook serious about the metaverse acquire Roblox? Of course, the FTC would currently never allow it.
Facebook’s current clunky attempt at a hardware solution, with an emphasis on the workplace, shows little sign of promise. The launch was hardly propitious, as CNN reported, “While Bosworth, the Facebook executive, was in the middle of describing how he sees Workrooms as a more interactive way to gather virtually with coworkers than video chat, his avatar froze midsentence, the pixels of its digital skin turning from flesh-toned to gray. He had been disconnected.”
This is not the indomitable Facebook of yore. This is graying Facebook, freezing midsentence.
Zuckerberg’s control of 58% of Facebook’s voting shares has so far forestalled a typical Wall Street reckoning, but investors are tiring of his unilateral power. Many justifiably believe the company is more valuable as the sum of its parts. The success of AT&T’s breakup is a case in point.
Five years after AT&T’s 1984 breakup, AT&T and the Baby Bells’ value had doubled compared to AT&T’s pre-breakup market capitalization. Pressure from Japanese entrants battered Western Electric’s market share, but greater competition in telephony spurred investment and innovation among the Baby Bells.
AT&T turned its focus to competing with IBM and preparing for the coming information age. A smaller AT&T became more nimble, ready to focus on the future rather than dwell on the past.
Standalone Facebook Blue, Instagram and WhatsApp could drastically change their futures by attracting talent and acquiring new technologies.
Zuckerberg has always been one step ahead. And when he wasn’t, he was famously unprecious: “Copying is faster than innovating.” If he really believes in Facebook’s mission and recognizes that the situation cannot possibly get any better from here, he will copy AT&T’s solution before it is forced upon him.
Regulators are tying Zuckerberg’s hands behind his back as the company weathers body blows and uppercuts from Beijing to Silicon Valley. As Zuckerberg’s idol Augustus Caesar might have once said, carpe diem. It’s time to break Facebook.
Startups have a seemingly intractable problem: a lack of diversity. Despite research showing that diverse founding teams have a higher rate of return than white founding teams, one characteristic of startups remains relatively unchanged: the dearth of BIPOC and women founders, investors, board members, and counsel in the venture capital (VC) ecosystem.
Why should we care? Venture capital has provided early funding for the most innovative and profitable companies of our time — Apple, Amazon, Google (now Alphabet), just to name a few. These companies have changed the way we live, work and play by impacting how we communicate, how we process information, and how we buy goods. With approximately one-quarter of U.S. professionals employed by the high-tech sector — comprising about 5% to 6% of the total workforce, according to the U.S. Equal Employment Opportunity Commission — imagine how much more innovation could happen with more diverse individuals at the table who bring different life experiences and perspectives. And we’re already seeing states enacting laws, and companies changing their practices, to help make this happen in the public company realm.
Many founders of VC-backed startups are white, male, and Ivy League or internationally educated. Women-founded companies receive a fraction of VC investments compared to all-male founded companies. In 2020, women-led startups received only 2.3% of all VC money. As of June 2021, less than 20% of total VC deals went to a startup with at least one female founder.
When looking at BIPOC representation in the VC ecosystem, the numbers are even more abysmal. Three percent of VC investors are Black and 1.7% of VC-backed startups have a Black founder. The number of Latinx founders in VC-backed startups is even lower — 1.3%. Plus, only 2.4% of funding was allocated to Black and Latinx founders from 2015 to August 2020. And, on the startup boards of high tech companies, women hold a mere 8% of the board seats.
But the lack of diversity extends beyond who gets funding or who is in the boardroom; it is also a problem in the executive suite. In California, Asian Americans were among the least likely to be promoted to manager or executive positions, and less than 2% of high-tech executives are Black.
This lack of diversity in the VC ecosystem is a structural problem that has no easy solution. While some VC firms have begun allocating funds for trainings and mentorship programs, additional steps need to be taken.
For example, laws on board diversity have already passed in a few states, but they apply only to public companies and typically focus on gender diversity. The laws generally fall into one of three categories — they mandate, encourage, or require disclosure of board diversity. In 2018, California led the way with SB 826, California’s board gender diversity law, which required public companies headquartered in California (irrespective of where they were incorporated) to have a minimum of one woman on each of their boards by the end of 2019. By the end of this year, the minimum threshold increases to two if the board has five directors and three if it has six or more directors. (In the statute, female is defined as “an individual who self-identifies her gender as a woman, without regard to the individual’s designated sex at birth.”)
The law has already had an impact: Between 2018 and March 2021, the number of board seats held by women in such companies increased by a whopping 93.6%. The law, however, is currently being challenged in the courts.
While legislation regarding gender diversity on public company boards has been passed in certain states, even fewer laws address the issue of the lack of minorities on boards. Only 12.5% of the board members of the 3,000 largest public companies come from underrepresented ethnic and racial groups despite the fact that these groups comprise 40% of the U.S. population. Deloitte and the Alliance for Board Diversity reported data that Fortune 500 board seats were held by individuals identified as African American/Black, Hispanic/Latino(a), and Asian/Pacific Islander at the rates of 8.7%, 4.1%, and 4.6%, respectively, in 2020.
In order to address this underrepresentation, California’s AB 979 requires that a public company headquartered in California has at least one director from an “underrepresented community” by the end of 2021, with the minimum number increasing by the end of 2022. That definition includes someone who self-identifies as Black, African American, Hispanic, Latino, Asian, Pacific Islander, Native American, Native Hawaiian or Alaska Native, or who self-identifies as gay, lesbian, bisexual, or transgender.
In addition to California, Colorado, Illinois, Maryland, New York, Pennsylvania, and Washington have also enacted some type of board diversity measure. Connecticut, Hawaii, Massachusetts, Michigan, New Jersey, Oregon, and Ohio have proposed legislation, too.
Non-governmental initiatives are also being considered. As an example, NASDAQ proposed new listing standards to the SEC requiring disclosure of board diversity. Goldman Sachs announced that it would manage initial public offerings only for companies with at least one diverse board member.
These kinds of laws, however, may be difficult to implement in startups. In order to change the narrative on diversity in startups, efforts cannot be limited to the board but should take a multi-pronged approach focused on diversifying (1) employees in middle and executive management, (2) directors in the boardroom, and (3) the VC firms and other funders.
With startups, board diversity mandates similar to the one passed in California would likely not work in the early stages given the size of these boards. However, creating a culture where diversity is prioritized can manifest itself in other ways.
For example, limited partners who invest in VC funds could contractually obligate their general partners to consider diverse candidates for their firms as well as the board and management of any portfolio companies. VCs can also continue to diversify the limited partners that invest in their funds by eschewing their immediate networks and more actively reaching out to groups historically underrepresented in the startup ecosystem, such as HBCUs. In fact, some VCs are using diversity riders in term sheets to do just that. VCs also need to take a hard look at what type of questions they ask their BIPOC and female founders and consider how they may differ in ways that are detrimental to those historically underrepresented in startups.
We are missing opportunities to foster further innovation by not taking more concrete action to add diversity to the startup ecosystem. There is no magic bullet to address the lack of diversity in the startup ecosystem. However, there are steps that founders, VCs, and limited partners can take to make strides in the right direction.
Space may be the endless frontier, but here on Earth, we define space in the modern sense as something enclosed. Walls, fences and barriers enclose space, define it and make it legible. In fact, our sense of limits around place is so strong these days that we often have to add qualifiers like “open space” to describe wholly natural environments like parks and forests as places without spatial limits.
While enclosures have been with us for centuries, the barriers they raise have never been so high or politically fraught. In the United States, one of the most controversial initiatives of the Trump administration was the erection of a southern border wall with Mexico. With climate change accelerating and migration increasing around the world, walls are becoming a common occurrence and political tool. Just this week, Greece erected fencing along its border with Turkey in preparation for an expected deluge of Afghan refugees fleeing violence in the wake of the Taliban’s seizure of Kabul.
John Lanchester has taken these themes of barriers, fear, and politics and intensified them in his atmospheric novel appropriately titled “The Wall.”
The conceit is simple: a thinly-disguised United Kingdom, ravaged by climate change and heavy migration from outside the island, erects a universal wall across all of its shores, posting sentries every few meters or so to monitor the barriers for any potential intruders. Their sole mission: to keep them out, whoever they might be. Failure is symbolically punished with exile and banishment, with the watchers becoming the watched.
We predominantly follow a pair of sentries who, as the above rule all but foreshadows, will become exiled in the course of their duties. What we get, then, is a meditation on the meaning of home, and also on the meaning of barriers and dislocation in a world that is increasingly hostile to being a refuge for much of anyone.
While the plot and characters are a bit lackluster, what is fascinating about the novel is how well it manages to create an environment and ambiance of dread, of a society at the end of its journey. People live, parties are hosted, work is done, but all these activities take place in a world where the jet stream has presumably disappeared, plunging our hypothetical U.K. into the cold abyss. That theme of gray, morose darkness pervades the book, describing everything from the construction of the wall itself to the personalities of the people who inhabit this world.
That’s the ironic tension that propels the book forward, of global warming heating us up while we simultaneously develop the distant sangfroid to fight the ravaging effects of that heat. We are human, but wooden, divorced from the connection and community we have known in order to protect what little we have left.
That social coolness also underlies a new set of class differences, not only between native citizens and refugees, but between generations as well. The younger generation, coming to terms with what has happened to their planet, simply no longer follows the instructions of their supposedly wise elders. A mental barrier has been constructed: how can you learn lessons from the people who allowed this to happen? Yet the boiling anger has long since cooled to an isolated frostiness — acceptance of reality forces the intergenerational conversation to just move on.
Lanchester is astute and subtle in these extensions of the premise, and they are the most enjoyable part of what is — intentionally — a colorless work. The irony again is that this is probably best read on the beach in the middle of summer, an antidote to the heat of our world. I wouldn’t recommend it for the winter months.
There has been more and more “climate fiction” published over the past few years as the issue of climate change has reached prominence in the global consciousness. Many of these works are offshoots of science fiction, with long and meandering discussions of technology, policy, markets and more, depending on the work. That can provide intellectual succor, in a way, for a certain type of reader.
What Lanchester does is eschew the minutia and technologies pretty much entirely and instead simply situates us in a realistic future — a space that could even be our home. The limits of our imagination are compacted and we are forced to think in tighter quarters. It’s a thought-provoking look at a world whose frontiers are coming closer and closer to all of us all the time.
The Wall by John Lanchester
W. W. Norton, 2019, 288 pages
One of the most unfortunate fault lines in climate change politics today is the lack of cooperation between environmentalists and the national security community. Left-wing climate activists don’t exactly hang out with more right-leaning military strategists, the former often seeing the latter as destructive anti-ecological marauders, while the latter often assume the former are unrealistic pests who would prioritize trees and dolphins over human safety.
Yet, climate change is forcing the two to work ever closer together, as uncomfortable as that might be.
In “All Hell Breaking Loose,” emeritus professor and prolific author Michael T. Klare has written a meta-assessment of the Pentagon’s strategic assessments from the last two decades on how climate will shape America’s security environment. Sober and repetitive but not grim, the book is an eye-opening look at how the defense community is coping with one of the most vexing global challenges today.
Climate change weakens the security environment in practically every domain, and in ways that might not be obvious to the non-defense specialist. For the U.S. Navy, which relies on coastal access to shipyards and ports, rising sea levels threaten to diminish and even occasionally demolish its mission readiness, such as when Atlantic hurricanes hit Virginia, one of the largest centers for naval infrastructure in the United States.
While perhaps obvious, it bears repeating that the U.S. military is as much a landlord as a fighting force, with hundreds of bases spread across the country and around the world. A large percentage of these installations face climate-related challenges that can affect mission readiness, and the cost to harden these facilities is likely to reach tens of billions of dollars — and perhaps even more.
Then there is the question of energy. The Pentagon is understandably one of the greatest users of energy in the world, requiring power for bases, jet fuel for planes, and energy for ships on a global scale. Procurement managers are obviously concerned about costs, but their real concern is availability — they need to have reliable fuel options in even the most chaotic environments. That critical priority is increasingly tenuous with climate change, as transit options for oil can be disrupted by everything from a bad storm to a ship stuck in the Suez Canal.
This is where the Pentagon’s mission and the interests of green-minded activists align heavily, if not perfectly. Klare provides examples of how the Pentagon is investing in areas like biofuels, decentralized grid technology, batteries and more as it looks to secure resiliency for its fighting forces. The Pentagon’s budgetary resources might be scorned by critics, but it’s uniquely positioned to pay the so-called green premiums for more reliable energy in ways that few institutions can realistically afford.
That political alignment continues when it comes to humanitarian response, although for vastly different reasons. One of the Pentagon’s chief concerns with global warming is that it will be increasingly waylaid from its highest priority missions — such as protecting against China, Russia, Iran and other long-time adversaries — into responding to humanitarian crises. As one of the only American institutions with the equipment and logistical know-how capable of deploying thousands of responders to disaster zones, the Pentagon is the go-to source for deployments. For Defense, the difficulty is that the armed forces aren’t trained for humanitarian missions — they’re trained for fighting wars. Attacking ISIS-K and managing a camp of climate refugees are decidedly different skills.
Climate activists are fighting for a more stable and equitable world, one that doesn’t lead to millions of climate refugees fleeing from famine and scorching temperatures. The Pentagon similarly wants to shore up fragile states in the hopes of avoiding deployments outside of its core mission. The two groups speak different languages and have different motivations, but the objectives are much the same.
The most interesting dynamic of climate change and national security is, of course, how the global strategic map changes. Russia is a major winner, and Klare provides an exacting account of how the Pentagon is securing the Arctic now that the ice has melted and shipping lanes have opened at the pole for much of the year (and soon year-round). For the first time, America has run training missions for its armed forces on how to operate in the Arctic and prepare for potential contingencies in the region.
Klare’s book is readable, and its subject is electrifyingly fascinating, but this is not a brilliantly written text by any stretch of the imagination. I dubbed it a meta-assessment because it absolutely reads as if it was written by a team of defense planning specialists in the E Ring. It’s a multi-hundred page think tank paper — and as a reader, you either have the stamina to read that or you don’t.
More caustically, the book’s research and primary citations center on the Pentagon’s assessment reports and congressional testimony, plus some secondary reporting in newspapers and elsewhere. There is little to no mention of direct interviews with the participants, and that’s a major problem given the extremely political nature of climate change in modern U.S. discourse. Klare certainly observes the politics, but we don’t know what generals and the civilian defense leadership would really say if they didn’t have to sign off publicly on a government report. It’s a massive gulf, and it raises the question of how true a picture of the Pentagon’s thinking we really get from this volume.
Nonetheless, the book is an important contribution, and a reminder that the national security community — while protective of its interests — can also be an important vanguard for change on climate disruption. Activists and wonks should drop the animosity and talk to each other a bit more often, as there are alliances to be made.
All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change by Michael T. Klare
Metropolitan Books, 2019, 304 pages
When it comes to climate change, it might seem that a book entitled “How to Do Nothing” would not only be irrelevant, but also downright obscene and even dangerous. Not to mention that after more than a year of pandemic living, many people are understandably fatigued at the prospect of continuing to keep their lives empty of social activities.
Yet, messing with our notions of action and contemplation is precisely the plan that Jenny Odell has laid out in her lapidary work, a meditation that is, ironically, a call to action.
Odell is a Bay Area star, who has been an artist in residence at a variety of institutions from the Internet Archive to Recology, San Francisco’s trash pickup and processing company. Her artistic work centers on attention, of focusing on the details that envelop us in this world and what we can learn from them. It’s an activity that leads her to birdwatching and long walks in Oakland’s public parks such as the Morcom Rose Garden.
Her book, it might be helpful to note, is subtitled “Resisting the Attention Economy,” and Odell has made it her mission to help wean a generation, indeed an entire population, off the spasmodic negativity that emanates from our social media platforms. In fact, she has a more ambitious goal: to wean people off the notion that productivity is the only value in life, that action is the only useful metric by which to measure ourselves. She wants to direct our attention to more important things.
“I fully understand where a life of sustained attention leads. In short, it leads to awareness,” she writes in the introduction. The key word here is sustained — and that’s also the connection with sustainability and the climate more broadly.
We don’t lack for information, data or opinions. In fact, we are overwhelmed with the dross of human thought. Some studies have shown that modern knowledge workers read more words per day than ever before in history — but they’re reading social media posts, emails, Slack messages and other ephemera that are each nibbling and collectively devouring our attention. What’s left is, for many of us, not much of any thought at all. The world is more frenetic and chaotic than ever before, but in the process, we have traded a deeper understanding of ourselves and our place in this world for an incessant deluge of media. Odell wants us to take that imbalance and level it.
For her, that means practicing a more sustained form of attention. That’s a skill most of us have little practice with (a deficit we may not even be aware of, ironically), and indeed, sustaining attention might even mean regularly refusing to engage with the world around us. That’s a good thing in her analysis. “At their loftiest, such refusals can signify the individual capacity for self-directed action against the abiding flow; at the very least, they interrupt the monotony of the everyday.”
Controlling our attention, directing it, and filtering out the noise of contemporary life results not in further atomization and narcissism, but rather a more collective sense of being. “When the pattern of your attention has changed, you render your reality differently. You begin to move and act in a different kind of world,” she writes. Suddenly, the trees and flowers that were once backdrops to our walks to brunch become complex and elegant life in their own right. We deepen our camaraderie with our friends and colleagues in ways that we never could with an emoji in Slack. We build up the potential to work together to solve problems.
Our sustained attention also allows us to notice the details of what is changing around us, the subtle variations of our environment that come from a warming planet. “Things like the American obsession with individualism, customized filter bubbles, and personal branding—anything that insists on atomized, competing individuals striving in parallel, never touching—does the same violence to human society as a dam does to a watershed.” We can’t fix what we don’t see, and with our fragmented attention, we really don’t see much.
The irony of course is that while technology products dissolve attention — building them takes an extraordinary amount of it. While some startup founders strike it rich on a whim and others are handed product ideas by friends or VCs, the vast majority learned to sustain their attention on a market or customer for sometimes extraordinarily long periods of time in order to notice the gaps in a market. A founder recently told me that he had been working with customers in his market for more than a decade before he eventually understood a need that wasn’t being fulfilled with existing solutions.
What’s missing in the tech and startup community today is connecting that user empathy and focus on product-market fit to the attention we need in all the other aspects of our lives today. Odell analyzes it a bit more negatively than I would: we actually have these skills and in fact, use them quite specifically. We just don’t use them broadly enough to bring our minds to look at our friendships, communities and planet in a deeper light.
Doing nothing allows us to see what matters and what doesn’t. When it comes to solving big problems, particularly some of the most intractable like climate change, it’s precisely doing nothing that allows us to see the right path to doing something.
How to Do Nothing: Resisting the Attention Economy by Jenny Odell
Melville House, 2019, 256 pages
Bill Gates has solved many problems in his (professional) life, and in recent decades, he’s been dedicated to the plight of the world’s poor and particularly their health. Through his foundation work and charitable giving, he’s roamed the world solving problems from malaria and neglected tropical diseases to maternal health, always with an eye toward the novel and typically cheap solution.
It’s that engineering brain and mode of thinking that he brings to bear on climate change in his book “How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need” (yes, it’s italicized on the cover — we really do need them). Gates describes a bit of his evolution from software mogul to global health wizard to concerned climate citizen. If you look at challenges like neglected tropical diseases, for instance, climate change profoundly affects the prevalence of mosquitoes and other vectors of infection. No one can avoid climate change when analyzing food security in developing nations.
With this early narrative, Gates is attempting to connect perhaps not with climate change skeptics (it’s hard to connect with them on a good day anyway), but instead to build a bridge to the skeptical-but-ready-to-rethink crowd. He admits that he didn’t think much of the problem until he saw its effects first hand, opening the door to at least some readers who may be ready to undertake a similar intellectual journey.
From there, Gates delivers an extremely sober (one could easily substitute dry) analysis of the major sources of greenhouse gas emissions and how we get from the 51 billion tons of CO2-equivalent emitted each year down to net zero. In chapter order, those sources are energy production (27%), manufacturing (31%), agriculture (19%), transportation (16%) and heating and cooling (7%).
Gates is an engineer, and it shows, marvelously so. He places great emphasis throughout the book on understanding scale, on constantly disentangling the numbers and units we hear about in the press to assess whether a particular innovation might make any difference whatsoever. Gates offers the example of an aviation program that will save “17 million tons” of CO2, but points out that the figure is really just 0.03% of global emissions and isn’t necessarily likely to scale up more than it already has. With this framing, he’s borrowing the approach of effective altruism: the idea that charitable dollars should flow to the projects that can provide the biggest verifiable improvement to quality of life for the least cost.
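Gates’ scale check is simple arithmetic, and it is worth internalizing; a quick sketch using the figures quoted above:

```python
# How much does a "17 million ton" savings matter against the roughly
# 51 billion tons of CO2-equivalent emitted globally each year?

ANNUAL_EMISSIONS_TONS = 51e9   # global CO2-equivalent emissions per year
program_savings_tons = 17e6    # the aviation program cited in the book

share = program_savings_tons / ANNUAL_EMISSIONS_TONS
print(f"{share:.2%} of annual global emissions")  # prints "0.03% of annual global emissions"
```

Any proposed fix can be run through the same division before deciding whether it is a rounding error or a real dent.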
Unsurprisingly, Gates is a capitalist, and his framework for judging each potential solution is to calculate a “Green Premium” for its use. For instance, a carbon-free cement manufacturing process might cost double the conventional carbon-emitting one. Compare those added costs with the actual savings these substitutions would deliver in greenhouse gas emissions, and voilà: you have an instant guide to the most efficient means of solving climate change.
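The Green Premium calculation itself is a one-liner; here is a minimal sketch with illustrative, made-up numbers (the cement prices and the 0.9-ton emissions factor below are hypothetical, not Gates’ own figures):

```python
# The premium is the extra cost of the clean alternative; dividing it by
# the emissions avoided gives a rough cost per ton of CO2 abated.

def green_premium(clean_cost: float, dirty_cost: float) -> float:
    """Fractional extra cost of the clean option over the conventional one."""
    return (clean_cost - dirty_cost) / dirty_cost

# Hypothetical: carbon-free cement at $250/ton vs. conventional at $125/ton,
# avoiding roughly 0.9 tons of CO2 per ton of cement produced.
premium = green_premium(250.0, 125.0)     # 1.0, i.e. a 100% premium
cost_per_ton_co2 = (250.0 - 125.0) / 0.9  # roughly $139 per ton of CO2 avoided
print(f"premium: {premium:.0%}, ~${cost_per_ton_co2:.0f}/ton CO2 avoided")
```

Dividing the premium by the emissions avoided yields a dollars-per-ton figure that can be compared across wildly different sectors.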
The answer he comes up with tends to be quite portable in the end. Electrify everything, decarbonize electricity, carbon capture what’s left, and be more efficient. If that sounds hard, that’s because it is, and Gates notes the challenges in an aptly named chapter titled “This Will Be Hard,” which begins with the line “Please don’t let the title of this chapter depress you.” I’m not sure you needed to buy the book to figure that out.
Gates ends up being an end-to-end conservative figure throughout the book. It’s not just his general approach of protecting the status quo, which is obviously latent in solutions which are essentially substitutable tweaks to our way of life and shouldn’t be surprising given the messenger. It’s also the surprising conservatism of his views on the power of technology to solve these problems. For a person who has quite literally invested billions in clean energy and other green technologies, there is surprisingly little magic that Gates proposes. It’s probably realistic, but considering the source, it can feel like pessimism.
Read in concert with some of the other books in this group of climate change reviews, and one can’t help but feel a sort of calculated naiveté on the part of Gates, a sense that we should just keep playing our cards a little while longer and see if we get a last-minute royal flush. There are early signs of solutions, but most aren’t ready for scale. Some technologies are already available, but would require prodigious outlays to retrofit cars, homes, businesses, and more to actually impact our emissions numbers. Then there’s everyone outside of the West, who deserve access to modern amenities. It’s all so easy, and yet, so out of reach.
The book’s strength — and simultaneously its weakness — is that it is apolitical, fact-laden and ready to be read by all but the most ardent climate change skeptics. But it also acts as a gateway drug of sorts: once you understand the scale of the problem, the scope of the solutions, and the challenges of Green Premiums and policy implementation, you’re left with the feeling that there is no way we are going to do this in the next few years anyway, so what’s really the point?
Gates ends the book by saying that “We should spend the next decade focusing on the technologies, policies, and market structures that will put us on the path to eliminating greenhouse gases by 2050.” He’s not wrong, but it’s also an evergreen comment, in a world that won’t be evergreen for much longer.
How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need by Bill Gates
Alfred A. Knopf, 2021, 257 pages
Books on climate change, as diverse as the library is, tend to fall into a couple of categories. There are the field guides and observational accounts that chronicle the destruction of our world and make it legible for readers worldwide. There are the policy and tech analyses that splay out options for the future, deliberating tradeoffs and offering guidance to individuals and governments on their decisions. There are the histories that look at missed opportunities, and the geological histories that show what our world was really like over the eons.
Then there’s the much darker category of dystopia.
Dystopic visions of the future are engaging precisely because they are visions. That makes them easy fodder for climate fiction (“cli-fi”) novels or even video games like Final Fantasy VII, a stream of work that has accelerated much in the way that carbon has in the atmosphere. Yet, it’s a field that almost uniquely remains focused on the imagination, of “what if” scenarios and running those contexts to their narrative conclusions.
What makes “How Everything Can Collapse: A Manual for our Times” a rare read is that it is both dystopic and non-fiction.
The book, which was first published in French in 2015 and translated into English last year, argues for a hard acceptance of what the authors Pablo Servigne and Raphaël Stevens dub “collapsology.” Unlike movements such as Extinction Rebellion and Deep Adaptation, which have risen up in the Anglophone world, collapsology is centered on a multi-disciplinary, systematic inquiry into the state of our world, our civilization and our society.
In this, they spurn the American frame of progress and technological advancement to solve challenges, as well as humanity’s innate hopeful desire to see a better world going forward. Instead, they want to understand what’s really happening today, and whether the stresses, shocks, and crises that smash into our consciousness on a regular basis are really just a mirage or a phenomenon far more profound.
It shouldn’t be hard to glean what their answer is. Servigne and Stevens walk through earth systems like energy and food production as they scout for tipping points, physical limits and the other impassable barriers to society’s exponential development. What they find is troubling, of course. More than a century of exponential human population growth has led to practically insatiable demand for every resource and foodstuff the planet has in stock.
That’s a story many of us are familiar with, but it gets interesting when they start to systematically explore what that demand has done to efficiency. Perhaps the most striking example they offer is the history of petroleum and Energy Return on Energy Invested (ERoEI), the amount of oil and gas it takes to drill for that very resource. ERoEI, they note, has declined from 35:1 in 1990 to about 11:1 today. Fuel is getting harder to find — and that means we use more fuel to drill for less fuel. It’s a self-reinforcing loop — and worse, an exponential one — and there’s little reason to expect these trends to reverse.
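The ERoEI figures can be restated as net energy with a line of arithmetic; a minimal sketch using the ratios quoted above:

```python
# ERoEI (Energy Return on Energy Invested) restated as net energy:
# for every unit of energy extracted, 1/ERoEI was spent getting it,
# so the usable fraction is 1 - 1/ERoEI.

def net_energy_fraction(eroei: float) -> float:
    """Share of gross energy left over after paying the extraction cost."""
    return 1.0 - 1.0 / eroei

for label, eroei in [("1990", 35.0), ("today", 11.0)]:
    print(f"{label}: ERoEI {eroei:.0f}:1 -> {net_energy_fraction(eroei):.1%} net energy")

# The drop from ~97.1% to ~90.9% net energy looks modest, but the curve
# steepens sharply as ERoEI falls further toward 1, where extraction
# consumes everything it yields.
```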
These sorts of self-reinforcing feedback loops are everywhere in earth systems today if you look closely for them. The permafrost is melting in the Arctic, the Amazon rainforest now emits more carbon dioxide than it absorbs, and higher temperatures are making it more difficult and more expensive to grow food. All this at a time when the human population is expected to add several billion more individuals this century.
As with any system, there are lock-ins where components can’t adapt because they are connected to other systems. We can’t replace gas because the entire financial and industrial system is predicated on an abundant and relatively affordable form of energy. We could try to limit the use of automobiles and trucks, but few people (in the West at least) live anywhere near the farms or mining sites where the key sustaining goods of our society come from. Those ears of corn and bags of potatoes are going to have to travel to us, or we to them, but either way, traveling is going to take place.
In the authors’ collective minds, collapsology is about coming to terms with the reality of the systems all around us and just reading the dials. It’s about accepting tipping points, discontinuities, and other non-linear paths in these systems and projecting what they mean for our own lives and those of others. It’s a call to reality, rather than a call to arms. Just look, the authors practically say.
While the first half of the book centers on exploring systems and how they interconnect, the second half explores us as humans and debates collapsology as a phenomenon. Is it too negative? Why do we have psychological barriers that prevent us from seeing the fragility of our ecosystems and our planet? How will art and movies and books adapt to our new context? How are we going to respond to the challenges that are about to confront us in a much more visceral way? The answers — when they are available — are interesting if not always novel.
It’s fascinating to see a bit of a cultural counterpoint to the American sensibility. In some ways, collapsology is just the latest manifestation of French existentialism, updated for the twenty-first century. The book doesn’t provide solutions, nor does it necessarily argue that progress must happen or even that it could happen. Instead, it just observes the human condition, and the conditions around humans. Humans are diverse, and they are going to react to the cataclysm in the diverse ways one would come to expect.
The book offers no solution and paints a dreary future that’s just on the cusp of dystopia. But the title is fascinating, since it posits a conditional rather than an assurance: the world “can collapse,” not that it “will collapse.” A reader will be forgiven for thinking the latter is the case the book is making, but ultimately, Servigne and Stevens believe that the only way to avoid collapse is to fully see the world in all its complexity. Collapsology, then, is really anti-collapsology: deeply understanding the brittleness of our systems before their limits are reached. That’s a refreshingly intellectual point of view, if not necessarily a salve for the fears we read and see and feel every day.
How Everything Can Collapse: A Manual for our Times by Pablo Servigne and Raphaël Stevens. Translated from French by Andrew Brown.
Wiley, 2020, 250 pages
Originally published as “Comment tout peut s’effondrer: Petit Manuel de collapsologie à l’usage des générations présentes”
The climate conversation is finally reaching a climax. Decades of discussions and reports by scientists have yielded pathbreaking works by writers like Elizabeth Kolbert, and today, climate fiction and non-fiction are becoming global bestsellers. Everyone wants to read about collapse, dystopia, the aftermath — it’s in the very air we breathe, after all, what with the IPCC just reporting once again that all numbers point hotter, redder and closer to us than ever.
The shelves of climate change books extend ever farther, and yet, one can’t help but feel that not much is changing about such a dynamic topic. There are always more details to unravel of course: another species that’s meeting the end of its precarious existence, a river that no longer flows, a town losing its last sparks of civilization. Yet, we know the tropes and the typical plots at this point (or just deny any of it is happening so it doesn’t matter anyway). The most challenging problem on the Earth today is, frankly, getting a bit repetitive.
The good news is that there are still original works: works that push the edges of our understanding, reformulate some of the old tropes, and deliver a forceful punch that unmoors our thinking and forces us to confront the familiar destruction with a new empathy.
I wanted to find the most intriguing books for engineers and technologists who are interested in more systematic ways of understanding what is happening to our planet. Not so much on point solutions (although we have one book on that), but rather books that can develop our thinking about how to understand the changes that are by now inevitably coming.
And so, I picked out and reviewed six books that I think represent a strong canon by which to develop our intuitions about climate change, not just as an environmental problem, but as an economic, social, and personal one as well. They range from systems-thinking analyses and prototypical non-fiction to personal reflections and an atmospheric novel. Each in its own way can help us come to terms with what will be the most challenging collective mission in our lives.
Call it beach reading, while that beach is still there.
In recent years, the private sector has been spurning proprietary software in favor of open source software and development approaches. For good reason: The open source avenue saves money and development time by using freely available components instead of writing new code, enables new applications to be deployed quickly and eliminates vendor lock-in.
The federal government has been slower to embrace open source, however. Efforts to change are complicated by the fact that many agencies employ large legacy IT infrastructure and systems to serve millions of people and are responsible for a plethora of sensitive data. Washington spends tens of billions every year on IT, but with each agency essentially acting as its own enterprise, decision-making is far more decentralized than it would be at, say, a large bank.
While the government has made a number of moves in a more open direction in recent years, the story of open source in federal IT has often seemed more about potential than reality.
But there are several indications that this is changing and that the government is reaching its own open source adoption tipping point. The costs of producing modern applications to serve increasingly digital-savvy citizens keep rising, and agencies are budget constrained to find ways to improve service while saving taxpayer dollars.
Sheer economics dictate an increased role for open source, as do a variety of other benefits. Because its source code is publicly available, open source software encourages continuous review by others outside the initial development team to promote increased software reliability and security, and code can be easily shared for reuse by other agencies.
Here are five signs I see that the U.S. government is increasingly rallying around open source.
Two initiatives have gone a long way toward helping agencies advance their open source journeys.
18F, a team within the General Services Administration that acts as a consultancy to help other agencies build digital services, is an ardent open source backer. Its work has included developing a new application for accessing Federal Election Commission data, as well as software that has allowed the GSA to improve its contractor hiring process.
18F — short for GSA headquarters’ address of 1800 F St. — reflects the same grassroots ethos that helped spur open source’s emergence and momentum in the private sector. “The code we create belongs to the public as a part of the public domain,” the group says on its website.
Five years ago this August, the Obama administration introduced a new Federal Source Code Policy that called on every agency to adopt an open source approach, create a source code inventory, and publish at least 20% of written code as open source. The administration also launched Code.gov, giving agencies a place to locate open source solutions that other departments are already using.
The results have been mixed, however. Most agencies are now consistent with the federal policy’s goal, though many still have work to do in implementation, according to Code.gov’s tracker. And a report by a Code.gov staffer found that some agencies were embracing open source more than others.
Still, Code.gov says the growth of open source in the federal government has gone farther than initially estimated.
The American Rescue Plan, the $1.9 trillion pandemic relief bill that President Biden signed in early March 2021, contained $1 billion for the GSA’s Technology Modernization Fund, which finances new federal technology projects. In January, the White House said upgrading federal IT infrastructure and addressing recent breaches such as the SolarWinds hack was “an urgent national security issue that cannot wait.”
It’s fair to assume open source software will form the foundation of many of these efforts, because White House technology director David Recordon is a long-time open source advocate and once led Facebook’s open source projects.
Federal IT employees who spent much of their careers working on legacy systems are starting to retire, and their successors are younger people who came of age in an open source world and are comfortable with it.
About 81% of private sector hiring managers surveyed by the Linux Foundation said hiring open source talent is a priority and that they’re more likely than ever to seek out professionals with certifications. You can be sure the public sector is increasingly mirroring this trend as it recognizes a need for talent to support open source’s growing foothold.
By partnering with the right commercial open source vendor, agencies can drive down infrastructure costs and more efficiently manage their applications. For example, vendors have made great strides in addressing security requirements laid out by policies such as the Federal Information Security Modernization Act (FISMA), the Federal Information Processing Standards (FIPS) and the Federal Risk and Authorization Management Program (FedRAMP), making it easier to stay compliant.
In addition, some vendors offer powerful infrastructure automation tools and generous support packages, so federal agencies don’t have to go it alone as they accelerate their open source strategies. Linux distributions like Ubuntu provide a consistent developer experience from laptop/workstation to the cloud, and at the edge, for public clouds, containers, and physical and virtual infrastructure.
This makes application development a well-supported activity, with 24/7 access to enterprise support teams through web portals, knowledge bases or by phone.
Whether it’s accommodating more employees working from home or meeting higher citizen demand for online services, COVID-19 has forced large swaths of the federal government to up their digital game. Open source allows legacy applications to be moved to the cloud, new applications to be developed more quickly, and IT infrastructures to adapt to rapidly changing demands.
As these signs show, the federal government continues to move rapidly from talk to action in adopting open source.
Who wins? Everyone!
China is not done curbing the influence local internet services have assumed in the world’s most populous market. Following a widening series of regulatory crackdowns in recent months, the nation on Friday issued draft guidelines on regulating the algorithms firms use to make recommendations to users.
In the 30-point draft published on Friday, the Cyberspace Administration of China (CAC) proposed forbidding companies from deploying algorithms that “encourage addiction or high consumption,” endanger national security or disrupt the public order.
The services must abide by business ethics and principles of fairness and their algorithms must not be used to create fake user accounts or create other false impressions, said the guidelines from the internet watchdog, which reports to a central leadership group chaired by President Xi Jinping. The watchdog said it will be taking public feedback on the new guidelines for a month (until September 26).
The guidelines also propose that users be given an easy way to turn off algorithmic recommendations. Providers whose algorithms can influence public opinion or mobilize citizens must first obtain approval from the CAC.
Friday’s proposal comes at a time when Beijing is increasingly targeting companies for the way they have handled consumer data and the monopolistic positions they have assumed in the nation.
Earlier this year, Beijing-backed China Consumers Association said local internet companies had been “bullying” users into purchases and promotions and undermining their privacy rights.
Beijing’s recent data-security crackdown and tightening regulations around tutoring services have spooked investors and wiped out hundreds of billions of dollars in market value.
Friday’s guidelines appear to target ByteDance, Alibaba Group, Tencent, Didi and other companies whose services are built on top of proprietary algorithms. Shares of Alibaba and Tencent fell slightly on the news.
In recent years, several governments, including those of the U.S. and India, have attempted — with little to no success — to get better clarity on how big tech companies’ algorithms work and to put checks in place to prevent misuse.
The May 2021 executive order from the White House on improving U.S. cybersecurity includes a provision for a software bill of materials (SBOM), a formal record containing the details and supply chain relationships of various components used in building a software product.
An SBOM is the full list of every item that’s needed to build an application. It enumerates all parts, including open-source software (OSS) dependencies (direct), transitive OSS dependencies (indirect), open-source packages, vendor agents, vendor application programming interfaces (APIs) and vendor software development kits.
Software developers and vendors often create products by assembling existing open-source and commercial software components, the executive order notes. It’s useful to those who develop or manufacture software, those who select or purchase software and those who operate the software.
As the executive order describes, an SBOM enables software developers to make sure open-source and third-party components are up to date. Buyers can use an SBOM to perform vulnerability or license analysis, both of which can be used to evaluate risk in a product. And those who operate software can use SBOMs to quickly determine whether they are at potential risk of a newly discovered vulnerability.
“A widely used, machine-readable SBOM format allows for greater benefits through automation and tool integration,” the executive order says. “The SBOMs gain greater value when collectively stored in a repository that can be easily queried by other applications and systems. Understanding the supply chain of software, obtaining an SBOM and using it to analyze known vulnerabilities are crucial in managing risk.”
An SBOM is intrinsically hierarchical. The finished product sits at the top, and the hierarchy includes all of its dependencies providing a foundation for its functionality. Any one of these parts can be exploited in this hierarchical structure, leading to a ripple effect.
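That hierarchy can be sketched as a simple dependency tree. The component names below are purely illustrative, and real SBOMs use machine-readable formats such as SPDX or CycloneDX rather than ad hoc structures:

```python
# A minimal sketch of an SBOM's hierarchical shape: the finished product
# sits at the top, and flattening the tree surfaces every direct and
# transitive dependency that could be exploited.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    version: str
    dependencies: list["Component"] = field(default_factory=list)

def flatten(component: Component) -> set[tuple[str, str]]:
    """Walk the tree and collect every direct and transitive dependency."""
    found = set()
    stack = list(component.dependencies)
    while stack:
        dep = stack.pop()
        found.add((dep.name, dep.version))
        stack.extend(dep.dependencies)
    return found

# Hypothetical product: the app depends on a web framework, which in turn
# pulls in a logging library, a transitive dependency two levels down.
logging_lib = Component("loglib", "1.4.2")
framework = Component("webframework", "5.1.0", [logging_lib])
app = Component("my-app", "2.0.0", [framework])

print(flatten(app))  # both the direct and the transitive dependency appear
```

Flattening the tree is what turns a tidy-looking product into a long dependency inventory, and a vulnerability in any leaf ripples up to the product at the top.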
Not surprisingly, given the potential impact, there has been a lot of talk about the proposed SBOM provision since the executive order was announced. This is certainly true within the cybersecurity community. Anytime there are attacks such as the ones against Equifax or SolarWinds that involve software vulnerabilities being exploited, there is renewed interest in this type of concept.
Clearly, the intention of an SBOM is good. If software vendors are not upgrading dependencies to eliminate security vulnerabilities, the thinking is we need to be able to ask the vendors to share their lists of dependencies. That way, the fear of customer or public ridicule might encourage the software producers to do a better job of upgrading dependencies.
However, this is an old and outmoded way of thinking. Modern applications and microservices use many dependencies. It’s not uncommon for a small application to use tens of dependencies, which in turn might use other dependencies. Soon the list of dependencies used by a single application can run into the hundreds. And if a modern application consists of a few hundred microservices, which is not uncommon, the list of dependencies can run into the thousands.
If a software vendor were to publish such an extensive list, how would the end users of that software really benefit? Yes, we could also ask the software vendor to publish which of the dependencies are vulnerable, and let’s say that list runs into the hundreds. Now what?
Clearly, having to upgrade hundreds of vulnerable dependencies is not a trivial task. A software vendor would be constantly deciding between adding new functionality that generates revenue and allows the company to stay ahead of its competitors versus upgrading dependencies that don’t do either.
If the government formalizes an SBOM mandate and starts to financially penalize vendors that have vulnerable dependencies, it is clear that given the complexity associated with upgrading dependencies the software vendors might choose to pay fines rather than risk losing revenue or competitive advantage in the market.
Revenue drives market capitalization, which in turn drives executive and employee compensation. Fines, as small as they are, have negligible impact on the bottom line. In a purely economic sense, the choice is fairly obvious.
In addition, software vendors typically do not want to publish lists of all their dependencies because that provides a lot of information to hackers and other bad actors as well as to competitors. It’s bad enough that cybercriminals are able to find vulnerabilities on their own. Providing lists of dependencies gives them even more possible resources to discover weaknesses.
Customers and users of the software, for their part, don’t want to know all the dependencies. What would they gain from studying a list of hundreds of dependencies? Rather, software vendors and their customers want to know which dependencies, if any, make the application vulnerable. That really is the key question.
Prioritizing software composition analysis (SCA) means analyzing dependencies in the context of the application, which can dramatically shrink the list of dependencies that actually make the application vulnerable.
Instead of publishing a list of 1,000 dependencies, or 100 that are vulnerable, organizations can publish a far more manageable list in the single digits. That is a problem that organizations can much more easily deal with. Sometimes a software vendor can fix an issue without having to upgrade the dependency. For example, it can make changes in the code, which is not always possible if we are merely looking for the list of vulnerable dependencies.
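That narrowing can be illustrated with a few sets; the package names, counts and reachability results below are all hypothetical, standing in for what a software composition analysis tool would produce:

```python
# Start from the full dependency inventory, intersect with known-vulnerable
# packages, then keep only those whose vulnerable code the application
# actually reaches.

all_dependencies = {f"pkg-{i}" for i in range(1000)}          # the full SBOM
known_vulnerable = {"pkg-3", "pkg-42", "pkg-101", "pkg-777"}  # from a CVE feed

# Reachability, as an SCA tool might determine it: only two of the
# vulnerable packages are ever exercised by this application's code paths.
reachable_in_app = {"pkg-42", "pkg-777"}

vulnerable = all_dependencies & known_vulnerable  # 4 packages
actionable = vulnerable & reachable_in_app        # 2 packages
print(f"{len(all_dependencies)} deps -> {len(vulnerable)} vulnerable -> "
      f"{len(actionable)} actually exploitable: {sorted(actionable)}")
```

The single-digit actionable list, not the thousand-entry inventory, is what a vendor and its customers can realistically act on.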
There is no reason to disdain the concept of SBOM outright. By all means, let’s make the software vendors responsible for being transparent about what goes into their software products. Plenty of organizations have paid a steep price because of software vulnerabilities that could have been prevented in the form of data breaches and other cybersecurity attacks.
Indeed, it’s heartening to see the federal government take cybersecurity so seriously and propose ways to enhance the protection of applications and data.
However, let’s make SBOM specific to the list of dependencies that actually make the application vulnerable. This serves both the vendor and its customers by cutting directly to the sources of vulnerabilities that can do damage. That way, we can address the issues at hand without creating unnecessary burdens.
Google is infamous for spinning up products and killing them off, often in very short order. It’s an annoying enough habit when it’s stuff like messaging apps and games. But the tech giant’s ambitions stretch into many domains that touch human lives these days. Including, most directly, healthcare. And — it turns out — so does Google’s tendency to kill off products that its PR has previously touted as ‘life saving’.
To wit: Following a recent reconfiguration of Google’s health efforts — reported earlier by Business Insider — the tech giant confirmed to TechCrunch that it is decommissioning its clinician support app, Streams.
The app, which Google Health PR bills as a “mobile medical device”, was developed back in 2015 by DeepMind, an AI division of Google — and has been used by the UK’s National Health Service in the years since, with a number of Trusts inking deals with DeepMind Health to roll out Streams to their clinicians.
At the time of writing, one NHS Trust — London’s Royal Free — is still using the app in its hospitals.
But, presumably, not for too much longer since Google is in the process of taking Streams out back to be shot and tossed into its deadpool — alongside the likes of its ill-fated social network, Google+, and Internet balloon company Loon, to name just two of a frankly endless list of now defunct Alphabet/Google products.
Other NHS Trusts we contacted which had previously rolled out Streams told us they have already stopped using the app.
University College London NHS Trust confirmed to TechCrunch that it severed ties with Google Health earlier this year.
“Our agreement with Google Health (initially DeepMind) came to an end in March 2021 as originally planned. Google Health deleted all the data it held at the end of the [Streams] project,” a UCL NHS Trust spokesperson told TechCrunch.
Imperial College Healthcare NHS Trust also told us it stopped using Streams this summer (in July) — and said patient data is in the process of being deleted.
“Following the decommissioning of Streams at the Trust earlier this summer, data that has been processed by Google Health to provide the service to the Trust will be deleted and the agreement has been terminated,” a spokesperson said.
“As per the data sharing agreement, any patient data that has been processed by Google Health to provide the service will be deleted. The deletion process is started once the agreement has been terminated,” they added, saying the contractual timeframe for Google deleting patient data is six months.
Another Trust, Taunton & Somerset, also confirmed its involvement with Streams had already ended.
The Streams deals DeepMind inked with NHS Trusts were for five years so these contracts were likely approaching the end of their terms, anyway.
Contract extensions would have had to be agreed by both parties. And Google’s decision to decommission Streams may be factoring in a lack of enthusiasm from involved Trusts to continue using the software — although if that’s the case it may, in turn, be a reflection of Trusts’ perceptions of Google’s weak commitment to the project.
Neither side is saying much publicly.
But as far as we’re aware the Royal Free is the only NHS Trust still using the clinician support app as Google prepares to cut off Streams’ life support.
The Streams story has plenty of wrinkles, to put it politely.
For one thing, despite being developed by Google’s AI division — and despite DeepMind founder Mustafa Suleyman saying the goal for the project was to find ways to integrate AI into Streams so the app could generate predictive healthcare alerts — it doesn’t involve any artificial intelligence.
An algorithm in Streams alerts doctors to the risk of a patient developing acute kidney injury but relies on an existing AKI (acute kidney injury) algorithm developed by the NHS. So Streams essentially digitized and mobilized existing practice.
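For context, the NHS AKI algorithm is essentially a rule-based check that compares a patient's latest serum creatinine result against their baseline. The sketch below is a heavy simplification for illustration, using the standard KDIGO-style staging ratios rather than Streams' actual implementation, but it shows the shape of the digitized check:

```python
def aki_stage(current_creatinine, baseline_creatinine):
    """Simplified illustration of an AKI staging check: flag acute
    kidney injury when the latest serum creatinine rises relative to
    the patient's baseline. Thresholds follow standard KDIGO-style
    staging ratios; this is not Streams' actual code."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3  # most severe alert
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0      # no alert

# A result roughly 1.8x the patient's baseline triggers a stage-1 alert.
print(aki_stage(150, 83))  # 1
```

The point is that nothing here requires machine learning: it is deterministic rule-following, which is why Streams amounted to digitizing and mobilizing existing practice.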
As a result, it always looked odd that an AI division of an adtech giant would be so interested in building, provisioning and supporting clinician support software over the long term. But then — as it panned out — neither DeepMind nor Google were in it for the long haul at the patient’s bedside.
DeepMind and the NHS Trust it worked with to develop Streams (the aforementioned Royal Free) started out with wider ambitions for their partnership — as detailed in an early 2016 memo we reported on, which set out a five year plan to bring AI to healthcare. Plus, as we noted above, Suleyman kept up the push for years — writing later in 2019 that: “Streams doesn’t use artificial intelligence at the moment, but the team now intends to find ways to safely integrate predictive AI models into Streams in order to provide clinicians with intelligent insights into patient deterioration.”
A key misstep for the project emerged in 2017 — through press reporting of a data scandal, as details of the full scope of the Royal Free-DeepMind data-sharing partnership were published by New Scientist (which used a freedom of information request to obtain contracts the pair had not made public).
The UK’s data protection watchdog went on to find that the Royal Free had not had a valid legal basis when it passed information on millions of patients to DeepMind during the development phase of Streams.
Which perhaps explains DeepMind’s eventually cooling ardour for a project it had initially thought — with the help of a willing NHS partner — would provide it with free and easy access to a rich supply of patient data to train up healthcare AIs that it would then be, seemingly, perfectly positioned to sell back into the selfsame service in future years. Price tbc.
No one involved in that thought had properly studied the detail of UK healthcare data regulation, clearly.
Or — most importantly — bothered to consider fundamental patient expectations about their private information.
So it was not actually surprising when, in 2018, DeepMind announced that it was stepping away from Streams — handing the app (and all its data) to Google Health — Google’s internal health-focused division — which went on to complete its takeover of DeepMind Health in 2019. (Although it was still shocking, as we opined at the time.)
It was Google Health that Suleyman suggested would be carrying forward the work to bake AI into Streams, writing at the time of the takeover that: “The combined experience, infrastructure and expertise of DeepMind Health teams alongside Google’s will help us continue to develop mobile tools that can support more clinicians, address critical patient safety issues and could, we hope, save thousands of lives globally.”
A particular irony attached to the Google Health takeover bit of the Streams saga is the fact that DeepMind had, when under fire over its intentions toward patient data, claimed people’s medical information would never be touched by its adtech parent.
Until, of course, it went on to hand the whole project off to Google — and then lauded the transfer as great news for clinicians and patients!
Google’s takeover of Streams meant NHS Trusts that wanted to continue using the app had to ink new contracts directly with Google Health. All the Trusts that had rolled out the app did so; they had little choice if they wanted to keep using it.
Again, jump forward a couple of years and it’s Google Health now suddenly facing a major reorg — with Streams in the frame for the chop as part of Google’s perpetually reconfiguring project priorities.
It is quite the ignominious ending to an already infamous project.
DeepMind’s involvement with the NHS had previously been seized upon by the UK government — with former health secretary, Matt Hancock, trumpeting an AI research partnership between the company and Moorfields Eye Hospital as an exemplar of the kind of data-driven innovation he suggested would transform healthcare service provision in the UK.
Luckily for Hancock he didn’t pick Streams as his example of great “healthtech” innovation. (Moorfields confirmed to us that its research-focused partnership with Google Health is continuing.)
The hard lesson here appears to be don’t bet the nation’s health on an adtech giant that plays fast and loose with people’s data and doesn’t think twice about pulling the plug on digital medical devices as internal politics dictate another chair-shuffling reorg.
Patient data privacy advocacy group, MedConfidential — a key force in warning over the scope of the Royal Free’s DeepMind data-sharing deal — urged Google to ditch the spin and come clean about the Streams cock-up, once and for all.
“Streams is the Windows Vista of Google — a legacy it hopes to forget,” MedConfidential’s Sam Smith told us. “The NHS relies on trustworthy suppliers, but companies that move on after breaking things create legacy problems for the NHS, as we saw with WannaCry. Google should admit the decision, delete the data, and learn that experimenting on patients is regulated for a reason.”
Despite the Information Commissioner’s Office’s 2017 finding that the Royal Free’s original data-sharing deal with DeepMind was improper, it’s notable that the London Trust stuck with Streams — continuing to pass data to DeepMind.
The original patient data-set that was shared with DeepMind without a valid legal basis was never ordered to be deleted. Nor, presumably, has it since been deleted. Hence the call for Google to delete the data now.
Ironically, the improperly acquired data should (in theory) finally get deleted — once contractual timeframes for any final back-up purges elapse — but only because it’s Google itself planning to switch off Streams.
The Royal Free confirmed to us that it is still using Streams, even as Google spins the dial on its commercial priorities for the umpteenth time and decides it’s not interested in this particular bit of clinician support, after all.
We put a number of questions to the Trust — including about the deletion of patient data — none of which it responded to.
Instead, two days later, it sent us this one-line statement which raises plenty more questions — saying only that: “The Streams app has not been decommissioned for the Royal Free London and our clinicians continue to use it for the benefit of patients in our hospitals.”
It is not clear how long the Trust will be able to use an app Google is decommissioning. Nor how wise that might be for patient safety — such as if the app won’t get necessary security updates, for example.
We’ve also asked Google how long it will continue to support the Royal Free’s usage — and when it plans to finally switch off the service. As well as which internal group will be responsible for any SLA requests coming from the Royal Free as the Trust continues to use software Google Health is decommissioning — and will update this report with any response. (Earlier a Google spokeswoman told us the Royal Free would continue to use Streams for the ‘near future’ — but she did not offer a specific end date.)
In press reports this month on the Google Health reorg — covering an internal memo first obtained by Business Insider — teams working on various Google health projects were reportedly being split up and distributed to other areas, with some set to report into Google’s search and AI teams.
So which Google group will take over responsibility for the handling of the SLA with the Royal Free, as a result of the Google Health reshuffle, is an interesting question.
In earlier comments, Google’s spokeswoman told us the new structure for its reconfigured health efforts — which are still being badged ‘Google Health’ — will encompass all its work in health and wellness, including Fitbit, as well as AI health research, Google Cloud and more.
On Streams specifically, she said the app hasn’t made the cut because when Google assimilated DeepMind Health it decided to focus its efforts on another digital offering for clinicians — called Care Studio — which it’s currently piloting with two US health systems (namely: Ascension & Beth Israel Deaconess Medical Center).
And anyone who’s ever tried to use a Google messaging app will surely have strong feelings of déjà vu on reading that…
DeepMind’s co-founder, meanwhile, appears to have remained blissfully ignorant of Google’s intentions to ditch Streams in favor of Care Studio — tweeting back in 2019 as Google completed the takeover of DeepMind Health that he had been “proud to be part of this journey”, and also touting “huge progress delivered already, and so much more to come for this incredible team”.
In the end, Streams isn’t being ‘supercharged’ (or levelled up, to use current faddish political parlance) with AI, as his 2019 blog post had envisaged; Google is simply taking it out of service. Like it did with Reader or Allo or Tango or Google Play Music, or… well, the list goes on.
Suleyman’s own story contains some wrinkles, too.
He is no longer at DeepMind but has himself been ‘folded into’ Google — joining as a VP of artificial intelligence policy, after initially being placed on an extended leave of absence from DeepMind.
In January, allegations that he had bullied staff were reported by the WSJ. And then, earlier this month, Business Insider expanded on that — reporting follow-up allegations that there had been confidential settlements between DeepMind and former employees who had worked under Suleyman and complained about his conduct (although DeepMind denied any knowledge of such settlements).
In a statement to Business Insider, Suleyman apologized for his past behavior — and said that in 2019 he had “accepted feedback that, as a co-founder at DeepMind, I drove people too hard and at times my management style was not constructive”, adding that he had taken time out to start working with a coach and that that process had helped him “reflect, grow and learn personally and professionally”.
We asked Google if Suleyman would like to comment on the demise of Streams — and on his employer’s decision to kill the project — given his high hopes for the project and all the years of work he put into the health push. But the company did not engage with that request.
We also offered Suleyman the chance to comment directly. We’ll update this story if he responds.
The meeting, which also included attendees from the financial and education sectors, was held following months of high-profile cyberattacks against critical infrastructure and several U.S. government agencies, along with a glaring cybersecurity skills gap; according to data from CyberSeek, there are currently almost 500,000 cybersecurity jobs across the U.S. that remain unfilled.
“Most of our critical infrastructure is owned and operated by the private sector, and the federal government can’t meet this challenge alone,” Biden said at the start of the meeting. “I’ve invited you all here today because you have the power, the capacity and the responsibility, I believe, to raise the bar on cybersecurity.”
In order to help the U.S. in its fight against a growing number of cyberattacks, Big Tech pledged to invest billions of dollars to strengthen cybersecurity defenses and to train skilled cybersecurity workers.
Apple has vowed to work with its 9,000-plus suppliers in the U.S. to drive “mass adoption” of multi-factor authentication and security training, according to the White House, as well as to establish a new program to drive continuous security improvements throughout the technology supply chain.
Google said it will invest more than $10 billion over the next five years to expand zero-trust programs, help secure the software supply chain and enhance open-source security. The search and ads giant has also pledged to train 100,000 Americans in fields like IT support and data analytics, learning in-demand skills including data privacy and security.
“Robust cybersecurity ultimately depends on having the people to implement it,” said Kent Walker, Google’s global affairs chief. “That includes people with digital skills capable of designing and executing cybersecurity solutions, as well as promoting awareness of cybersecurity risks and protocols among the broader population.”
And, Microsoft said it’s committing $20 billion to integrate cybersecurity by design and deliver “advanced security solutions.” It also announced that it will immediately make available $150 million in technical services to help federal, state and local governments with upgrading security protection, and will expand partnerships with community colleges and nonprofits for cybersecurity training.
Other attendees included Amazon Web Services (AWS), Amazon’s cloud computing arm, and IBM. The former has said it will make its security awareness training available to the public and equip all AWS customers with hardware multi-factor authentication devices, while IBM said it will help to train more than 150,000 people in cybersecurity skills over the next five years.
Many have welcomed Big Tech’s commitments. David Carroll, managing director at Nominet Cyber, told TechCrunch that these latest initiatives set a “powerful precedent” and show “the gloves are well and truly off”. But some within the cybersecurity industry remain skeptical.
“So 500,000 open cybersecurity jobs and almost that same amount or more looking for jobs,” said Khalilah Scott, founder of TechSecChix, a foundation for supporting women in technology, in a tweet. “Make it make sense.”
BreezoMeter has been on a mission to make data on environmental health hazards accessible to as many people as possible. Through its air quality index (AQI) calculations, the Israel-based company can now identify the quality of air down to a few meters in dozens of countries. A partnership with Apple to include its data into the iOS Weather app along with its own popular apps delivers those metrics to hundreds of millions of users, and an API product allows companies to tap into its dataset for their own purposes.
Right on the heels of a $30 million Series C round a few weeks ago, the company is radially expanding its product from air quality into the real-time detection of wildfire perimeters with its new product, Wildfire Tracker.
The new product will take advantage of the company’s fusion of sensor data, satellite imagery, and local eyewitness reports to be able to identify the edges of wildfires in real-time. “People expect accurate wildfire information just as they expect accurate weather or humidity data,” Ran Korber, CEO and co-founder, said. “It has an immediate effect on their life.” He added further that BreezoMeter wants to “try to connect the dots between climate tech and human health.”
Fire danger zones will be indicated with polygonal boundaries marked in red, and as always, air quality data will be viewable in these zones and in surrounding areas.
BreezoMeter’s air quality maps can show the spread of wildfire pollution easily. Image Credits: BreezoMeter.
Korber emphasized that getting these perimeters accurate across dozens of countries was no easy feat. Sensors can be sparse, particularly in the forests where wildfires ignite. Meanwhile, satellite data that focuses on thermal imaging can be fooled. “We’re looking for abnormalities … many of the times you have these false positives,” Korber said. He gave an example of a large solar panel array which can look very hot with thermal sensors but obviously isn’t a fire.
The identified fire perimeters will be available for free to consumers on BreezoMeter’s air quality map website, and will shortly come to the company’s apps as well. Later this year, these perimeters will be available from the company’s APIs for commercial customers. Korber hopes the API endpoints will give companies like car manufacturers the ability to forewarn drivers that they are approaching a conflagration.
The new feature is just a continuation of BreezoMeter’s long-time expansion of its product. “When we started, it was just air quality … and only forecasting air pollution in Israel,” Korber said. “Almost every year since then, we expanded the product portfolio to new environmental hazards.” He pointed to the addition of pollen in 2018 and the increasingly global nature of the app.
Wildfire detection is an, ahem, hot area these days for VC investors. For example, Cornea is a startup focused on helping firefighters identify and mitigate blazes, while Perimeter wants to help identify boundaries of wildfires and give explicit evacuation instructions complete with maps. As Silicon Valley’s home state of California and much of the world increasingly become a tinderbox for fires, expect more investment and products to enter this area.
Wildfires are burning in countries all around the world. California is dealing with some of the worst wildfires in its history (a superlative that I use essentially every year now) with the Caldor fire and others blazing in the state’s north. Meanwhile, Greece and other Mediterranean nations have been fighting fires for weeks to bring a number of massive blazes under control.
With the climate increasingly warming, millions of homes in the United States alone sit in zones at high risk for wildfires. Insurance companies and governments are putting acute pressure on homeowners to invest more in defending their homes in what is typically dubbed “hardening”: ensuring that if fires do arrive, a home has the best chance to survive and not spread the disaster further.
SF-based Firemaps has a bold vision for rapidly scaling up and solving the problem of home hardening by making a complicated and time-consuming process as simple as possible.
The company, which was founded just a few months ago (in March), sends out a crew with a drone to survey a homeowner’s house and property if it is in a high-risk fire zone. Within 20 minutes, the team will have generated a high-resolution 3D model of the property down to the centimeter. From there, hardening options are identified and bids are sent out to trade contractors to perform the work on the company’s marketplace.
Once the drone scans a house, Firemaps can create a full CAD model of the structure and the nearby property. Image Credits: Firemaps.
While early, it’s already gotten traction. In addition to hundreds of homeowners who have signed up on its website and a few dozen that have been scanned, Andrew Chen of a16z has led a $5.5 million seed round into the business (the Form D places the round sometime around April). Uber CEO Dara Khosrowshahi and Addition’s Lee Fixel also participated.
Firemaps is led by Jahan Khanna, who co-founded it with Rob Moran and with his brother, who has a long-time background in civil engineering. Khanna was co-founder and CTO of early ridesharing startup Sidecar, where Moran joined as one of the company’s first employees. The trio spent cycles exploring how to work on climate problems while staying focused on helping people in the here and now. “We have crossed certain thresholds [with the climate] and we need to get this problem under control,” Khanna said. “We are one part of the solution.”
Over the past few years Khanna and his brother explored opening a solar farm or a solar-powered home in California. “What was wild, whenever we talked to someone, is they said you cannot build anything in California since it will burn down,” Khanna said. “What is kind of the endgame of this?” As they explored fire hardening, they realized that millions of homeowners needed faster and cheaper options, and they needed them sooner rather than later.
While there are dozens of options to harden a home to fire, some popular options include constructing an ember-free zone within a few feet of a home, often by placing gravel made of granite on the ground, as well as ensuring that attic vents, gutters and siding are fireproof and can withstand high temperatures. These options can vary widely in cost, although some local and state governments have created reimbursement programs to allow homeowners to recoup at least some of the expenses of these improvements.
A Firemaps house in 3D model form with typical hardening options and associated prices. Image Credits: Firemaps.
The company’s business model is simple: vetted contractors pay Firemaps to be listed as an option on its platform. Khanna believes that because its drone offers a comprehensive model of a home, contractors will be able to bid for contracts without doing their own site visits. “These contractors are getting these shovel-ready projects, and their acquisition costs are basically zero,” Khanna said.
Long-term, “our operating hypothesis is that building a platform and building these models of homes is inherently valuable,” Khanna said. Right now, the company is launched in California, and the goal for the next year is to “get this model repeatable and scalable and that means doing hundreds of homes per week,” he said.