In just a few weeks, homeschooling has gone from a rarity to a baseline in homes across the country.
Jonah Liss, a 16-year-old student at the International Academy of Bloomfield Hills in Michigan, was sent home as a precaution during the coronavirus outbreak.
While the transition has been manageable for Liss, who has used some of the extra time to create a service to help those impacted by COVID-19, he recognized that other students are hitting pain points: not everyone has access to the same technology outside of school, so they can’t complete assignments. The school, he says, isn’t giving tests because it has no way to prove students aren’t cheating. And learning doesn’t feel personalized.
“It can be difficult to learn in an environment where there is less structure, direct instruction and ability to ask as many questions as possible,” Liss said. His school is placing emphasis on Google Classroom, Hangouts, Zoom and Khan Academy — all currently free for schools that have been shut down.
Edtech companies are seeing a usage surge because they’re offering services for free or at discounted rates to schools that are scrambling to switch to remote learning. But when students return to campus, many of the hurdles to adopting education technology will persist.
And as edtech startups find their time in the spotlight, these emerging challenges must be addressed before companies can truly convert those free customers into paying ones.
Last week at Stanford, antitrust officials from the U.S. Department of Justice organized a day-long conference that engaged numerous venture capitalists in conversations about big tech. The DOJ wanted to hear from VCs about whether they believe there’s still an opportunity for startups to flourish alongside the likes of Facebook and Google and whether they can anticipate what — if anything — might disrupt the inexorable growth of these giants.
Most of the invited panelists acknowledged there is a problem, but they also said, fairly uniformly, that they doubted more regulation was the solution.
Some of the speakers dismissed outright the idea that today’s tech incumbents can’t be outmaneuvered. Sequoia’s Michael Moritz talked about various companies that ruled the world across different decades and later receded into the background, suggesting that we merely need to wait and see which startups will eventually displace today’s giants.
He added that if there’s a real threat lurking anywhere, it isn’t in an overly powerful Google, but rather American high schools that are, according to Moritz, a poor match for their Chinese counterparts. “We’re killing ourselves; we’re killing the future technologists… we’re slowly killing the potential for home-brewed invention.”
Renowned angel investor Ram Shriram similarly downplayed the DOJ’s concerns, saying specifically that he doesn’t buy the idea that “search” as a category can never again be disrupted, or that it is protected by network effects. He observed that Google itself disrupted numerous search companies when it emerged on the scene in 1998.
Somewhat cynically, we would note that those companies — Lycos, Yahoo, Excite — had a roughly four-year lead over Google at the time, and Google has been massively dominant for nearly all of those 22 years since (because of, yes, its network effects).
Instagram is changing its advertising rules to require that political campaigns’ sponsored posts from influencers use its Branded Content Ads tool, which adds a “Paid Partnership With” disclosure label. The change comes after the Bloomberg presidential campaign paid meme makers to post screenshots that showed him asking them to make him look cool.
Instagram provided this statement to TechCrunch:
“Branded content is different from advertising, but in either case we believe it’s important people know when they’re seeing paid content on our platforms. That’s why we have an Ad Library where anyone can see who paid for an ad and why we require creators to disclose any paid partnerships through our branded content tools. After hearing from multiple campaigns, we agree that there’s a place for branded content in political discussion on our platforms. We’re allowing US-based political candidates to work with creators to run this content, provided the political candidates are authorized and the creators disclose any paid partnerships through our branded content tools.”
Instagram explains to TechCrunch that branded content is different from advertising because Facebook doesn’t receive any payment and it can’t be targeted. If marketers or political campaigns pay to boost the reach of sponsored content, it’s then subject to Instagram’s ad policies and goes in its Ad Library for seven years.
But previously, Instagram banned political operations from running branded content because the policies that applied to it covered all monetization mediums on Instagram, including ad breaks and subscriptions that political entities are blocked from using. Facebook didn’t want to be seen as giving monetary contributions to campaigns, especially as the company tries to appear politically neutral.
Yet now Instagram is changing the rule, not just allowing but requiring political campaigns to use the Branded Content Ads tool when paying influencers to post sponsored content. That’s because Instagram and Facebook don’t get paid for these sponsorships. It’s now asking that all sponsorships, including the Bloomberg memes retroactively, be disclosed with a label using this tool. That would add a “Paid Partnership with Bloomberg 2020” label to posts and Stories that the campaign paid meme pages and other influencers to publish. The rule change takes effect in the US today.
Instagram was moved to make the change after Bloomberg DM memes flooded the site. The New York Times’ Taylor Lorenz reported that the Bloomberg campaign worked with Meme 2020, an organization led by Mick Purzycki, head of Jerry Media, the company behind the “FuckJerry” account, to recruit and pay the influencers. Their posts made it look like Bloomberg himself had Direct Messaged the creators asking them to post stuff that would make him relevant to a younger audience.
Part of the campaign’s initial success came because users weren’t fully sure whether the influencers’ posts were jokes or ads, even if they were disclosed with #ad or “yes this is really sponsored by @MikeBloomberg”. There’s already been a swift souring of public perception on the meme campaign, with some users calling it cringey and posting memes of Bernie Sanders, whose anti-corporate stance pits him opposite Bloomberg.
The change comes just two days after the FTC voted to review influencer marketing guidelines and decide if advertisers and platforms might be liable for penalties for failing to mandate disclosure.
At least the Democratic field of candidates is finally waking up to the power of memes to reach a demographic largely removed from cable television and rally speeches. The Trump campaign has used digital media to great effect, exploiting a lack of rules against misinformation in Facebook ads to make inaccurate claims and raise money. With all the baked-in media exposure Trump already enjoys as president, the Democratic challengers need all the impressions they can get.
Every once in a while, an organization implodes so fantastically that it’s hard in retrospect to understand why another outcome once seemed possible. With every passing day, SoftBank — which shook up the investing world with the largest investment fund ever pooled, then seemed to use its capital as a weapon — looks to become one such operation.
The very newest development centers on the departure of Michael Ronen, a former Goldman Sachs banker who joined SoftBank in 2017 and became one of five U.S. managing partners at SoftBank’s $100 billion Vision Fund, where he led the firm’s transportation investments, including in Getaround, GM Cruise, Nuro, and Park Jockey.
Ronen tells the Financial Times that he has been “negotiating the terms of my anticipated departure” in recent weeks. Meanwhile, sources tell the FT that his departure is tied directly to the failure of SoftBank to raise any outside investment for the company’s second Vision Fund.
The FT further reports that other top lieutenants may also be on their way out, including SoftBank vice chairman Ron Fisher, who has been a part of SoftBank and a close advisor to SoftBank CEO Masayoshi Son since 1995.
SoftBank is denying that Fisher is “going anywhere.” We’ve meanwhile reached out to Ronen for further information, as well as to the Vision Fund’s press relations office.
It was in mid-summer last year that the first hints of trouble began to surface publicly. Son himself began seeding doubt when he announced in July that the Japanese conglomerate’s second Vision Fund had reached $108 billion in capital commitments based on a series of memoranda of understanding.
It didn’t take long for industry observers to start wondering whether the money was real. When we asked SoftBank why it was counting unrealized gains as profits in its first fund, for example, or whether investors in its first fund would accept SoftBank’s plans to use proceeds from its first fund to invest capital in a second vehicle (mixing money from different funds is not kosher in the world of VC), two spokespersons declined to answer our specific questions. Instead, we were pointed to an online presentation by Son on SoftBank’s investor relations page that answered none of our queries.
Soon after, the WSJ reported that SoftBank planned to loan employees up to $20 billion so that they could buy stakes in its second fund. Again, the news raised eyebrows. Yet it was only when the Financial Times reported that some executives were being encouraged to borrow more than 10 times their base salary — and that some employees worried that opting out might hurt their career — that the degree to which SoftBank was struggling became clearer.
Even still, few could have anticipated the speed with which the crown jewel of SoftBank’s first Vision Fund — WeWork — would fall apart as an investment. Though the co-working giant was thought wildly overvalued by many in both the real estate and tech industries, it was difficult to imagine a scenario in which SoftBank — to rescue its more than $18 billion investment in WeWork — would pay so richly to get rid of its founding CEO, scuttle its IPO plans, then try to run the company itself.
As it happens, those who’ve worked with Son in the past seem least surprised by what’s happening now. Last fall, a former associate didn’t mince words when it came to Son, telling us, not for attribution, “If you are dumb enough to hand your wallet to him, he’s a genius at making money on his own terms for him and by extension, I guess, a small circle of shareholders and advisers. But if you [disagree with him in any way], you are chum.”
Another source described the first Vision Fund, which relied heavily on debt and promised its providers an annual coupon of 7%, as “akin to a check-kiting scheme, where you hope someone isn’t cashing that check at the bank before you’ve spent the money and earned more and can put it back.”
Son has “parasitized Japanese banks,” added this person. (In November, the Nikkei Asian Review reported that while SoftBank was in talks to raise billions of dollars more from Japanese banks, having lent so much money to SoftBank already, they were nervous about taking on more risk.)
Meanwhile, the first Vision Fund’s biggest backers, Saudi Arabia and Abu Dhabi, which represented $45 billion and $15 billion of its capital commitments, respectively, have become concerned about the perception of pouring any more money into SoftBank funds following “flops from the first Vision Fund,” reports the FT.
It’s a very different picture than the one drawn by Vision Fund investor Carolina Brochado, who we interviewed on stage in December, and who was asked whether WeWork and other challenges would change either the scope or the mandate of the Vision Fund in 2020.
At the time, just two months ago, she suggested it would not. “The mission of investing in great teams, in mission-driven companies that are changing the way people live, will not change . . . SoftBank and Masa himself are very long-term thinkers, and hopefully, the message that founders took away from WeWork and the way SoftBank behaved after the IPO didn’t go forward is that we really will work with founders for a long time, and we will hold stock in the public markets, because we believe that this is a 10-, 20-, 100-year vision.”
Brochado, who joined SoftBank a year ago from Atomico, added at the time: “[T]he Vision Fund is two years old. And people sometimes forget that. So I think there’s a lot of learnings. There is definitely going to be a way forward. And the mission will remain the same.”
And yet the mission may be too challenged in the short term to be a viable one. In addition to WeWork, SoftBank hasn’t seen the return it was expecting from Uber, whose market cap is currently $65 billion. (It invested in the company when it was still privately held at a $49 billion valuation, buying up a little more than 16 percent of the company’s shares.) SoftBank parted ways in December with the dog-walking company Wag, into which it had poured $300 million just two years earlier.
Oyo, a SoftBank-backed, India-based startup with ambitions to become the world’s largest hotel chain, is also part of a “bubble that will burst,” according to a former operations manager at the company who talked earlier this month with the New York Times.
Yet another problem for Son: his high-profile wager on Sprint, the nation’s fourth-largest wireless provider, which he desperately needs to merge with T-Mobile but which is stuck in a kind of limbo, sued by 13 state attorneys general and the District of Columbia over concerns that the merger would hurt competition and raise prices for users’ cell service.
In the meantime, layoffs at companies that raised huge amounts from the Vision Fund have become routine, including at Oyo, Rappi, Getaround, Zume, and Fair, to name just a handful.
All of this has led to a growing number of questions about the deal-making prowess of Son, who is the ultimate arbiter of every deal that SoftBank funds.
As another U.S. managing director, Jeff Housenbold, explained to us at a 2018 event we’d hosted, “Masa meets every single entrepreneur who we invest in, which is phenomenal because he’s brilliant . . . he has amazing pattern recognition. But what’s really amazing is, he’s fearless. He’ll sit with an entrepreneur and go, ‘I really love that concept. Have you thought about what if we remove barriers?’ Or, ‘What if capital wasn’t a restriction?’” Housenbold continued, “If Masa says, ‘Yes, I’m intrigued, move forward,’ then we go to our formal investment committee to do confirmatory due diligence, then we close the deal.”
Now, those questions about his processes look to grow louder with Ronen’s departure. In fact, they might become deafening if not for SoftBank’s 25% stake in Alibaba, whose market cap has reached $600 billion. It was Son’s discerning $20 million bet on the Chinese conglomerate that began earning him accolades as a visionary.
For now, at least, it’s looking like an outlier in a sea of other decisions that have put his reputation to the test.
Mental wellness unicorn Calm has more than two million subscribers to its mindfulness and meditation app and has raised over $140 million in funding to bring “mental fitness” practices into the daily lives of mainstream consumers. Anchored in a range of audio courses, the company has expanded into video and even book publishing.
Co-founders and co-CEOs Michael Acton Smith and Alex Tew previously founded gaming startup Mind Candy and news aggregator Pop Jam. For Calm, the duo drew from their experience marketing digital products to figure out a business model, content strategy and long list of celebrity collaborations.
In a recent conversation with Acton Smith, I dug into the company’s strategy and the case study it can provide to other entrepreneurs. Here is the transcript of our discussion, edited for length and clarity.
TechCrunch: How do you view this market of meditation and wellness content? When the internet is filled with free content related to well-being and you have competitors like Headspace, how do you differentiate Calm?
Michael Acton Smith: There’s a mistaken view that Calm is a meditation app. We did start as a meditation app, but we think of what we offer more as mental fitness: How can we help people better understand their own minds? The brain is incredibly complex and doesn’t come with an instruction manual.
It’s hard to put a positive spin on a terrible situation, but that didn’t stop Goldman Sachs CEO David Solomon earlier today. Asked during a session at the World Economic Forum in Davos about WeWork’s yanked IPO in September, Solomon suggested it was proof that the listing process works, even though the CFO of Goldman — one of the offering’s underwriters — disclosed last fall that the pulled deal cost the bank a whopping $80 million.
Reuters was on the scene, reporting that Solomon acknowledged the process was “not as pretty as everybody would like it to be” yet also eschewed responsibility, telling those gathered that the “banks were not valuing [WeWork]. Banks give you a model. You say to the company, ‘Well, if you can prove to us that the model actually does what it does, then it’s possible that the company is worth this in the public markets.’”
Investment banks had reportedly courted WeWork’s business by discussing a variety of figures that led cofounder Adam Neumann to overestimate how it might be received by public market shareholders. According to the New York Times, in 2018, JPMorgan was telling Neumann that it could find buyers to value the company at more than $60 billion, while Goldman Sachs said $90 billion was a possibility, and Morgan Stanley — which has been assigned as lead underwriter of many of the buzziest tech offerings over the last decade — reportedly posited that even more than $100 billion was possible.
Ultimately, the IPO was canceled several weeks after Neumann was asked to resign and WeWork’s biggest investor, SoftBank — which itself nearly tripled the company’s private market valuation across funding rounds — stepped in to ostensibly rescue the company (and its now $18.5 billion investment in it).
Solomon isn’t the only one defending some of the frustrating logic of IPO pricing in recent years. This editor sat down in November with Morgan Stanley’s head tech banker Michael Grimes, who has been called “Wall Street’s Silicon Valley whisperer” for landing a seemingly endless string of coveted deals for the bank.
Because Morgan Stanley pulled out of the process of underwriting WeWork’s IPO (reportedly after WeWork rejected its pitch to be the company’s lead underwriter), we talked with Grimes instead about Uber, whose offering last year Morgan Stanley did lead. We asked how Uber could reportedly have been told by investment bankers that its valuation might be as high as $120 billion in an IPO when, as we now know, public market shareholders deemed it worth far less. (Its current market cap is roughly half that amount, at $64 billion.)
Grimes said matter-of-factly that price estimates can routinely be all over the place, explaining that “if you look at how companies are valued, at any given point of time right now, public companies with growth prospects and margins that are not yet at their mature margin, I think you’ll find on average price targets by either analysts who work at banks or buy-side investors that can be 100%, 200% and 300% different from low to high.”
He called that a “typical spread.”
The reason, he said, had to do with each bank’s or analyst’s guess at “penetration.”
“Let’s say, what, 100 million people or so [worldwide] have been monthly active users of Uber, somewhere in that range,” said Grimes during our sit-down. “What percentage of the population is that? Less than 1% or something. Is that 1% going to be 2%, 3%, 6%, 10%, 20%? Half a percent, because people stop using it and turn instead to some flying [taxi]?
“So if you take all of those possible outcomes, you get huge variability in outcome. So it’s easy to say that everything should trade the same every day, but [look at what happened with Google]. You have some people saying maybe that is an outcome that can happen here for companies, or maybe it won’t. Maybe they’ll [hit their] saturation [point] or face new competitors.”
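Grimes’s penetration logic can be sketched as a toy model. Everything in it is an illustrative assumption (the per-user value and population figures are invented for the example, not drawn from the interview); the point is only that modest differences in assumed penetration translate directly into the multi-hundred-percent valuation spreads he describes.

```python
# Toy model of how penetration assumptions drive valuation spreads.
# All figures below are invented for illustration, not actual analyst inputs.

WORLD_POPULATION = 7.8e9   # rough 2020 world population
VALUE_PER_USER = 500       # hypothetical dollar value per monthly active user

def implied_valuation(penetration: float) -> float:
    """Implied valuation, assuming value scales with the share of the
    world population using the service."""
    return WORLD_POPULATION * penetration * VALUE_PER_USER

# Bear, base and bull penetration scenarios, in Grimes's "half a percent
# to a few percent" range.
scenarios = {"bear (0.5%)": 0.005, "base (1%)": 0.01, "bull (2%)": 0.02}
valuations = {name: implied_valuation(p) for name, p in scenarios.items()}

for name, v in valuations.items():
    print(f"{name}: ${v / 1e9:.0f}B")

# The low-to-high ratio across scenarios: a 4x penetration range
# yields a 4x valuation range, i.e. a 300% spread from low to high.
spread = max(valuations.values()) / min(valuations.values())
```

In this toy model the bear-to-bull range is 4x, a 300% gap from low to high, which is exactly the kind of distance Grimes calls “a typical spread.”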
Grimes then turned the tables on reporters and others in the industry who wonder how banks could get the numbers so wrong, with Uber but also with a lot of other companies. “It’s really easy to be a pundit and say, ‘It should be higher’ or ‘It should be lower,’” Grimes said. “But investors are making decisions about that every day.”
Besides, he added, “We think our job is to be realistically optimistic” about where things will land. “If tech stops changing everything and software stops eating the world, there probably would be less of an optimistic bias.”
Alphabet and Google CEO Sundar Pichai is the latest tech giant kingpin to make a public call for AI to be regulated, while simultaneously encouraging lawmakers toward a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.
In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.
Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.
It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.
“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).
“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”
For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)
Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.
Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.
The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.
Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)
The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.
It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)
Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.
The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.
While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.
In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.
For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.
“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.
The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.
Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.
You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.
But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”.
And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.
What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot be applied.
Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal binds there.)
At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.
But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.
Some far-sighted regulators have called for laws containing at least a moratorium on certain “dangerous” applications of AI, such as facial recognition technology or autonomous weapons like the drone-based system Google was previously working on.
And a ban would be far harder for platform giants to simply bend to their will.
So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.
— Jonathan Senchyne (@jsench) January 16, 2020
In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.
Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:
In a series of three interviews, I’m exploring the startup opportunities in both of these spaces in greater depth. First, Michael Dempsey, a partner at VC firm Compound who has blogged extensively about digital characters, avatars and animation, offers his perspective as an investor hunting for startup opportunities within these spaces.