The specter of constant surveillance hangs over all of us in ways we don’t even fully understand, but it is also possible to turn the tools of the watchers against them. Forensic Architecture is exhibiting several long-term projects at the Museum of Art and Design in Miami that use the omnipresence of technology as a way to expose crimes and violence by oppressive states.
Over seven years Eyal Weizman and his team have performed dozens of investigations into instances of state-sponsored violence, from drone strikes to police brutality. Often these events are minimized at all levels by the state actors involved, denied or no-commented until the media cycle moves on. But sometimes technology provides ways to prove a crime was committed and occasionally even cause the perpetrator to admit it — hoisted by their own electronic petard.
Sometimes the evidence comes from actual state-deployed kit, like body cameras or public records; other times it comes from private information co-opted by state authorities to track individuals, like digital metadata from messages and location services.
For instance, when Chicago police shot and killed Harith Augustus in 2018, the department released some footage of the incident, saying that it “speaks for itself.” But Forensic Architecture’s close inspection of the body cam footage, cross-referenced with other materials, makes it obvious that the police violated numerous rules (including in the operation of the body cams) in their interaction with him, escalating the situation and ultimately killing a man who by all indications — except the official account — was attempting to comply. The work also brought to light additional footage that was either mistakenly or deliberately left out of a FOIA release.
In another situation, a trio of Turkish migrants seeking asylum in Greece were shown, by analysis of their WhatsApp messages, images and location and time stamps, to have entered Greece and been detained by Greek authorities before being “pushed back” by unidentified masked escorts, having been afforded no legal recourse to asylum processes or the like. This is one example of several recently that appear to be private actors working in concert with the state to deprive people of their rights.
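The kind of metadata cross-referencing described above can be sketched in a few lines. This is my illustrative sketch, not Forensic Architecture's actual tooling, and the bounding box coordinates below are hypothetical; the point is simply that timestamped location fixes pulled from a phone can place a person inside a territory at a given moment.

```python
# Illustrative bounding box on the Greek side of the border region
# (hypothetical numbers, not the investigation's actual data)
GREECE_SIDE = {"lat": (40.8, 41.8), "lon": (26.0, 26.6)}

def inside(point, box):
    """True if a (lat, lon) fix falls within the box."""
    lat, lon = point
    return (box["lat"][0] <= lat <= box["lat"][1]
            and box["lon"][0] <= lon <= box["lon"][1])

def fixes_inside(track, box):
    """Timestamps at which a track of (timestamp, (lat, lon)) fixes
    lies inside the box, i.e. metadata placing the phone on that territory."""
    return [t for t, p in track if inside(p, box)]
```

Applied to something like image timestamps and shared live locations from a messaging app, a run of in-box fixes followed by fixes back across the border is exactly the "pushback" pattern investigators look to document.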
I spoke with Weizman before the opening of this exhibition in Miami, where some of the latest investigations are being shown off. (Shortly after our interview he would be denied entry to the U.S. to attend the opening, with a border agent explaining that this denial was algorithmically determined; we’ll come back to this.)
The original motive for creating Forensic Architecture, he explained, was to elicit testimony from those who had experienced state violence.
“We started using this technique when in 2013 we met a drone survivor, a German woman who had survived a drone strike in Pakistan that killed several relatives of hers,” Weizman explained. “She wanted to deliver testimony in a trial regarding the drone strike, but like many survivors her memory was affected by the trauma she had experienced. The memory of the event was scattered; it had lacunae and repetitions, as you often have with trauma. And her condition is like that of many who have to speak out in human rights work: The closer you get to the core of the testimony, the description of the event itself, the more it escapes you.”
The approach they took to help this woman, and later many others, jog their memories was something called “situated testimony.” Essentially it amounts to exposing the person to media from the experience, allowing them to “situate” themselves in that moment. This is not without its own risks.
“Of course you must have the appropriate trauma professionals present,” Weizman said. “We only bring people who are willing to participate and perform the experience of being again at the scene as it happened. Sometimes details that would not occur to someone to be important come out.”
A digital reconstruction of a drone strike’s explosion was recreated physically for another exhibition.
But it’s surprising how effective it can be, he explained. One case exposed American involvement hitherto undisclosed.
“We were researching a Cameroon special forces detention center, where torture and death in custody occurred, for Amnesty International,” he explained. “We asked detainees to describe to us simply what was outside the window. How many trees, or what else they could see.” Such testimony could help place their exact location and orientation in the building and lead to more evidence, such as cameras across the street facing that room.
“And sitting in a room based on a satellite image of the area, one told us: ‘yes, there were two trees, and one was over by the fence where the American soldiers were jogging.’ We said, ‘wait, what, can you repeat that?’ They had been interviewed many times and never mentioned American soldiers,” Weizman recalled. “When we heard there were American personnel, we found Facebook posts from service personnel who were there, and were able to force the transfer of prisoners there to another prison.”
Weizman noted that the organization only goes where help is requested, and does not pursue what might be called private injustices, as opposed to public.
“We require an invitation, to be invited into this by communities that experience state violence. We’re not a forensic agency, we’re a counter-forensic agency. We only investigate crimes by state authorities.”
In the latest of these investigations, being exhibited for the first time at MOAD, the team used virtual reality for the first time in their situated testimony work. While VR has proven to be somewhat less compelling than most would like on the entertainment front, it turns out to work quite well in this context.
“We worked with an Israeli whistleblower soldier regarding testimony of violence he committed against Palestinians,” Weizman said. “It has been denied by the Israeli prime minister and others, but we have been able to find Palestinian witnesses to that case, and put them in VR so we could cross reference them. We had victim and perpetrator testifying to the same crime in the same space, and their testimonies can be overlaid on each other.”
Dean Issacharoff — the soldier accused by Israel of giving false testimony — describes the moment he illegally beat a Palestinian civilian. (Caption and image courtesy of Forensic Architecture)
One thing about VR is that the sense of space is very real; if the environment is built accurately, things like sight-lines and positional audio can be extremely true to life. If someone says they saw the event occur here, but the state says it was here, and a camera this far away saw it at this angle… these incomplete accounts can be added together to form something more factual, and assembled into a virtual environment.
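As a toy illustration of how two partial accounts can pin down a location (my sketch, not Forensic Architecture's software): if two witnesses at known positions each report a bearing toward the event, the intersection of those sight-lines yields a candidate position that can then be checked against camera angles and the reconstructed environment.

```python
import math

def intersect_bearings(p1, b1, p2, b2):
    """Find where two sight-lines cross.
    p1, p2: (x, y) observer positions in metres.
    b1, b2: bearings in degrees (0 = north, clockwise)."""
    # Unit direction vectors from the compass bearings
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t (2x2 linear system)
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel sight-lines: the accounts don't pin down a point
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

Two witnesses ten metres apart, one looking northeast and one northwest, jointly locate an event neither could place alone; real reconstructions layer many such constraints inside the 3D model.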
“That project is the first use of VR interviews we have done — it’s still in a very experimental stage. But it didn’t involve fatalities, so the level of trauma was a bit more controlled,” Weizman explained. “We have learned that the level and precision we can arrive at in reconstructing an incident is unparalleled. It’s almost tactile; you can walk through the space, you can see every object: guns, cars, civilians. And you can populate it until the witness is satisfied that this is what they experienced. I think this is a first, definitely in forensic terms, as far as uses of VR.”
A photogrammetry-based reconstruction of the area of Hebron where the incident took place.
In video of the situated testimony, you can see witnesses describing locations more exactly than they likely or even possibly could have without the virtual reconstruction. “I stood with the men at exactly that point,” says one, gesturing toward an object he recognized, then pointing upwards: “There were soldiers on the roof of this building, where the writing is.”
Of course it is not the digital recreation itself that forces the hand of those involved, but the incontrovertible facts it exposes. No one would ever have known that the U.S. had a presence at that detainment facility, and the country had no reason to say it did. The testimony wouldn’t even have been enough, except that it put the investigators onto a line of inquiry that produced data. And in the case of the Israeli whistleblower, the situated testimony defies official accounts that the organization he represented had lied about the incident.
Sophie Landres, MOAD’s curator of Public Programs and Education, was eager to add that the museum is not hosting this exhibit as a way to highlight how wonderful technology is. It’s important to put the technology and its uses in context rather than try to dazzle people with its capabilities. You may find yourself playing into someone else’s agenda that way.
“For museum audiences, this might be one of their first encounters with VR deployed in this way. The companies that manufacture these technologies know that people will have their first experiences with this tech in a cultural or entertainment context, and they’re looking for us to put a friendly face on these technologies that have been created to enable war and surveillance capitalism,” she told me. “But we’re not interested in having our museum be a showcase for product placement without having a serious conversation about it. It’s a place where artists embrace new technologies, but also where they can turn them against existing power structures.”
Boots on backs mean this is not an advertisement for VR headsets or 3D modeling tools.
She cited a tongue-in-cheek definition of “mixed reality” referring to both digital crossover into the real world and the deliberate obfuscation of the truth at a greater scale.
“On the one hand you have mixing the digital world and the real, and on the other you have the mixed reality of the media environment, where there’s no agreement on reality and all these misinformation campaigns. What’s important about Forensic Architecture is they’re not just presenting evidence of the facts, but also the process used to arrive at these truth claims, and that’s extremely important.”
In openly presenting the means as well as the ends, Weizman and his team avoid succumbing to what he calls the “dark epistemology” of the present post-truth era.
As mentioned earlier, Weizman was denied entry to the U.S. for reasons unknown, but possibly related to the network of politically active people with whom he has associated for the sake of his work. Disturbingly, his wife and children were also stopped while entering the States a day before him and separated at the airport for questioning.
In a statement issued publicly afterwards, Weizman dissected the event.
In my interview the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled… I was asked to supply the Embassy with additional information, including fifteen years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.
This much we know: we are being electronically monitored for a set of connections – the network of associations, people, places, calls, and transactions – that make up our lives. Such network analysis poses many problems, some of which are well known. Working in human rights means being in contact with vulnerable communities, activists and experts, and being entrusted with sensitive information. These networks are the lifeline of any investigative work. I am alarmed that relations among our colleagues, stakeholders, and staff are being targeted by the US government as security threats.
This incident exemplifies – albeit in a far less intense manner and at a much less drastic scale – critical aspects of the “arbitrary logic of the border” that our exhibition seeks to expose. The racialized violations of the rights of migrants at the US southern border are of course much more serious and brutal than the procedural difficulties a UK national may experience, and these migrants have very limited avenues for accountability when contesting the violence of the US border.
The works being exhibited, he said, “seek to demonstrate that we can invert the forensic gaze and turn it against the actors — police, militaries, secret services, border agencies — that usually seek to monopolize information. But in employing the counter-forensic gaze one is also exposed to higher-level monitoring by the very state agencies investigated.”
Forensic Architecture’s investigations are ongoing; you can keep up with them at the organization’s website. And if you’re in Miami, drop by MOAD to see some of the work firsthand.
Max Q is a new weekly newsletter all about space. Sign up here to receive it in your inbox every Sunday.
Busy week for SpaceX — across funding, space tourism and next-gen spacecraft. There’s also a space station resupply mission coming up that it’s getting ready for, and signs (this time literally) continue to suggest that its first human spaceflight mission is imminent.
Katherine Johnson, a mathematician who defied prejudice in the ’50s and ’60s to help NASA send the first men to the moon, has died at the age of 101. She was a pioneer, a role model and an instrumental part of America’s space program, and she will be dearly missed.
SpaceX is serious about iteration — its strategy of building (and failing — and learning from its failures) fast is in full effect for its Starship development program. Elon Musk said on Twitter this week that the plan is to build them as frequently as possible with significant improvements between each successive spacecraft, with the aim of going through two or three iterations before flying an orbital mission later this year.
The still-private SpaceX is going back to investors for more cash, likely to help it with the expensive proposition of building a bunch of Starships in rapid succession essentially by hand. It’s said to be seeking $250 million in a round that could close as early as mid-March, according to a CNBC report.
One side of SpaceX’s business that isn’t necessarily as obvious as its commercial cargo launch services is the space tourism angle. This week, the company announced a partnership with Space Adventures, the same firm that has arranged paid trips to the Space Station for private citizens aboard Soyuz capsules. The first of these trips, which won’t go to the ISS but instead will fly up to a higher orbit, take a trip around Earth and come back, is set to take off as early as next year. And if you have to ask about the price, you probably can’t afford it.
The ISS gets a new platform next month that can support attached payloads — up to a dozen — from research partners, including academic institutions and private companies. It’ll go up aboard SpaceX’s next resupply mission for the station, which is currently targeting liftoff on March 2. Also, Adidas is sending up a machine that makes its BOOST shoe soles, just to see how it works in space.
Japan is sending a mission to Phobos and Deimos to study the two moons of Mars, using a probe that will orbit the Red Planet’s natural satellites loaded with sensors. It’ll also carry a small lander, that will itself deploy an even smaller rover, which will study the surface of Phobos directly. If all goes to plan, it’ll collect a sample and bring that back to Earth for further study here.
It turns out that SpaceX, not Snap, may be the most important young technology company for developing the Los Angeles startup ecosystem. Jon Shieber documents how SpaceX alumni have gone forth and built a number of companies in the area that have gone on to raise big cash, as well as very young startups that are off to a promising start. Extra Crunch subscription required.
Yes, LA has a bustling space tech ecosystem. But communications satellite startup Kepler calls Canada home, and it recently made the interesting decision to build its small satellites in-house — in its own facility in downtown Toronto. Founder and CEO Mina Mitry tells me why that’s the best choice for his company. Extra Crunch subscription required.
Financial services startups raised less money in 2019 than they did in 2018 as VC firms looked to back late stage firms and focused on developing markets, a new report has revealed.
According to research firm CB Insights’ annual report published this week, fintech startups across the world raised $33.9 billion* in total last year across 1,912 deals*, down from the $40.8 billion they raised across 2,049 deals the year before.
It’s a comprehensive report, which we recommend you read in full here (your email is required to access it), but below are some of the key takeaways.
Early-stage deals dropped to a 12-quarter low as deal share globally shifts to mid- and late-stages (CB Insights)
The fintech market globally today has 67 unicorns as of earlier this month (CB Insights)
2019 saw 83 mega-rounds totaling $17.2B, a record year in every market except Europe
*CB Insights’ report includes a $666 million financing round of Paytm. Some news outlets incorrectly reported this as a new raise, when in fact it was part of the $1 billion round the Indian startup had revealed weeks prior. We have adjusted the data accordingly.
As Samsung (re)unveiled its clamshell folding phone last week, I kept seeing the same question pop up amongst my social circles: why?
I was wondering the same thing myself, to be honest. I’m not sure even Samsung knows; they’d win me over by the end, but only somewhat. The halfway-folded, laptop-style “Flex Mode” allows you to place the phone on a table for hands-free video calling. That’s pretty neat, I guess. But… is that it?
The best answer to “why?” I’ve come up with so far isn’t a very satisfying one: Because they can (maybe). And because they sort of need to do something.
Let’s time-travel back to the early 2000s. Phones were weird, varied and no manufacturers really knew what was going to work. We had basic flip phones and Nokia’s indestructible bricks, but we also had phones that swiveled, slid and included chunky physical keyboards that seemed absolutely crucial. The Sidekick! LG Chocolate! BlackBerry Pearl! Most were pretty bad by today’s standards, but it was at least easy to tell one model from the next.
(Photo by Kim Kulish/Corbis via Getty Images)
Then came the iPhone in 2007: a rectangular glass slab defined less by physical buttons and switches and more by the software that powered it. The device itself, a silhouette. There was initial hesitation about this formula; the first Android phones shipped with swiveling keyboards, trackballs and various sliding pads. As iPhone sales grew, everyone else’s buttons, sliders and keyboards were boiled away as designers emulated the iPhone’s form factor. The best answer, it seemed, was a simple one.
Twelve years later, everything has become the same. Phones have become… boring. When everyone is trying to build a better rectangle, the battle becomes one of hardware specs. Which one has the fastest CPU? The best camera?
TechCrunch Sessions: Robotics + AI brings together a wide group of the ecosystem’s leading minds on March 3 at UC Berkeley. More than 1,000 attendees are expected from all facets of the robotics and artificial intelligence space: investors, students, engineers, C-levels, technologists, and researchers. We’ve compiled a small list below of highlights of attendees’ companies and job titles at this year’s event.
Did you know that TechCrunch provides a white-glove networking app at all our events called CrunchMatch? You can connect and match with people who meet your specific requirements, message them, and connect right at the conference. How cool is that!?
Want to get in on networking with this caliber of people? Book your $345 General Admission ticket today and save $50 before prices go up at the door. But no one likes going to events alone. Why not bring the whole team? Groups of four or more save 15% on tickets when you book here.
The coronavirus outbreak could result in a drop of at least 3.4%, and as much as 8.5% in the worst case, in the volume of PCs that will ship globally this year, research firm Canalys reported Thursday evening in its revised projections to clients.
PC shipments will be down between 10.1% and 20.6% in Q1 2020, the firm estimated. The impact will remain visible in Q2, when shipments are expected to drop between 8.9% (best-case scenario, per Canalys) and 23.4% (worst-case scenario), it said.
In the best case scenario, the outbreak would mean 382 million units will ship in 2020, down 3.4% from 396 million last year.
The worst case makes a deeper dent, stating that about 362 million units will ship this year, down 8.5% from last year.
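A quick sanity check confirms the unit figures and percentages are mutually consistent (the sub-unit gaps come from Canalys rounding its internal, unrounded data):

```python
BASELINE_2019 = 396  # million units shipped in 2019

best_2020 = BASELINE_2019 * (1 - 0.034)   # firm's best-case decline of 3.4%
worst_2020 = BASELINE_2019 * (1 - 0.085)  # firm's worst-case decline of 8.5%

# Both land within a million units of the reported 382M and 362M figures
assert abs(best_2020 - 382) < 1
assert abs(worst_2020 - 362) < 1
```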
“In the best-case scenario, production levels are expected to revert to full capacity by April 2020, hence the biggest hit will be to sell-in shipments in the first two quarters, with the market recovering in Q3 and Q4,” the firm said.
“Thus, worldwide PC market shipments are expected to decline 3.4% year on year in 2020, with Q1 2020 down by 10% and Q2 2020 by 9%. PC market supply will normalize by Q3 2020. On a yearly basis, Canalys expects the worldwide PC market will slowly begin its recovery starting in 2021.”
The worst case scenario assumes that production levels will not return to their full capacity by June 2020. “Under the assumptions of this scenario, production and demand levels in China will take even longer to recover and Q2 will suffer a decline on a par with Q1 as a consequence. It will be as late as Q4 2020 until we see a market recovery.”
In either scenario, China, one of the world’s largest PC markets, will be the most affected. In the worst case, “the Chinese market will suffer heavily in 2020 under this scenario, with a 12% year-on-year decline over 2019, and subsequent stabilization taking even longer, with 2021 forecast shipments lagging 6 million behind the best-case scenario. The expected CAGR between 2021 and 2024 in China is 6.3%,” Canalys said.
China, the global hub for production and supply chain, moved to contain the impact of coronavirus by first extending the official Lunar New Year holidays, which was followed by stringent travel restrictions to keep citizens safe. “This resulted in a significant drop in offline retail traffic and a dramatic fall in consumer purchases,” Canalys analysts said.
The outbreak has also resulted in supply shortages of components such as PCBs and memory in China and other markets. “Likewise, channel partners have received notifications from key PC vendors over the last two weeks that their PC shipments and replacement parts can be expected to arrive in up to 14 weeks – over three times the usual delivery time – depending on where partners are located,” the firm said.
“Technology vendors and channel partners in the Asia Pacific region face the unexpected challenge of coping with the sudden outbreak of COVID-19 (coronavirus). The crisis was largely unforeseen, even in mid-January. Most leaders this year were anticipating disruption from political instability and natural disasters, not an epidemic,” wrote Sharon Hiu, an analyst at Canalys in a separate report.
The outbreak has impacted several more industries, including smartphones, automobiles, television, smart speakers, and video game consoles.
Foxconn, a key manufacturer for Apple, said on Thursday that its 2020 revenue will be impacted by Wuhan coronavirus. The firm said its factories in India, Vietnam, and Mexico are fully loaded and it is planning to expand overseas.
Earlier this month, Apple said it does not expect to meet its revenue guidance for the March quarter due to constrained iPhone supply and low demand caused by store closures in China.
The US giant is expected to miss its schedule for mass producing a widely rumored affordable iPhone, while inventories for existing models could remain low until April or longer, Nikkei Asian Review reported on Wednesday.
“The Mandalorian” was a pretty good show. On that most people seem to agree. But while a successful live-action Star Wars TV series is important in its own right, the way this particular show was made represents a far greater change, perhaps the most important since the green screen. The cutting edge tech (literally) behind “The Mandalorian” creates a new standard and paradigm for media — and the audience will be none the wiser.
What is this magical new technology? It’s an evolution of a technique that’s been in use for nearly a century in one form or another: displaying a live image behind the actors. But the advance is not in the idea but the execution: a confluence of technologies that redefines “virtual production” and will empower a new generation of creators.
As detailed in an extensive report in American Cinematographer Magazine (I’ve been chasing this story for some time but suspected this venerable trade publication would get the drop on me), the production process of “The Mandalorian” is completely unlike any before, and it’s hard to imagine any major film production not using the technology going forward.
“So what the hell is it?” I hear you asking.
Meet “The Volume.”
Formally called Stagecraft, it’s 20 feet tall, 270 degrees around, and 75 feet across — the largest and most sophisticated virtual filmmaking environment yet made. ILM just today publicly released a behind-the-scenes video of the system in use as well as a number of new details about it.
In filmmaking terms, a “volume” generally refers to a space where motion capture and compositing take place. Some volumes are big and built into sets, as you might have seen in behind-the-scenes footage of Marvel or Star Wars movies. Some are smaller, plainer affairs where actors whose motions will drive CG characters play out their roles.
But they generally have one thing in common: They’re static. Giant, bright green, blank expanses.
Does that look like fun to shoot in?
One of the most difficult things for an actor in modern filmmaking is getting into character while surrounded by green walls, foam blocks indicating obstacles to be painted in later, and people with mocap dots on their face and suits with ping-pong balls attached. Not to mention everything has green reflections that need to be lit or colored out.
Advances some time ago (think prequels-era Star Wars) enabled cameras to display a rough pre-visualization of what the final film would look like, instantly substituting CG backgrounds and characters onto monitors. Sure, that helps with composition and camera movement, but the world of the film isn’t there, the way it is with practical sets and on-site shoots.
Practical effects were a deliberate choice for “The Child” (AKA Baby Yoda) as well.
What’s more, because of the limitations in rendering CG content, the movements of the camera are often restricted to a dolly track or a few pre-selected shots for which the content (and lighting, as we’ll see) has been prepared.
This particular volume, called Stagecraft by ILM, the company that put it together, is not static. The background is a set of enormous LED screens such as you might have seen on stage at conferences and concerts. The Stagecraft volume is bigger than any of those — but more importantly, it’s smarter.
See, it’s not enough to just show an image behind the actors. Filmmakers have been doing that with projected backgrounds since the silent era! And that’s fine if you just want to have a fake view out of a studio window or fake a location behind a static shot. The problem arises when you want to do anything more fancy than that, like move the camera. Because when the camera moves, it immediately becomes clear that the background is a flat image.
The innovation in Stagecraft and other, smaller LED walls (the more general term for these backgrounds) is not only that the image shown is generated live in photorealistic 3D by powerful GPUs, but that the 3D scene is directly affected by the movements and settings of the camera. If the camera moves to the right, the image alters just as if it were a real scene.
This is remarkably hard to achieve. In order for it to work, the camera must send its real-time position and orientation to, essentially, a beast of a gaming PC, since this and other setups like it generally run on the Unreal engine. The engine must take that movement and render it exactly in the 3D environment, with attendant changes to perspective, lighting, distortion, depth of field and so on, all fast enough that those changes can be shown on the giant wall a fraction of a second later. After all, if the movement lagged by even a few frames, it would be noticeable to even the most naive viewer.
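To get a feel for why lag matters, here is a back-of-the-envelope sketch (my illustrative numbers, not ILM's specs): during a camera pan, any latency between camera motion and the wall's redraw shows up as an angular offset in the background.

```python
import math

FPS = 24
FRAME_MS = 1000 / FPS  # ~41.7 ms per frame at cinema frame rate

def background_error(pan_deg_per_s, latency_ms, wall_distance_m):
    """Angular lag of the background during a camera pan, and the
    resulting physical offset on a wall at the given distance."""
    lag_deg = pan_deg_per_s * latency_ms / 1000
    offset_m = wall_distance_m * math.tan(math.radians(lag_deg))
    return lag_deg, offset_m
```

At a modest 30-degree-per-second pan, two frames of lag puts the background 2.5 degrees (roughly a quarter of a metre on a wall six metres away) behind the camera, which is why the render loop has to complete well within a single frame.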
Yet fully half of the scenes in The Mandalorian were shot within Stagecraft, and my guess is no one had any idea. Interior, exterior, alien worlds or spaceship cockpits, all used this giant volume for one purpose or another.
There are innumerable technological advances that have contributed to this; The Mandalorian could not have been made as it was five years ago. The walls weren’t ready; the rendering tech wasn’t ready; the tracking wasn’t ready — nothing was ready. But it’s ready now.
It must be mentioned that Jon Favreau has been a driving force behind this filmmaking method for years now; films like the remake of The Lion King were in some ways tech tryouts for The Mandalorian. Combined with advances made by James Cameron in virtual filmmaking and of course the indefatigable Andy Serkis’s work in motion capture, this kind of production is only just now becoming practical due to a confluence of circumstances.
Of course, Stagecraft is probably also one of the most expensive and complex production environments ever used. But what it adds in technological overhead (and there’s a lot) it more than pays back in all kinds of benefits.
For one thing, it nearly eliminates on-location shooting, which is phenomenally expensive and time-consuming. Instead of going to Tunisia to get those wide-open desert shots, you can build a sandy set and put a photorealistic desert behind the actors. You can even combine these ideas for the best of both worlds: Send a team to scout locations in Tunisia and capture them in high-definition 3D to be used as a virtual background.
This last option produces an amazing secondary benefit: Reshoots are way easier. If you filmed at a bar in Santa Monica and changes to the dialogue mean you have to shoot the scene over again, no need to wrangle permits and painstakingly light the bar again. Instead, the first time you’re there, you carefully capture the whole scene with the exact lighting and props you had there the first time and use that as a virtual background for the reshoots.
The fact that many effects and backgrounds can be rendered ahead of time and shot in-camera rather than composited in later saves a lot of time and money. It also streamlines the creative process, with decisions able to be made on the spot by the filmmakers and actors, since the volume is reactive to their needs, not vice versa.
Lighting is another thing that is vastly simplified, in some ways at least, by something like Stagecraft. The bright LED wall can provide a ton of illumination, and because it actually represents the scene, that illumination is accurate to the needs of the scene. A red-lit space station interior, with the usual falling sparks and so on, casts red onto the actors’ faces and of course onto the highly reflective helmet of the Mandalorian himself. Yet the team can also tweak it, for instance sticking a bright white line high on the LED wall, out of sight of the camera, that creates a pleasing highlight on the helmet.
Naturally there are some trade-offs. At 20 feet tall, the volume is large, but not so large that wide shots can avoid capturing the top of it, above which you’d see cameras and a different type of LED (the ceiling is also a display, though not as powerful). This necessitates some rotoscoping and post-production work, or limits the angles and lenses one can shoot with — but that’s true of any soundstage or volume.
A shot like this would need a little massaging in post, obviously.
The size of the LEDs (that is, of the pixels themselves) also limits how close the camera can get to them, and of course you can’t zoom in on an object for closer inspection. If you’re not careful you’ll end up with moiré patterns, those stripes you often see in images of screens.
Stagecraft is not the first application of LED walls — they’ve been used for years at smaller scales — but it is certainly by far the most high-profile and The Mandalorian is the first real demonstration of what’s possible using this technology. And believe me, it’s not a one-off.
I’ve been told that nearly every production house is building or experimenting with LED walls of various sizes and types — the benefits are that obvious. TV productions can save money but look just as good. Movies can be shot on more flexible schedules. Actors who hate working in front of green screens may find this more palatable. And you better believe commercials are going to find a way to use these as well.
In short, a few years from now it’s going to be uncommon to find a production that doesn’t use an LED wall in some form or another. This is the new standard.
This is only a general overview of the technology that ILM, Disney, and their many partners and suppliers are working on. In a follow-up article I’ll be sharing more detailed technical information directly from the production team and technologists who created Stagecraft and its attendant systems.
We reported today on KidsGuard, a powerful piece of mobile spyware. Not only is the app secretly installed on thousands of Android phones without their owners’ consent, but its maker also left a server open and unprotected, exposing the data siphoned from victims’ infected devices to the internet.
This consumer-grade spyware is also known as “stalkerware.” It’s often used by parents to monitor their kids, but all too frequently it’s repurposed for spying on a spouse without their knowledge or consent. These spying apps are banned from Apple and Google’s app stores, but those bans have done little to curb the spread of these privacy-invading apps, which can read a victim’s messages, listen to their phone calls, track their real-time locations, and steal their contacts, photos, videos, and anything else on their phones.
Stalkerware has become so reviled by privacy experts, security researchers, and lawmakers that antivirus makers have promised to do more to better detect the spyware.
TechCrunch obtained a copy of the KidsGuard app. Using a burner Android phone with the microphones and cameras sealed, we tested the spyware’s capabilities. We also uploaded the app to online malware scanning service VirusTotal, which runs uploaded files against dozens of different antivirus makers. Only eight antivirus engines flagged the sample as malicious — including Kaspersky, a member of the Coalition Against Stalkerware, and F-Secure.
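One way to approximate that kind of check yourself (an illustrative assumption, not TechCrunch’s actual workflow) is to look a sample up on VirusTotal by its hash rather than uploading the file. The sketch below computes the SHA-256 digest VirusTotal uses to index samples; the endpoint string reflects VirusTotal’s public v3 file-lookup API, and the file path in the usage note is a placeholder.

```python
import hashlib

# VirusTotal's v3 API lets you fetch an existing report for a file by
# its SHA-256: GET /api/v3/files/{hash}, authenticated with an
# "x-apikey" header. The API key you'd need is your own.
VT_FILE_LOOKUP = "https://www.virustotal.com/api/v3/files/{}"


def sha256_of_bytes(data: bytes) -> str:
    """Compute the SHA-256 hex digest VirusTotal uses to index samples."""
    return hashlib.sha256(data).hexdigest()


def sha256_of_file(path: str) -> str:
    """Stream the file in chunks so large APKs need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

A GET request to `VT_FILE_LOOKUP.format(sha256_of_file("sample.apk"))` with a valid `x-apikey` header returns per-engine verdicts for a previously scanned sample, including the counts of engines that flagged it as malicious.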
Yoong Jien Chiam, a researcher at F-Secure’s Tactical Defense unit, analyzed the app and found it can obtain “GPS locations, account name, on-screen screenshots, keystrokes, and is also accessing photos, videos, and browser history.”
KidsGuard’s developer, ClevGuard, does not make it easy to uninstall the spyware. But this brief guide will help you to identify if the spyware is on your device and how to remove it.
Before you continue: some versions of Android may have slightly different menu options, and you take the following steps at your own risk. These steps only remove the spyware; they do not delete any data that was already uploaded to the cloud.
If you have an Android device, go to Settings > Apps, then scroll down and see if “System Update Service” is listed. This is what ClevGuard calls the app to disguise it from the user. If you see it, it is likely that you are infected with the spyware.
Go to Settings > Security, then Device administrators then untick the “System Update Service” box, then hit Deactivate.
Now, go back to Settings > Security then scroll to Apps with usage access. Once here, tap on “System Update Service” then switch off the permit usage toggle.
Once that is done, go back to Settings > Sound & notification then go to Notification access. Now switch off the toggle for “System Update Service.”
Following those steps, you have effectively disabled the spyware. Now you are able to uninstall it. Go to Settings > Apps and scroll down to “System Update Service.” You should be able to hit Uninstall, but you may need to hit Force Stop first. Tap OK to uninstall the app. This may take a few minutes.
Now that you’ve rid your device of the spyware, you’ll need to restore a couple of security settings that were changed when your device was first infected. First, go back to Settings > Security and switch off the toggle for Unknown sources. Second, go to the Play Store > Play Protect. If you have the option, select Turn on. Once it’s on, you should check to ensure that it says “Looks good.”
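If you have a computer with Android’s `adb` debugging tool handy, you can also scan a connected device’s package list for suspiciously named apps before digging through menus. This is a minimal sketch under stated assumptions, not an official detection method: the keyword list merely mimics the “System Update Service” disguise described above, since stalkerware deliberately hides behind generic identifiers, and an empty result does not prove a device is clean.

```python
import subprocess

# Illustrative keywords only: they echo the "System Update Service"
# naming pattern described above. A real audit should compare the full
# package list against a trusted baseline.
SUSPICIOUS_KEYWORDS = ("systemupdate", "updateservice")


def flag_suspicious(pm_output: str) -> list:
    """Return package IDs whose names mimic system-update services.

    `pm_output` is the text produced by `adb shell pm list packages`,
    one `package:<id>` line per installed app.
    """
    flagged = []
    for line in pm_output.splitlines():
        line = line.strip()
        if not line.startswith("package:"):
            continue
        pkg = line[len("package:"):]
        # Normalize so e.g. com.example.system_update_service matches.
        normalized = pkg.lower().replace(".", "").replace("_", "")
        if any(word in normalized for word in SUSPICIOUS_KEYWORDS):
            flagged.append(pkg)
    return flagged


def list_packages() -> str:
    """Ask a USB-connected device for its package list via adb."""
    return subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
```

Calling `flag_suspicious(list_packages())` yields candidates worth inspecting in the Settings menus described above.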
Customer engagement platform Leanplum today announced that it has raised a $27 million extension to its $47 million 2017 Series D round. This additional funding was led by previous investors Norwest Venture Partners and Shasta Ventures. Kleiner Perkins, Canaan and Launchub also participated in the round, which the company says it will use to bolster its product development and go-to-market efforts. With this, Leanplum has now raised a total of $125 million.
Maybe just as importantly, Leanplum also announced a major shakeup of its executive ranks. The company appointed George Garrick as president and CEO, and Sheri Huston as chief financial officer. Co-founder and former CEO Momchil Kyurkchiev will step into the chief product officer role.
Garrick brings a wealth of experience with him, having been the CEO of companies like Flycast, Placeware, Wine.com and Tapjoy. Huston, too, comes into the role with a lot of industry experience as the former CFO at Comscore and LiquiBox. The company is also adding Dynamic Signal founder Russ Fradin to its board of directors.
The company describes the changes in its executive ranks as a ‘transition.’
“Many if not most startups at some point in their growth realize that a management transition makes sense as the requirements for the CEO evolve from starting and proving a company, to running and scaling it,” Garrick told us in a statement. “Leanplum’s board and founders agreed that such a transition would be appropriate as Leanplum accelerates its growth phase.”
This was echoed by Kyurkchiev: “George is the right leader for Leanplum. His strong management experience with companies at our stage and in our domain will be essential for Leanplum as we continue to drive growth and expand globally.”
Leanplum says about 2 billion people used apps and websites that use its services in 2019.
As for the new funding, the company says it was simply easier to extend its Series D, which has the same investors as the original round. “The board felt it was easier and more appropriate to just extend the D round rather than move into the next letter. Also, we wanted to minimize ‘letter creep,’” Garrick said.
Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.
It says the update to its T&Cs is the first major revision since 2012 — with Google saying it wanted to ensure the policy reflects its current products and applicable laws.
“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.
Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.
Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.
However, Google disputes that there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.
We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).
“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”
Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.
“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”
“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.
Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift Burns suggested that will largely depend on Google.
So — in other words — Brexit means, er, trust Google to look after your data.
“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.
“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”
Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.
The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.
So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.
It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)
Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.
Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.
Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future…
We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which raises the question: when did the UK suddenly become the 51st American state?
Returning to the wider T&Cs revision, Google said it’s making the changes in a response to litigation in the European Union targeted at its terms.
This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.
In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.
Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.
Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.
Though it could be using all that personal stuff to help it build new products it can serve ads alongside.
Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings an opt out does exist.
The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.
Before it was worth $7.6 billion, the original idea for Robinhood was a stock trading social network. At my kitchen table in San Francisco in 2013, the founders envisioned an app for sharing hot tips to a feed complete with a leaderboard of whose predictions were most accurate. Once they had SEC approval, they pivoted towards the real money maker: letting people buy and sell stocks in the app, and pay to borrow cash to do so.
Now, seven years later, Robinhood is subtly taking the first steps back toward its start. Today it’s launching Profiles. For now, they let users see analytics about their portfolio, like how concentrated they are in stocks vs. options vs. cryptocurrency, as well as across different business sectors. Complete with usernames and a photo, Profiles let you follow self-made or Robinhood-provided lists of stocks and other assets.
Profiles could give Robinhood’s customers the confidence to trade more, and create a sense of lock-in that stops them from straying to other brokerages that have dropped their per-trade fees to zero to match the startup, like Charles Schwab, Ameritrade and E-Trade, which was acquired by Morgan Stanley today for $13 billion, as reported by the Wall Street Journal.
The Profile features certainly sound helpful. They could reveal that your portfolio is too centered around Tech, Media, and Telecom stocks, or that you’re ignoring cryptocurrency or corporations from your home state. Lists also make it easier to track specific business verticals, save stocks to buy when you have the cash, or set some aside for deeper research. Robinhood pulls info from FactSet, Morningstar, and other trusted sources to figure out which stocks and ETFs go into sector lists, or you can make and name your own. Profiles and lists begin rolling out to all users next week.
But what’s most interesting is how Profiles lay the foundation for Robinhood as a social network. It’s easy to imagine letting users follow other accounts or lists they create. The original Robinhood app let users make predictions like “17% increase in Facebook share price over the next 11 weeks,” with comments to explain why. It showed users’ prediction accuracy, their average holding time for assets, a point score for smart foresight, and community BUY or SELL ratings on stocks.
If Robinhood rebuilt some of these features, it might lessen the need for an expensive financial advisor or having enough cash to qualify for one with a different brokerage. Robinhood could let you crowdsource advice. “We understand the connotation of taking something from the rich and giving it to the poor. Robinhood is liberating information that’s locked up with professionals and giving it to the people” Robinhood co-founder and co-CEO Vlad Tenev told me back in 2013.
Robinhood would certainly need to be careful about scammy tips going viral. Improper safeguards could lead to pump and dump schemes where those late to buy in get screwed when prices snap back to reality.
But embracing social could leverage some of its strongest assets: the youthfulness of its userbase and the depth of connection to its users. The median age of a Robinhood customer is 30 and half say they’re first time investors. Being able to turn to friends or experts within the app might convince them to pull the trigger on trades.
Most online brokerages are somewhat undifferentiated beyond differences in pricing while their clunky, unstylized products don’t generate the same brand affinity as people have for Robinhood. Unsatisfied users could bail for a competitor at any time. Robinhood’s users are accustomed to social networking and the way it locks in users since they don’t want to abandon their community.
When I asked Robinhood Profiles’ product manager Shanthi Shanmugam directly about whether this was the start of more social trading features, they suspiciously dodged the question, telling me “When thinking about how to reflect who you are as an investor, we looked at how other apps represent you and it felt natural to leverage a design that felt more like a profile. When helping people group their investment ideas, it was easy to envision this as a playlist you might find on your favorite music app.”
That’s far from a denial. Offering social validation for trading could help Robinhood earn more from its customers despite their small total account balances. Robinhood might have over 10 million accounts versus E-Trade’s 5.2 million and Morgan Stanley’s 3 million, but E-Trade’s average account size is $69,230 and Morgan Stanley’s is $900,000, while a survey found most of Robinhood’s hold $1,000 to $5,000.
That all means that Robinhood earns less on interest sitting in users’ accounts than the old incumbents. But Robinhood earns the majority of its money on selling order flow and through its subscription Robinhood Gold feature that lets users pay monthly so they can borrow cash to trade with. Profiles and lists, and then eventually more social features, could get Robinhood’s users trading more so there’s more order flow to sell and more reason for them to buy subscriptions.
“Democratizing access is about lowering fees, minimums and other barriers people face — like confidence. Profiles and lists make finance easier to understand and more familiar for people,” says Shanmugam. More social features built safely, more reassurance, more trading, more revenue. Robinhood has raised $910 million. But to outgun larger competitors like the newly assembled Morgan Stanley/E-Trade, which has matched its zero-fee pricing, Robinhood will have to win with product.
Hello and welcome back to our regular morning look at private companies, public markets and the gray space in between.
Today we’re living up to the introduction of this daily column by digging into the recently announced E-Trade sale and what its new price and recent financial performance can tell us about Robinhood, a startup competitor, and the unicorn’s valuation.
As always, when we’re comparing a fast-growing private company to a larger, more mature, slower-growing and profitable business, we’re working in broad strokes. But if we don’t take our contrasts too literally, we’ll be able to learn a thing or two.
After all, Robinhood is not only a richly valued unicorn, it’s also a leading player in the burgeoning fintech and financial-services startup space, a sector we recently learned has capital flowing in at nearly record rates. So what we can learn about the value of Robinhood comps should prove illustrative and important.
We’ll start with an overview of the E-Trade sale, dig into its 2019 results and then compare the resulting multiples (with reasonable amounts of caveating, of course) to what we know about Robinhood. This will be fun!
Like practically everyone else in the mobile world, HTC was left in the lurch by the cancellation of MWC on extremely short notice. While the company has moved much of its efforts away from smartphones in general, the Taiwanese manufacturer had come to use the show to showcase much of its VR work.
Along with the addition of a couple of key SKUs to the Cosmos lineup, HTC is also filling the MWC-shaped hole by showing off a concept headset. Project Proton is a far cry from the current bulky Vive headsets, more closely resembling a streamlined version of Magic Leap’s AR technology.
And while the device design certainly screams “concept,” HTC tells me that it does, indeed, have working prototypes of the technology in its labs and that “some version of this product is possible today.” Among other things, moving toward microdisplays would help drive down the size and weight requirements for such a device. At first, though, it would likely also drive up the price, given the relatively limited use of the technology in consumer devices versus smartphone screens.
These are the sorts of compromises the company wants the community to consider as it voices its opinions on the product. There are two versions of the product being floated — an all-in-one, with all of the processing happening on board, and an “all-in-two,” which finds it tethered to a device like a smartphone. Asked why the company wasn’t considering a 5G streaming approach, I was told that HTC is essentially erring on the side of caution when it comes to concerns about cellular radiation and opting not to introduce a product where a 5G radio is essentially strapped to the wearer’s head.
The technology is also an outgrowth of HTC’s increased interest in mixed reality. Today also saw the introduction of the Cosmos XR. The tech involves a faceplate sporting passthrough cameras that essentially beam in a view of real-world surroundings. Rather than the overlaid image you’d see with more traditional AR (if you can call it that), it’s a composite image that allows for more opaque graphical imagery.
Currently the technology is being targeted at developers, both for making future AR content and to allow them the ability to see real-world tools like their keyboard while creating VR content. From there, the company is hoping to see a development of both gaming and enterprise AR and XR — which could include things like virtual meetings for remote workers.
Cosmos XR will arrive as both a standalone headset and a modular faceplate for existing Cosmos headsets. More info will be available at GDC.
The European Data Protection Board (EDPB) has intervened to raise concerns about Google’s plan to scoop up the health and activity data of millions of Fitbit users — at a time when the company is under intense scrutiny over how extensively it tracks people online and for antitrust concerns.
Google confirmed its plan to acquire Fitbit last November, saying it would pay $7.35 per share for the wearable maker in an all-cash deal that valued Fitbit, and therefore the activity, health, sleep and location data it holds on its more than 28 million active users, at ~$2.1 billion.
Regulators are in the process of considering whether to allow the tech giant to gobble up all this data.
Google, meanwhile, is in the process of dialling up its designs on the health space.
In a statement issued after a plenary meeting this week the body that advises the European Commission on the application of EU data protection law highlights the privacy implications of the planned merger, writing: “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”
Just this month the Irish Data Protection Commission (DPC) opened a formal investigation into Google’s processing of people’s location data — finally acting on GDPR complaints filed by consumer rights groups as early as November 2018 which argue the tech giant uses deceptive tactics to manipulate users in order to keep tracking them for ad-targeting purposes.
We’ve reached out to the Irish DPC — which is the lead privacy regulator for Google in the EU — to ask if it shares the EDPB’s concerns.
The latter’s statement goes on to reiterate the importance for EU regulators to assess what it describes as the “longer-term implications for the protection of economic, data protection and consumer rights whenever a significant merger is proposed”.
It also says it intends to remain “vigilant in this and similar cases in the future”.
The EDPB includes a reminder that Google and Fitbit have obligations under Europe’s General Data Protection Regulation to conduct a “full assessment of the data protection requirements and privacy implications of the merger” — and do so in a transparent way, under the regulation’s principle of accountability.
“The EDPB urges the parties to mitigate the possible risks of the merger to the rights to privacy and data protection before notifying the merger to the European Commission,” it also writes.
We reached out to Google for comment but at the time of writing it had not provided a response nor responded to a question asking what commitments it will be making to Fitbit users regarding the privacy of their data.
Fitbit has previously claimed that users’ “health and wellness data will not be used for Google ads”.
However, big tech has a history of subsequently steamrollering founder claims that ‘nothing will change’. (See, for example: Facebook’s WhatsApp U-turn on data-linking.)
“The EDPB will consider the implications that this merger may have for the protection of personal data in the European Economic Area and stands ready to contribute its advice on the proposed merger to the Commission if so requested,” it adds.
We’ve also reached out to the European Commission’s competition unit for a response to the EDPB’s statement.
Hold that tweet — and add another one.
Twitter is adding a new feature for mobile users to make it easier to link dispersed ‘shower thoughts’ together — ‘and another thing’ style.
The feature lets you pull down as you’re composing a tweet to surface a ‘continue thread’ option, which appends the new tweet to a previous one as a thread.
Tapping on a three-dots menu brings up an interface of older tweets which you can link the new tweet to — to continue (or kick off) a thread.
The feature looks intended to encourage more threads (from 140 characters to 280 to infinity tweetstorms and beyond!).
It may also be intended to address the broken thread phenomenon which can still plague the information network service. Especially where users are discussing complex and/or nuanced topics. (And Twitter has said it wants to foster healthy conversations on its platform so…)
The shortcut offers an alternative for Twitter users to being organized enough to tweet a perfectly threaded series of thoughts in the first place (i.e. by using the ‘+’ option at the point of composing your tweetstorm).
It also does away with the need to go manually searching through your feed for the particular tweet you want to expand on and then hitting reply to add another.
No, it’s still not an edit button, per se. But, frankly, if you think Twitter is ever going to let you rewrite your existing tweets you should probably think longer before you hit ‘publish’ on your next one.
The ‘continue thread’ option could also be used as a de facto edit option — by letting users more easily append a correction to a preexisting tweet.
Now you can add a Tweet to one you already Tweeted, faster! pic.twitter.com/j3ktAN6t5o
— Twitter (@Twitter) February 19, 2020
Whether the feature will (generally) work as intended — to boost threads and reduce broken threads and make Twitter a less confusing place for newbs — remains to be seen.
Happily it looks like Twitter has thought about and closed off one potential misuse risk. We tested to see what would happen if you try to insert a new tweet into the middle of an existing tweetstorm — which would have had the potential to generate more confusion (i.e. if the thread logic got altered by the addition).
But instead of embedding the new tweet in the middle of the old thread it was added at the bottom as a supplement. So you just start a new thread at the bottom of your old thread.
Good job Jack.
TechCrunch’s Romain Dillet contributed to this report
Codecademy, the New York-based online interactive platform that offers coding classes in a wide variety of programming languages, is a little like background noise; it’s been operating reliably since founder Zach Sims created the company while still a Columbia University student in 2011. It’s a brand that people know and that millions have used, but because it has grown steadily, without headline-making funding rounds — or, conversely, newsworthy layoffs — the 90-person company doesn’t routinely attract a lot of press attention.
That’s fine with Sims, who we spoke with last week following the most recent bout of bad publicity for Lambda School, a younger rival that has raised $48 million from investors, compared with the $42.5 million that Codecademy has raised over time. Sims says his company is continuing to chug along nicely.
The question, increasingly, is whether that’s ‘nice’ enough for VCs. Indeed, Codecademy — like a lot of startups right now — is in the awkward position of being a smart, solid, steadily but not massively growing business — which raises questions about its next steps.
The last time we’d spoken with Sims, roughly two years ago, Codecademy — which struggled for years with how to produce meaningful revenue — had recently launched two premium products. One of these, Codecademy Pro, helps users who are willing to spend $40 per month (or $240 per year) to learn the fundamentals of coding, as well as develop a deeper knowledge in up to 10 areas, including machine learning and data analysis. Sims says this has taken off, though he declined to share specifics.
A second offering, Codecademy Pro Intensive, which was designed to immerse learners for six to 10 weeks in either website development, programming or data science, has since been dropped.
Who are the company’s paid users? Sims says they tend to fall into one of two buckets: those who are learning a discrete skill set, perhaps to build a website in a pinch, and those who are gainfully employed but looking to climb the ladder or switch jobs and who see Codecademy as a way to spend a couple of hours a week to develop the skills to get there. Roughly 60 percent are based in the U.S.; the rest are elsewhere, including in India and Brazil. (The need for coding skills “isn’t a U.S.-only phenomenon,” Sims notes.)
Sims suggests the payback on investment can be fairly quick, given Codecademy’s pricing. By way of comparison, some in-person coding schools charge upwards of $20,000 a year — a big enough expense that, in order to make themselves more accessible, they invite students to pay nothing upfront and instead collect a percentage of their salary once they find a job.
Naturally, since Codecademy largely lives online, occasional criticisms of its perceived shortcomings live there, too. One customer — a self-described computer science major — authored a thoughtful review in December, writing that “being a programmer is more than simply being able to memorize syntax.” While Codecademy has introduced “thousands to the fundamentals of computer science,” through “addictive bite-sized pieces that are easy to accomplish,” this person wrote that it falls short in helping cultivate a “coders’ mindset.”
Either way, enough people are finding value in Codecademy’s vast number of offerings that it recently reached an important milestone — it’s now cash-flow positive — having doubled its revenue last year.
Sims is understandably proud of this accomplishment, noting that “there are few [coding platforms] that are growing sustainably and profitably and generating cash that can be invested back into the business.”
Codecademy is enjoying the same tailwinds it has had from the start, too. Though skepticism has grown around coding schools more broadly, the ability to design, shape, correct, and secure software will only grow more valuable, and an affordable education in those skills remains an appealing proposition.
It’s a case the company is continuing to make to consumers and, we gather, to a growing number of enterprises that are starting to offer Codecademy-type classes to employees. Though Codecademy already sells classes in volume packs, Sims suggests that a big push in 2020 will involve tie-ups with companies that want to provide what it teaches as a perk.
Whether it intends to paint a picture for investors, too, is less clear. (Sims declined to answer when we asked about fundraising more broadly.)
Certainly, follow-on rounds are growing harder to land, as described in our piece last week about “portfolio bloat.” The reason: VCs have raised so much money in recent years that they’re funneling it into new startups faster than ever. (They need to find the Next Big Thing to return all that capital.)
That’s leaving a lot of more steadily growing companies to fend for themselves for now.
What the end result will be is an open question. Codecademy’s cash-flow positive status gives it more time to wait on an answer.
PocketPills, which bills itself as the sole online pharmacy operating in Canada, has raised $7.35 million in new financing as it expands across the country.
Through partnerships with insurers like Pacific Blue Cross, the company provides co-insurance reductions for prescriptions. “We have an option for you to come and join our platform just like any pharmacy,” says company co-founder and chief operating officer, Harj Samra.
Samra launched the company in 2018 with Raj Gulia, a fellow proprietor of pharmacies across Canada, and Abhinav Gupta, a serial entrepreneur and co-founder of RocketFuel. After RocketFuel’s public offering, Gupta was toying with several ideas for direct-to-consumer companies when he was approached by Gulia and Samra.
Together the three men launched PocketPills to bring the online pharmacy model to Canada as a way to save money for insurers.
The problem for insurers is that the use of generic drugs in Canada lags behind that of the U.S., says Gupta. “The difference is quite substantial. The U.S. is about 90% generic fill rate and in Canada that number is at 70%,” he says.
PocketPills covers everything that a regular Canadian pharmacy would outside of controlled substances and narcotics. The bulk of the company’s prescriptions to date are for medications for chronic conditions.
Now the company is looking to expand across the country, opening fulfillment locations in Nova Scotia and soon in Quebec.
To back that growth and continue its development, PocketPills turned to a large Canadian family office and the investment firm Waterbridge to finance its $7.35 million round.
“PocketPills is timed well for massive value creation in the Canadian health care industry through its technology innovations. It has captured a sweet spot at the intersection of cost (insurers and employers), convenience (patients) and care (chronic diseases),” said Manish Kheterpal, Managing Partner, WaterBridge Ventures, in a statement.
Teikametrics, a startup that helps retailers optimize their online ad spending, has raised $15 million in additional funding.
The company launched with the goal of helping Amazon sellers advertise more effectively. More recently, it launched a similar partnership with Walmart.
CEO Alasdair McLean-Foreman said that on both platforms, the startup’s Flywheel platform can improve the ad-buying process using retailer data about things like transactions, inventory and pricing.
McLean-Foreman praised Amazon for creating “an incredible closed loop” where “millions of consumers [are] meeting millions of suppliers across the long tail.” And of the other online platforms, he said Walmart is “the one that’s closest to parity.”
He added that by working with Teikametrics, retailers (whether they’re third-party sellers, or brands promoting products that Amazon and Walmart are selling themselves) can optimize their campaigns across both marketplaces, and eventually on other platforms as well.
McLean-Foreman added that the company will be launching products that go beyond advertising later this year. His vision is for Teikametrics to use that same data to create a retail “operating system” that optimizes every aspect of a retailer’s business, including inventory and pricing.
“It’s about creating [a] very simple solution to a very, very complicated problem that is much more dynamic and much more complicated than just the ads,” he said.
The Boston-headquartered startup raised a $10 million Series A in 2018. The new round was led by Jump Capital, with participation from Granite Point Capital, Jerry Hausman (an MIT econometrics professor who also serves as a scientific advisor) and Ed Baker (former head of growth at Facebook and Uber).
Teikametrics says it’s working with more than 3,000 brands, including Clarks, Razer, Power Practical, Zipline Ski and Mark Cuban’s Brands. It also recently hired former Amazon ad executive Srini Guddanti as its chief product officer.
Looking at the broader retail and advertising landscape, McLean-Foreman acknowledged, “AI is almost a buzzword,” but he argued, “We are actually AI-first. The product itself is automation, it is intelligent decision-making.”
He added, “Advertising is a huge lever to pull and a really good problem for AI to solve, but I’m super excited to apply those same AI components or solutions to an even bigger problem at the same time.”