Acorns, which helps millions of people invest their spare change in the stock market, has laid off between 50 and 70 people, TechCrunch has learned from multiple sources.
The Irvine, Calif.-based company would not confirm the total number of people laid off, but did confirm that there were cuts at the company as a result of broader business changes.
The news emerged days after the fintech company closed its Portland office, one of four offices it maintained. While Acorns offered Portland employees an opportunity to relocate to its Irvine headquarters, some roles were eliminated as part of the move, the company said.
Those laid off were largely members of Acorns’ support team, and the internal cuts are tied to an external partnership with TaskUs, which outsources customer care and support for other businesses. Acorns will bring on roughly 80 new TaskUs support roles in the next year, which the company said would grow its support team overall, just not its internal staff.
The internal Acorns support team will handle high-touch customer care situations via phone, while external roles will handle email support.
Beyond support roles, Acorns cut some people from various teams across the company.
Acorns has seen unprecedented growth as the coronavirus brings new users into its world of investing and saving money. The company recently hit a milestone of 7 million sign-ups, continuing a trend of trading apps benefiting from a down market.
At the same time, Acorns also launched a debit card that depends on users spending in order to make sense as a business product. Payment processing is a risky space to play in right now because consumer spending has nosedived due to shelter-in-place orders, and it could be a weak spot for the company at the moment. Earlier today, Brex laid off 62 staff members, just one week after raising $150 million in venture capital.
So why does a company like Acorns, which is seeing immense growth, need to do layoffs? Even if you’re winning right now, the pandemic and the potential for an extended recession are forcing businesses to reevaluate how they spend money. In Acorns’ case, it will have more headcount next year than it does right now. But dig a little deeper, and its choice to outsource roles and shut down an office means that growing right now can come at the cost of slimming down.
Investors in Acorns include PayPal, DST Global, Rakuten, Greycroft and Bain Capital.
In a statement posted to Facebook late Friday afternoon, Mark Zuckerberg offered up an explanation of why his company did not contextualize or remove posts from the accounts associated with President Donald Trump that appeared to incite violence against American citizens.
“We looked very closely at the post that discussed the protests in Minnesota to evaluate whether it violated our policies,” Zuckerberg wrote. “Our policy around incitement of violence allows discussion around state use of force, although I think today’s situation raises important questions about what potential limits of that discussion should be.”
Facebook’s position stands in sharp contrast to recent decisions made by Twitter, with the approval of its chief executive, Jack Dorsey, to screen a tweet from the President on Thursday night with a “public interest notice” indicating the tweet violated its rules against glorifying violence. The notice replaces the substance of what Trump wrote, meaning a user has to actively click through to view the offending tweet.
Critics excoriated Facebook and its CEO for the decision to take a hands-off approach to the dissemination of misinformation and potential incitements to violence published by accounts associated with the President and the White House. Some of the criticism has even come from the company’s own employees.
“I have to say I am finding the contortions we have to go through incredibly hard to stomach,” one employee, quoted by The Verge, wrote in a comment on Facebook’s internal message board. “All this points to a very high risk of a violent escalation and civil unrest in November and if we fail the test case here, history will not judge us kindly.”
Zuckerberg defended Facebook’s position saying that it would not take any action on the posts from the President because “we think people need to know if the government is planning to deploy force.”
Facebook’s chief executive also drew a sharp contrast between Facebook’s response to the controversy and that of Twitter, which has provided a fact check for one of the President’s tweets and hidden Thursday’s tweet behind a warning label for violating its policies on violence.
“Unlike Twitter, we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician,” wrote Zuckerberg.
Twitter explained its decision in a statement. “This Tweet violates our policies regarding the glorification of violence based on the historical context of the last line, its connection to violence, and the risk it could inspire similar actions today,” the company said.
We have placed a public interest notice on this Tweet from @realdonaldtrump. https://twitter.com/realDonaldTrump/status/1266231100780744704

Donald J. Trump (@realDonaldTrump), replying to @realDonaldTrump:

“….These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!”
“We’ve taken action in the interest of preventing others from being inspired to commit violent acts, but have kept the Tweet on Twitter because it is important that the public still be able to see the Tweet given its relevance to ongoing matters of public importance,” the Twitter statement continued.
Perhaps, as Zuckerberg suggests, Facebook will have an opportunity to answer the question of where the limits should lie in allowing discussion of state use of force. For now, the company’s response only raises more questions.
The U.S. has suffered from devastating wildfires over the last few years as global temperatures rise and weather patterns change, making the otherwise natural phenomenon especially unpredictable and severe. To help out, Stanford researchers have found a way to track and predict dry, at-risk areas using machine learning and satellite imagery.
Currently the way forests and scrublands are tested for susceptibility to wildfires is by manually collecting branches and foliage and testing their water content. It’s accurate and reliable, but obviously also quite labor intensive and difficult to scale.
Fortunately, other sources of data have recently become available. The European Space Agency’s Sentinel satellites and the NASA/USGS Landsat program have amassed a trove of imagery of the Earth’s surface that, when carefully analyzed, could provide a secondary source for assessing wildfire risk — and one no one has to risk getting splinters for.
This isn’t the first attempt to make this kind of observation from orbital imagery, but previous efforts relied heavily on visual measurements that are “extremely site-specific,” meaning the analysis method differs greatly depending on the location. No splinters, but still hard to scale. The advance leveraged by the Stanford team is the Sentinel satellites’ “synthetic aperture radar,” which can pierce the forest canopy and image the surface below.
“One of our big breakthroughs was to look at a newer set of satellites that are using much longer wavelengths, which allows the observations to be sensitive to water much deeper into the forest canopy and be directly representative of the fuel moisture content,” said senior author of the paper, Stanford ecohydrologist Alexandra Konings, in a news release.
The team fed this new imagery, collected regularly since 2016, to a machine learning model along with the manual measurements made by the U.S. Forest Service. This lets the model “learn” what particular features of the imagery correlate with the ground-truth measurements.
They then tested the resulting AI agent (the term is employed loosely) by having it make predictions based on old data for which they already knew the answers. It proved accurate, most of all in scrublands, one of the most common biomes of the American West and also one of the most susceptible to wildfires.
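That workflow — train a model on satellite-derived features paired with field-measured moisture, then validate on held-back historical data where the answers are known — can be sketched in a few lines. This is purely illustrative: the feature names, data and model below are synthetic stand-ins, not the Stanford team's actual architecture or dataset.

```python
# Illustrative sketch only: features and data here are synthetic
# stand-ins for radar/optical satellite measurements and field-measured
# fuel moisture; the team's real model and dataset differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Pretend per-site features, e.g. VV/VH radar backscatter plus two
# optical vegetation indices.
n_sites = 500
X = rng.normal(size=(n_sites, 4))
# Synthetic "ground truth" fuel moisture, driven mostly by the radar
# channels plus measurement noise.
y = 80 + 15 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=3, size=n_sites)

# Train on earlier measurements, hold back the rest to mimic
# predicting old data for which the answers are already known.
split = 400
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print(f"held-out R^2: {r2_score(y[split:], pred):.2f}")
```

The held-out score is the analogue of the validation step described above: if the model recovers moisture it never saw, the same pipeline can plausibly be trusted on fresh imagery.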
You can see the results of the project in this interactive map showing the model’s prediction of dryness at different periods all over the western part of the country. That’s not so much a tool for firefighters as a validation of the approach — but the same model, given up-to-date data, can make predictions about the upcoming wildfire season that could help authorities make more informed decisions about controlled burns, danger areas and safety warnings.
The researchers’ work was published in the journal Remote Sensing of Environment.
“Today we’re restructuring the company to better align our priorities with this new reality, while simultaneously accelerating our product vision. With that, I have some very sad news to share. 62 people will be leaving Brex today,” the post reads.
The cuts come as Brex’s core customer base, high-growth startups, struggles to stay afloat amid COVID-19. The trickle-down to Brex’s core business, which depends on its customers spending money, was thus expected.
Brex has already cut some customer credit limits to mitigate some of the exposure risk, The Information reported, and Dubugras confirmed. Customers say the credit limit cuts came without warning or notice.
Additionally, the company, which launched in Brazil and graduated from Y Combinator, recently raised $150 million.
When TechCrunch talked to Dubugras about the latest fundraise, the co-founder said the capital was offensive, rather than defensive.
“I’m glad this round came together, but if it hadn’t, we would’ve been fine,” he said last week. “The capital is so we can play offensive while everyone else plays defensive.”
In the blog post, the co-founders wrote to departing staffers:
“Please continue dreaming big and don’t lose the ambition that attracted you to Brex. Don’t let anything, not even a global pandemic, take that away from you. I wish we could give each one of you a hug, so instead I’ll end this message like I’d do it in Portuguese. Abraços, Pedro and Henrique.”
Those laid off will be provided with eight weeks of severance, their computer and equipment, and Brex will dedicate a part of its recruiting team to help find new opportunities for ex-staffers. Additionally, Brex is making adjustments to the equity cliff and has extended healthcare benefits through the end of 2020.
Brex has amassed $465 million in venture capital funding to date.
Sony has been dishing out details on the PlayStation 5 piece by piece, rather than dropping everything at one big mega event. First came word of the Holiday 2020 release window. Then came an overview of the specs — like that it’ll have a super-fast solid-state drive by default. Most recently, they showed off the controller. (The divvied-up approach makes sense, really; with the ongoing pandemic preventing events like E3 and GDC from happening… why wouldn’t Sony work on their own schedule and make every aspect its own mini-spectacle?)
The next glimpse they give, it seems, will be of the first games coming to the console.
This morning Sony announced that they’ll be hosting a live-streamed event on June 4th at 1pm Pacific. In a blog post about the event, Sony Interactive CEO Jim Ryan clarifies the focus:
We’ve shared technical specifications and shown you the new DualSense wireless controller. But what is a launch without games?
That’s why I’m excited to share that we will soon give you a first look at the games you’ll be playing after PlayStation 5 launches this holiday.
Ryan also notes that the event should last roughly an hour, but doesn’t suggest how many different games that’ll cover.
In a video that managed to pull in millions of views, Epic Games recently gave a first look at its upcoming Unreal Engine 5 running on pre-release PS5 hardware. Given that video’s success, I’d imagine that Sony is pretty dang eager to keep the early looks coming.
Will we finally see the console hardware itself? That’s still unclear. Seeing as they’ve pieced just about everything else out, though, I’d bet they’re saving that one for an event a bit closer to launch.
When this editor first met Jeremy Conrad, it was in 2014, at the 8,000-square-foot former fish factory that was home to Lemnos, a hardware-focused venture firm that Conrad had cofounded three years earlier.
Conrad — who as a mechanical engineering undergrad at MIT worked on self-driving cars, drones and satellites — was still excited about investing in hardware startups, having just closed a small new fund even while hardware was very unfashionable. One investment his team had made around that time was in Airware, a company that made subscription-based software for drones and attracted meaningful buzz and $118 million in venture funding before abruptly shutting down in 2018.
For his part, Conrad had already moved on, deciding in late 2017 that one of the many nascent teams camping out at Lemnos was on to a big idea relating to the future of construction. Conrad didn’t have a background in real estate per se, or even an earlier interest in the industry. But the “more I learned about it — not dissimilar to when I started Lemnos — it felt like there was a gap in the market, an opportunity that people were missing,” says Conrad from his home in San Francisco, where he has hunkered down throughout the COVID-19 crisis.
Enter Quartz, Conrad’s now 1.5-year-old, 14-person company, which quietly announced $7.75 million in Series A funding earlier this month, led by Baseline Ventures, with Felicis Ventures, Lemnos and Bloomberg Beta also participating.
What it’s selling to real estate developers, project managers and construction supervisors is really two things: safety and information. Using off-the-shelf hardware components that are reassembled in San Francisco and hardened (meaning secured to reduce vulnerabilities), the company incorporates its machine-learning software into a camera-based platform, then mounts the system onto cranes at construction sites. From there, the system streams 4K live feeds of what’s happening on the ground, while also making sense of the action.
Say dozens of concrete-pouring trucks are expected on a construction site. The cameras, with their persistent view, can convey through a dashboard whether and when the trucks arrived, and how many, says Conrad. The system can determine how many people are on a job site, and whether other deliveries have been made, even if not with a high degree of specificity. “We can’t say [to project managers] that 1,000 screws were delivered, but we can let them know whether the boxes they were expecting were delivered and where they were left,” he explains.
It’s an especially appealing proposition in the age of coronavirus, as the technology can convey what’s happening at a site that’s been shut down, or even how closely employees are gathered. Conrad says the technology also saves time by providing information to those who might not otherwise be able to access it. Think of the developer who is on the 50th floor of the skyscraper he or she is building, or the crane operator who is moving a two-ton object and has to rely on someone on the ground for directions but could enjoy far more visibility with the aid of a multi-camera setup.
Quartz, which today operates in California but is embarking on a nationwide rollout, was largely inspired by what Conrad was seeing in the world of self-driving. From sensors to self-perception systems, he knew the technologies would be even easier to deploy at construction sites, and he believed it could make them safer, too. Indeed, like cars, construction sites are astonishingly dangerous. According to the Occupational Safety and Health Administration, of the worker fatalities in private industry in 2018, more than 20% were in construction.
Conrad also saw an opportunity to take on established companies like Trimble, a 42-year-old, publicly traded, Sunnyvale, Calif.-based company that sells a portfolio of tools to the construction industry and charges top dollar for them, too. (Quartz is currently charging $2,000 per month per construction site for its series of cameras, their installation, a livestream and “lookback” data, though this may well rise as it adds additional features.)
It’s a big enough opportunity, in fact, that Quartz is not alone in chasing it. Last summer, for example, Versatile, an Israel-based startup with offices in San Francisco and New York City, raised $5.5 million in seed funding from Germany’s Robert Bosch Venture Capital and several other investors for a very similar platform, though it uses sensors mounted under the hook of a crane to provide information about what’s happening. Construction Dive, a media property dedicated to the industry, highlights many other similar and competing startups in the space, too.
Still, Quartz has Conrad, who isn’t just any founding CEO. Not only does he have that background in engineering, but having founded a venture firm and spent years as an investor may serve him well, too. He thinks a lot about the payback period on its hardware, for example.
Unlike a lot of founders, he also says he loves the fundraising process. “I get the highest quality feedback from some of the smartest people I know, which really helps focus your vision,” says Conrad.
“When you talk with great VCs, they ask great questions. For me, it’s the best free consulting you can get.”
The global spread of COVID-19 and resulting orders to shelter in place have hit retailers hard.
But while the present is largely bleak, preparing for the future has retailers adopting technologies faster than ever. Their resilience and innovation mean retail will look and feel different when the world reopens.
We gathered four views on the future of retail from the TechCrunch team:
SpaceX had just conducted yet another static fire test of the Raptor engine in its Starship SN4 prototype on Friday when the vehicle exploded on the test stand. This was the fourth static fire test of this engine on this prototype, so it’s unclear what went wrong this time that didn’t in the earlier attempts.
This was a test in the development of Starship, a new spacecraft that SpaceX has been developing in Boca Chica, Texas. Eventually, the company hopes to use it to replace its Falcon 9 and Falcon Heavy rockets, but Starship is still very early in its development phase, whereas those vehicles are flight-proven, multiple times over.
SpaceX had just secured FAA approval to fly its Starship prototype for short, suborbital test flights earlier this week. The goal was to fly this SN4 prototype for short distances following static fire testing, but that clearly won’t be possible now, as the vehicle appears to have been completely destroyed in the explosion following Friday’s test, as you can see below in the stream from NASASpaceflight.com.
The explosion occurred around 1:49 PM local time in Texas, roughly two minutes after it had completed its engine test fire. We’ve reached out to SpaceX to find out more about the cause of today’s incident, and whether anyone was potentially hurt in the explosion. SpaceX typically takes plenty of safety precautions when running these tests, including ensuring the area is well clear of any personnel or other individuals.
This isn’t the first time one of SpaceX’s Starship prototypes has met a catastrophic end; a couple of previous test vehicles succumbed to pressure testing while being put through their paces. This is why space companies test frequently and stress test vehicles during development – to ensure that the final operational vehicles are incredibly safe and reliable when they need to be.
SpaceX is already working on additional prototypes, including assembling SN5 nearby in Boca Chica, so it’s likely to resume its testing program quickly once it can clear the test stand and move in the newest prototype. This is a completely separate endeavor from SpaceX’s work on the Commercial Crew program, so that historic first test launch with astronauts on board should proceed either Saturday or Sunday as planned, depending on weather.
Box CEO Aaron Levie has been working to change the software world for 15 years, but the pandemic has accelerated the move to cloud services much faster than anyone imagined. As he pointed out yesterday in an Extra Crunch Live interview, who would have thought three months ago that businesses like yoga and cooking classes would have moved online — but here we are.
Levie says we are just beginning to see the range of what’s possible because circumstances are forcing us to move to the cloud much faster than most businesses probably would have without the pandemic acting as a change agent.
“Overall, what we’re going to see is that anything that can become digital probably will be in a much more accelerated way than we’ve ever seen before,” Levie said.
Fellow TechCrunch reporter Jon Shieber and I spent an hour chatting with Levie about how digital transformation is accelerating in general, how Box is coping with that internally and externally, his advice for founders in an economic crisis and what life might be like when we return to our offices.
Our interview was broadcast on YouTube and we have included the embed below.
Just a note that Extra Crunch Live is our new virtual speaker series for Extra Crunch members. Folks can ask their own questions live during the chat, with past and future guests like Alexis Ohanian, Garry Tan, GGV’s Hans Tung and Jeff Richards, Eventbrite’s Julia Hartz and many, many more. You can check out the schedule here. If you’d like to submit a question during a live chat, please join Extra Crunch.
The way that we think about digital transformation is that much of the world has a whole bunch of processes and ways of working — ways of communicating and ways of collaborating where if those business processes or that way we worked were able to be done in digital forms or in the cloud, you’d actually be more productive, more secure and you’d be able to serve your customers better. You’d be able to automate more business processes.
We think we’re [in] an environment that anything that can be digitized probably will be. Certainly as this pandemic has reinforced, we have way too many manual processes in businesses. We have way too slow ways of working together and collaborating. And we know that we’re going to move more and more of that to digital platforms.
In some cases, it’s simple, like moving to being able to do video conferences and being able to collaborate virtually. Some of it will become more advanced. How do I begin to automate things like client onboarding processes or doing research in a life sciences organization or delivering telemedicine digitally, but overall, what we’re going to see is that anything that can become digital probably will be in a much more accelerated way than we’ve ever seen before.
Audi has created a new business unit called Artemis to bring electric vehicles equipped with highly automated driving systems and other tech to market faster — the latest bid by the German automaker to become more agile and competitive.
The traditional automotive industry, where the design-to-start-of-production cycle might take five to seven years, has been grappling with how to bring new and innovative products to market more quickly to meet consumers’ fickle demands. The goal is a model more akin to how Tesla or a consumer electronics company operates.
The first project under Artemis will be to “develop a pioneering model for Audi quickly and unbureaucratically,” Audi AG CEO Markus Duesmann said in a statement Friday. The unit is aiming to design and produce what Audi describes as a “highly efficient electric car” as early as 2024.
Artemis will be led by Alex Hitzinger, who was in charge of Autonomous Intelligent Driving, the self-driving subsidiary Audi launched in 2017 to develop autonomous vehicle technology for the VW Group. AID was absorbed into the European headquarters of Argo AI after VW invested $2.6 billion in capital and assets into the self-driving startup.
Hitzinger, who takes the new position June 1, will report directly to Duesmann. Artemis will be based at the company’s IN-Campus tech hub in Ingolstadt, Germany.
Artemis is under the Audi banner. However, the aim is for this group’s work to benefit brands under its parent company VW Group. Hitzinger and the rest of his team will have access to resources and technologies within the entire Volkswagen Group. For instance, Car.Software, an independent business unit under the VW Group, will provide digital services to Artemis. The upshot: to create a blueprint that will make VW Group a more agile automaker able to bring new and technologically advanced vehicles to market more quickly.
VW Group plans to produce and sell 75 electric vehicle models across its brands by 2029, a group that includes VW passenger cars and Audi. The creation of Artemis hasn’t changed Audi’s plans to produce 20 new all-electric vehicles and 10 new plug-in hybrids by 2025.
“The obvious question was how we could implement additional high-tech benchmarks without jeopardizing the manageability of existing projects, and at the same time utilize new opportunities in the markets,” Duesmann said.
In this installment of our ongoing series on making the most of your at-home video setup, we’re going to focus on one of the most important, but least well-understood, parts of the equation: lighting. Getting it right doesn’t require much training, expertise or even equipment, yet it’s probably the number-one culprit for subpar video quality on most conference calls. It can mean the difference between looking like someone who knows what they’re talking about and someone who might not inspire much confidence in seminars, speaking gigs and remote broadcast appearances.
You can make a very big improvement in your lighting with just a little work, and without spending any money. The secret is all in being aware of your surroundings and optimizing your camera placement relative to any light sources that might be present. Consider not only any ceiling lights or lamps in your room, but also natural light sources like windows.
Ideally, you should position yourself so that the source of brightest light is positioned behind your camera (and above it, if possible). You should also make sure that there aren’t any strong competing light sources behind you that might blow out the image. If you have a large window and it’s daytime, face the window with your back to a wall, for instance. And if you have a movable light or an overhead lamp, either move it so it’s behind and above your computer facing you, or move yourself if possible to achieve the same effect with a fixed-position light fixture, like a ceiling pendant.
Ideally, any bright light source should be positioned behind and slightly above your camera for best results.
Even if the light seems aggressively bright to you, it should make for an even, clear image on your webcam. Even though most webcams have auto-balancing software features that attempt to produce the best results regardless of lighting, they can only do so much, and especially lower-end camera hardware like the webcam built into MacBooks will benefit greatly from some physical lighting position optimization.
This is an example of what not to do: Having a bright light source behind you will make your face hard to see, and the background blown out.
The best way to step up beyond the basics is to learn some of the fundamentals of good video lighting. Again, this doesn’t necessarily require any purchases – it could be as simple as taking what you already have and using it in creative ways.
Beyond just the above advice about putting your strongest light source behind your camera pointed towards your face, you can get a little more sophisticated by adopting the principles of two- and three-point lighting. You don’t need special lights to make this work – you just need to use what you have available and place them for optimal effect.
A very basic, but effective video lighting setup involves positioning not just one, but two lights pointed towards your face behind, or parallel with your camera. Instead of putting them directly in line with your face, however, for maximum effect you can place them to either side, and angle them in towards you.
A simple representation of how to position lights for a proper two-point video lighting setup.
Note that if you can, it’s best to make one of these two lights brighter than the other. This will provide a subtle bit of shadow and depth to the lighting on your face, resulting in a more pleasing and professional look. As mentioned, it doesn’t really matter what kind of light you use, but it’s best to try to make sure that both are the same temperature (for ordinary household bulbs, how ‘soft,’ ‘bright’ or ‘warm’ they are) and if your lights are less powerful, try to position them closer in.
Three-point lighting is similar to two-point lighting, but with a third light added somewhere behind you. This extra light is used in broadcast interview lighting setups to provide a slight halo effect on the subject, which further helps separate you from the background and adds a bit more depth and a more professional look. Ideally, you’d place this out of frame of your camera (you don’t want a big, bright light shining right into the lens) and off to the side, as indicated in the diagram below.
In a three-point lighting setup, you add a third light behind you to provide a bit more subject separation and pop.
If you’re looking to improve the flexibility of this kind of setup, a simple way to do that is by using light sources with Philips Hue bulbs. They can let you tune the temperature and brightness of your lights, together or individually, to get the most out of this kind of arrangement. Modern Hue bulbs might produce some weird flickering effects on your video depending on what framerate you’re using, but if you output your video at 30fps, that should address any problems there.
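Because the Hue bridge exposes a local REST API, that tuning can even be scripted rather than done by hand in the app. Here’s a minimal sketch; the bridge address and API username are placeholders you’d replace with values from your own bridge, and the helper name `hue_state` is our own invention:

```python
# Sketch of scripting a Hue bulb via the bridge's local REST API.
# BRIDGE_IP and API_KEY are placeholders: you'd press the bridge's
# link button and register your own username to get a real key.
import json
import urllib.request

BRIDGE_IP = "192.168.1.2"       # placeholder
API_KEY = "your-api-username"   # placeholder

def hue_state(brightness_pct: float, kelvin: int) -> dict:
    """Build a Hue light-state payload.

    Hue expects brightness as 1-254 ("bri") and color temperature in
    mireds ("ct", 1,000,000 / kelvin), clamped to the 153-500 range
    typical of white-ambiance bulbs.
    """
    bri = max(1, min(254, round(brightness_pct / 100 * 254)))
    ct = max(153, min(500, round(1_000_000 / kelvin)))
    return {"on": True, "bri": bri, "ct": ct}

def set_light(light_id: int, state: dict) -> None:
    """PUT the state to one light on the bridge (requires a real bridge)."""
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/lights/{light_id}/state"
    req = urllib.request.Request(
        url, data=json.dumps(state).encode(), method="PUT")
    urllib.request.urlopen(req)

# A warm-ish key light at 80% brightness:
print(hue_state(80, 4000))  # {'on': True, 'bri': 203, 'ct': 250}
```

Setting both lights in a two-point arrangement to the same `ct` value is an easy way to keep their temperatures matched, as recommended above.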
All lights can be used to improve your video lighting setup, but dedicated video lights will provide the best results. If you really plan on doing a bunch of video calls, virtual talks and streaming, you should consider investing in some purpose-built hardware to get even better results.
At the entry level, there are plenty of offerings on Amazon that work well and offer good value for money, including full lighting kits like this one from Neewer that offers everything you need for a two-point lighting setup in one package. These might seem intimidating if you’re new to lighting, but they’re extremely easy to set up, and really only require that you learn a bit about light temperature (as measured in kelvins) and how that affects the image output on your video capture device.
If you’re willing to invest a bit more money, you can get some better quality lights that include additional features including wifi connectivity and remote control. The best all-around video lights for home studio use that I’ve found are Elgato’s Key Lights. These come in two variants, Key Light and Key Light Air, which retail for $199.99 and $129.99 respectively. The Key Light is larger, offers brighter maximum output, and comes with a sturdier, heavy-duty clamp mount for attaching to tables and desks. The Key Light Air is smaller, more portable, puts out less light at max settings and comes with a tabletop stand with a weighted base.
Both versions of the Key Light offer light that you can tune from very warm white (2900K) to bright white (7000K) and connect to your wifi network for remote control, either from your computer or your mobile device. They easily work together with Elgato’s Stream Deck for hardware controls, too, and have highly adjustable brightness and plenty of mounting options – especially with extra accessories like the Multi-Mount extension kit.
With plenty of standard tripod mounts on each Key Light, high-quality durable construction and connected control features, these lights are the easiest to make work in whatever space you have available. The quality of the light they put out is also excellent, and they’re great for lighting pros and newbies alike since it’s very easy to tune them as needed to produce the effect you want.
Beyond subject lighting, you can look at different kinds of accent lighting to make your overall home studio more visually interesting or appealing. Again, there are a number of options here, but if you’re looking for something that also complements your home furnishings and won’t make your house look too much like a studio set, check out some of the more advanced versions of Hue’s connected lighting system.
The Hue Play light bar is a great accent light, for instance. You can pick up a two pack, which includes two of the full-color connected RGB lights. You’ll need a Hue hub for these to work, but you can also get a starter pack that includes two lights and the hub if you don’t have one yet. I like these because you can easily hide them behind cushions, chairs, or other furniture. They provide awesome uplight effects on light-colored walls, especially if you get rid of other ambient light (beyond your main video lights).
To really amplify the effect, consider pairing these up with one of the Philips Hue Signe floor or table lamps. The Signe series is a long LED light mounted to a weighted base that provides strong, even accent light in any color you choose. You can sync these with other Hue lights for a consistent look, or mix and match colors for different dynamic effects.
On video, this helps with subject/background separation, and just looks a lot more polished than a standard background, especially when paired with defocused effects when you’re using better quality cameras. As a side benefit, these lights can be synced to movie and video playback for when you’re consuming video, instead of producing it, for really cool home theater effects.
Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast, where we unpack the numbers behind the headlines. This week’s show took a break from regularly scheduled programming. Our co-host Alex Wilhelm, who usually leads us through the show, was on some much-deserved vacation, so Danny Crichton and Natasha Mascarenhas took the reins and invited Floodgate Capital’s Iris Choi to join in on the fun. It’s Choi’s fourth time being on the podcast, which officially makes her our most tenured guest yet (in case the accomplished investor needs another bullet point on her bio page).
This week’s docket features scrappiness, a seed round and a Startup Battlefield alumnus.
Here’s what we chewed through:
And that was the show! Thanks to our producer Chris Gates for helping us put this together, thanks to you all for listening in on this quirky episode, and thanks to Iris Choi for always bringing a fresh, candid perspective. Talk next week.
Facebook’s R&D group, NPE Team, is launching Venue, a new app for engaging fellow fans around live events. This is the third new app to launch just this week from Facebook’s internal team focused on experimenting with new concepts in social networking. With Venue, the company aims to offer a digital companion for live events, starting with this Sunday’s NASCAR race.
The new app appears to be a challenge to Twitter, which today serves as the de facto “second screen” for commenting on live events and engaging with fellow fans. On Twitter, fans often use hashtags to add their commentary to live events that can range from TV show premieres to sports competitions to major political happenings, like live-streamed congressional hearings or the “State of the Union” presidential address, for example.
Twitter’s in-house curation team also rounds up the highlights from major events (e.g.), which are quick summaries featuring notable tweets, video clips, photos, comments and more about an event or related news story.
While there are some similarities with Twitter, Facebook’s Venue takes a different approach to the second screen.
Instead of having everyone viewing the event constantly chiming in with their own thoughts and reactions, the commentators for a given event hosted in Venue will only include well-known personalities — like journalists, current or former athletes, or aspiring “fan-analysts.” The latter could include popular social media personalities, for example.
These commentators will provide their own takes on the event and pose interactive questions and polls for those watching. The event host may also open up short, constrained chats around specific moments during the event — but fan commentary isn’t the main focus of the app.
In addition, fans don’t stay glued to their phone during the entire event when using Venue. Instead, the app sends out a notification to users when there’s a new “moment” available in the app. These “moments” aren’t like Twitter’s summaries; they’re short, digital opportunities for fans to participate.
Future NASCAR races will also be hosted in Venue, with commentators including nascarcasm, FOX Sports NASCAR reporter Alan Cavanna, and NASCAR driver Landon Cassill.
“As NASCAR makes its return to action over the coming weeks, Venue will provide users with a unique and exciting way to connect with fellow race fans from around the globe – all from the safety and comfort of their own homes,” said Tim Clark, NASCAR SVP and Chief Digital Officer, in a statement. “NASCAR was built on innovation, and we couldn’t be more excited to help a great partner like Facebook’s New Product Experimentation team innovate around new platforms,” he added.
Facebook believes the new app will give viewers the chance to better engage with live events and fellow fans.
“Live broadcasts still offer the rare opportunity for millions of people to consume content simultaneously,” Facebook explained in its announcement. “Despite drawing large concurrent viewership, live broadcasts are still a mostly solo viewing experience,” it noted.
That’s a bit of a stretch. Fans certainly engage with one another when chatting about live events on Twitter. And when Twitter streams the video from a live event — something Venue doesn’t do, by the way — Twitter will offer a dedicated space where users can easily see the tweets from fellow viewers. Other live video platforms, including Facebook’s own Facebook Live and Instagram Live, also include chat experiences, as do YouTube Live and Twitch.
The real difference between Venue and Twitter is that it shifts the balance of power. On Twitter, everyone’s comments are given equal footing. In Venue, it’s the expert hosts leading and curating the conversation.
Facebook hasn’t announced what future events Venue may host beyond NASCAR but it sounds like it has plans to expand Venue further down the road as it refers to NASCAR as its “first” sports partner.
The Venue app is live today on iOS and Android.
When Cisco bought AppDynamics in 2017 for $3.7 billion just before its IPO, the company sent a clear signal it wanted to move beyond its pure network hardware roots into the software monitoring side of the equation. Yesterday afternoon, the company announced it intends to buy another monitoring company, this time snagging internet monitoring solution ThousandEyes.
Cisco would not comment on the price when asked by TechCrunch, but published reports from CNBC and others pegged the deal at around $1 billion. If that’s accurate, it means the company has paid around $4.7 billion for a pair of monitoring solutions companies.
Cisco’s Todd Nightingale, writing in a blog post announcing the deal, said that the kind of data that ThousandEyes provides around internet user experience is more important than ever as internet connections have come under tremendous pressure with huge numbers of employees working from home.
ThousandEyes keeps watch on those connections and should fit in well with other Cisco monitoring technologies. “With thousands of agents deployed throughout the internet, ThousandEyes’ platform has an unprecedented understanding of the internet and grows more intelligent with every deployment,” Nightingale wrote.
He added, “Cisco will incorporate ThousandEyes’ capabilities in our AppDynamics application intelligence portfolio to enhance visibility across the enterprise, internet and the cloud.”
As for ThousandEyes, co-founder and CEO Mohit Lad told a typical acquisition story. It was about growing faster inside the big corporation than it could on its own. “We decided to become part of Cisco because we saw the potential to do much more, much faster, and truly create a legacy for ThousandEyes,” Lad wrote.
It’s interesting to note that yesterday’s move and the company’s larger acquisition strategy over the last decade are part of a broader move to software and services as a complement to its core networking hardware business.
Just yesterday, Synergy Research released its network switch and router revenue report and it wasn’t great. As companies have hunkered down during the pandemic, they have been buying much less network hardware, dropping the Q1 numbers to a seven-year low. That translated into $1 billion less in overall revenue in this category, according to Synergy.
While Cisco owns the vast majority of the market, it obviously wants to keep moving into software services as a hedge against this shifting market. This deal simply builds on that approach.
ThousandEyes was founded in 2010 and raised over $110 million at a post-money valuation of $670 million as of February 2019, according to Pitchbook Data.
The race to automate vehicles on China’s roads is heating up. Didi, the Uber of China, announced this week an outsized investment of over $500 million in its freshly minted autonomous driving subsidiary. Leading the round — the single largest fundraising round in China’s autonomous driving sector — is its existing investor SoftBank, the Japanese telecom giant and startup benefactor that has also backed Uber.
As China’s largest ride-hailing provider with mountains of traffic data, Didi clearly has an upper hand in developing robotaxis, which could help address a driver shortage in the long term. But it was relatively late to the field. In 2018, Didi ranked eighth in kilometers of autonomous driving tests carried out in Beijing, far behind search giant Baidu, which accounted for over 90% of the total mileage that year.
It’s since played aggressive catchup. Last August, it spun off its then three-year-old autonomous driving unit into an independent company to focus on R&D, building partnerships along the value chain, and promoting the futuristic technology to the government. The team now has a staff of 200 across its China and U.S. offices.
As an industry observer told me, “robotaxis will become a reality only when you have the necessary operational skills, technology and government support all in place.”
Didi is most famous for its operational efficiency, as facilitating safe and pleasant rides between drivers and passengers is no small feat. The company’s leadership hails from Alibaba’s legendary business-to-business sales team, also known as the “Alibaba Iron Army” for its ability in on-the-ground operations.
The autonomous segment can also benefit from Didi’s all-encompassing reach in the mobility industry. For instance, it’s working to leverage the parent company’s smart charging networks, fleet maintenance service and insurance programs for autonomous fleets.
The fresh capital will enable Didi’s autonomous business to improve safety — an area that became a focal point of the company after two deadly accidents — and efficiency through conducting R&D and road tests. The financing will also allow it to deepen industry cooperation and accelerate the deployment of robotaxi services in China and abroad.
Over the years, Didi has turned to traditional carmakers for synergies in what it dubs the “D-Alliance,” which counts more than 31 partners. It has applied autonomous driving technology to vehicles from Lincoln, Nissan, Volvo and BYD, to name a few.
Didi has secured open-road testing licenses in three major cities in China as well as California. It said last August that it aimed to begin picking up ride-hailing passengers with autonomous cars in Shanghai in a few months’ time. It’s accumulated 300,000 kilometers of road tests in China and the U.S. as of last August.
Belvo, a Latin American fintech startup which launched just 12 months ago, has already snagged funding from two of the biggest names in North and South American venture capital.
The company is aiming to expand the reach of its service that connects mobile applications in Mexico and Colombia to a customer’s banking information and now has some deep-pocketed investors to support its efforts.
If the business model sounds familiar, that’s because it is. Belvo is borrowing a page from the Plaid playbook. It’s a strategy that ultimately netted the U.S. startup and its investors $5.3 billion when it was acquired by Visa in January of this year.
Belvo and its backers, who funneled $10 million into the year-old company, want to replicate Plaid’s success and open up an entire new range of financial services companies in Latin America.
The round was co-led by Silicon Valley’s Founders Fund and Argentina’s Kaszek. With the new arsenal of capital complemented by the Founders Fund’s network and Kaszek’s deep knowledge of the Latin American market, Belvo hopes to triple its current team of 25 that is spread across operations in Mexico City and Barcelona.
Since its initial establishment in May 2019, the company has raised a total of $13 million from Y Combinator (W20) along with some of the biggest players in Latin America’s startup scene. Those investors include David Velez, the co-founder of Brazil’s multi-billion dollar lending startup, Nubank; MAYA Capital and Venture Friends.
The company’s co-founders, Pablo Viguera and Oriol Tintoré, are no strangers to startups themselves. Viguera served as COO at European payments app Verse, and is a former general manager of one of the big European neo-banks, Revolut. Tintoré is a former NASA aerospace engineer, and while working toward his Stanford MBA, founded Capella Space, an information collection startup that went on to raise over $50 million.
The company said it aims to work with leading fintechs in Latin America, spanning verticals like neobanks, credit providers and the personal finance products Latin Americans use every day.
Belvo has built a developer-first API platform that can be used to access and interpret end-user financial data to build better, more efficient and more inclusive financial products in Latin America. Developers of popular neobank apps, credit providers and personal finance tools use Belvo’s API to connect bank accounts to their apps to unlock the power of open banking.
Viguera says the capital will be used to open a new office in Sao Paulo, and invest in new product and business development hires. Notably, Belvo is only one year old, having launched in January 2020, and is operating in Mexico and Colombia.
Co-founders Pablo Viguera and Oriol Tintoré are a former Revolut GM and former NASA aerospace engineer.
Belvo’s latest funding also marks another instance of a U.S.-Latin America investment team-up for a Latin American company.
Nuvocargo, a logistics startup that wants to bolster the Mexico – U.S. trade lane with its freight transportation technology, also recently raised a round co-led by Mexico’s ALLVP and Silicon Valley-based NFX. American investors may be starting to take note of the co-investment opportunity of putting capital into startups serving the Latin American market in partnership with successful new wave domestic funds like Mexico’s ALLVP and Argentina’s Kaszek.
SpaceX has received authorization from the Federal Aviation Administration (FAA) to fly suborbital missions with its Starship prototype spacecraft, paving the way for test flights at its Boca Chica, Texas site. SpaceX has been hard at work readying its latest Starship prototype for low-altitude, short duration controlled flight tests, and conducted another static engine fire test of the fourth iteration of its in-development spacecraft earlier today.
Officially, the FAA has granted SpaceX permission to conduct what it terms “reusable launch vehicle” missions, which essentially means that the Starship prototype is now cleared to take off from, and land back at, the launch site SpaceX operates in Boca Chica. The Elon Musk-led space company has already conducted similar tests, but previously used its ‘Starhopper’ early prototype, which was smaller than the planned production Starship, and much more rudimentary in design. It was basically used to prove out the capabilities of the Raptor engine that SpaceX will use to propel Starship, and only for a short hop test using one of those engines.
Since that flight last year, SpaceX has developed multiple iterations of a full-scale prototype of Starship, but thus far it hasn’t gotten back to the point of actively flying any of them. In fact, multiple iterations of the Starship prototype have failed during pressure testing – though SN4, the version currently being prepared for a test flight, has passed not only pressure tests, but also static test fires of its lone Raptor engine.
The plan now is to fly this one for a short ‘hop’ flight similar to the one conducted by Starhopper, with a maximum altitude of around 500 feet. Should that prove successful, the next version will be loaded with more Raptor engines, and attempt a high altitude test launch. SpaceX is quickly building newer versions of Starship in succession even as it proceeds with testing the completed prototypes, in order to hopefully shorten the total timespan of its development.
There’s something of a clock that SpaceX is working against: It was one of three companies that received a contract award from NASA to develop and build a human lander for the agency’s Artemis program to return to the Moon. NASA aims to make that return trip happen by 2024, and while the contract doesn’t necessarily require that each provider have a lander ready in that timeframe, it’s definitely a goal, if only for bragging rights among the three contract awardees.
Edtech is booming, but a short while ago, many companies in the category were struggling to break through as mainstream offerings. Now, it seems like everyone is clamoring to get into the next seed-stage startup that has the phrase “remote learning” on its About page.
And so begins the normal cycle that occurs when a sector gets overheated — boom, bust and a reckoning. While we’re still in the early days of edtech’s revitalization, it isn’t a gold mine all around the world. Today, in the spirit of balance and history, I’ll present three bearish takes I’ve heard on edtech’s future.
“I think the dividing line there will be there are companies that have been around, that are a little more entrenched, and have good financial runway and can probably survive this cycle,” he said. “They have credibility and will probably get picked [by schools].” The newer companies, he said, might get stuck with adoption because they are at a high degree of risk, and might be giving out free licenses beyond their financial runway right now.
On Thursday, President Trump signed an executive order taking aim at the legal shield that internet companies rely on to protect them from liability for user-created content. That law, known as Section 230 of the Communications Decency Act, is essential to large social platforms like Twitter, YouTube and Facebook, the kind of companies the president has long accused, without evidence, of engaging in anti-conservative censorship.
Trump was joined during the signing by Attorney General William Barr, who has previously expressed interest in stripping away or limiting the same legal protections. During the signing, Trump claimed that social media companies have “unchecked power” influenced by their “points of view.” Earlier in the day the president tweeted “This will be a Big Day for Social Media and FAIRNESS!”
On Tuesday, Twitter added warning labels to two tweets from the president that made false claims about vote-by-mail systems. The label, which did not hide the tweets or even actually outright call them false, pointed users toward a fact-checking page. The move enraged the president, who lashed out through tweets, and encouraged his followers to direct their ire toward Yoel Roth, Twitter’s head of site integrity.
The executive order is not yet published, but we examined a draft of it previously that is likely to bear a close resemblance to the finished copy. Civil rights groups and internet freedom watchdogs denounced the order Thursday, with the co-creator of the law in Trump’s crosshairs dismissing his actions as “plainly illegal.”
This story is developing