
YouTube’s new AR feature lets you virtually try on makeup while watching videos

By Sarah Perez

Earlier this summer, YouTube announced its plans for a new AR feature for virtual makeup try-on that works directly in the YouTube app. Today, the first official campaign to use the “Beauty Try-On” feature launched, allowing viewers to try on and shop lipsticks from MAC Cosmetics from YouTube creator Roxette Arisa’s makeup tutorial video.

Makeup tutorials are hugely popular on YouTube, so an integration where you can try on the suggested looks yourself makes a ton of sense. While a lipstick try-on feature isn’t exactly groundbreaking — plenty of social media apps offer a similar filter these days — it could lead to more complex AR makeup integrations further down the road.

The new AR feature only works when you’re watching the video on a mobile device with the YouTube app updated to the latest version.

Then, when watching the video, you’ll see a “try it on” button that launches the camera in a split-screen view. The video continues to play as you scroll through the various lipstick shades below, applying the different colors to see which one works best. Unlike some of the filters in social apps like Instagram and Snapchat, the colors align evenly with your lips instead of bleeding past the edges. The result is a very natural look.
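
YouTube hasn’t said exactly how its renderer works, but the general recipe behind lip-aligned tinting is well understood: detect the lip landmarks, mask that region and alpha-blend the shade over it. Below is a minimal Python sketch of that idea using MediaPipe’s face mesh and OpenCV; the shade color, blend strength and file names are placeholders, not anything from YouTube’s product.

```python
# Minimal sketch of lip-aligned tinting (not YouTube's implementation):
# find lip landmarks with MediaPipe, mask the region, alpha-blend a shade.
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh
# FACEMESH_LIPS is a set of (start, end) landmark-index pairs outlining the lips.
LIP_IDX = sorted({i for edge in mp_face_mesh.FACEMESH_LIPS for i in edge})

def apply_lipstick(bgr, shade_bgr=(80, 40, 170), strength=0.45):
    """Tint the lip region of a single detected face with the given shade."""
    h, w = bgr.shape[:2]
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as fm:
        result = fm.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return bgr  # no face found; return the frame unchanged
    lm = result.multi_face_landmarks[0].landmark
    pts = np.array([(int(lm[i].x * w), int(lm[i].y * h)) for i in LIP_IDX],
                   dtype=np.int32)
    # Fill only the lip region, so the color doesn't bleed past the edges.
    mask = np.zeros((h, w), np.uint8)
    cv2.fillPoly(mask, [cv2.convexHull(pts)], 255)
    tinted = cv2.addWeighted(bgr, 1 - strength,
                             np.full_like(bgr, shade_bgr), strength, 0)
    return np.where(mask[..., None] == 255, tinted, bgr)

frame = cv2.imread("face.jpg")  # placeholder input image
cv2.imwrite("tinted.jpg", apply_lipstick(frame))
```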

MAC Cosmetics will work with creators through YouTube’s branded content division, Famebit. The program connects brands with YouTube influencers who then market their products as paid sponsorships.

MAC is the first partner for this AR feature, but more will likely follow.

Prior to launch, YouTube tested AR Beauty Try-On with several beauty brands and found that 30% of viewers chose to activate the experience in the YouTube iOS app.

Those who did were fairly engaged, spending more than 80 seconds trying on virtual lipstick shades.

Google is not the first company to offer virtual makeup try-on experiences. Beyond social media apps, there are also AR beauty apps like YouCam Makeup, Sephora’s Virtual Artist, Ulta’s GLAMLab and others. L’Oréal also offers Live Try-On on its website, and had partnered with Facebook last year to bring virtual makeup to the site. In addition, Target’s online Beauty Studio offers virtual makeup across a number of brands and products.

YouTube’s implementation, however, is different because it’s not just a fun consumer product — it’s an AR-powered ad campaign.

Though some may scoff at the idea of virtual makeup, this market is massive. Millions watch makeup tutorials on YouTube every day, and the site has become the dominant source for referral traffic for beauty brands. In 2018, beauty-related content generated more than 169 billion views on the video platform.

You can watch the YouTube video here, or engage with the AR feature from the mobile YouTube app.

If you don’t see your face immediately after pressing the “try on” button, you probably need to update the YouTube app.

Inside Voyage’s plan to deliver a driverless future

By Kirsten Korosec

In two years, Voyage has gone from a tiny self-driving car upstart spun out of Udacity to a company able to operate on 200 miles of roads in retirement communities.

Now, Voyage is on the verge of introducing a new vehicle that is critical to its mission of launching a truly driverless ride-hailing service. (Human safety drivers not included.)

This internal milestone, which Voyage CEO Oliver Cameron hinted at in a recent Medium post, went largely unnoticed. Voyage, after all, is just a 55-person speck of a startup in an industry where the leading companies have amassed hundreds of engineers backed by war chests of $1 billion or more. Voyage has raised just $23.6 million from investors that include Khosla Ventures, CRV, Initialized Capital and the venture arm of Jaguar Land Rover.

Still, the die has yet to be cast in this burgeoning industry of autonomous vehicle technology. These are the middle-school years for autonomous vehicles — a time when size can be mistaken for maturity and change occurs in unpredictable bursts.

The upshot? It’s still unclear which companies will solve the technical and business puzzles of autonomous vehicles. There will be companies that successfully launch robotaxis and still fail to turn their service into a profitable commercial enterprise. And there will be operationally savvy companies that fail to develop and validate the technology to a point where human drivers can be removed.

Voyage wants to unlock both.

Crowded field

Nate Mitchell Exits Facebook, Taking Oculus Era With Him

By Peter Rubin
The executive, who announced his departure this week, was the last of the Oculus founders still at the company.

Google launches ‘Live View’ AR walking directions for Google Maps

By Darrell Etherington

Google is launching a beta of its augmented reality walking directions feature for Google Maps, with a broader rollout to all iOS and Android devices that have system-level support for AR. On iOS, that means ARKit-compatible devices; on Android, that means any smartphone that supports Google’s ARCore, so long as Street View is also available where you are.

Originally revealed earlier this year, Google Maps’ augmented reality feature has been available in an early alpha to both Google Pixel users and Google Maps Local Guides, but starting today it’s rolling out to everyone (this might take a couple of weeks, depending on when the update is pushed to you). We took a look at some of the features available in the early version in March, and today’s release should be pretty similar: tap on any location nearby in Maps, tap the ‘Directions’ button, navigate to ‘Walking,’ then tap ‘Live View,’ which should appear near the bottom of the screen.

Live View
The Live View feature isn’t designed with the idea that you’ll hold up your phone continually as you walk – instead, it provides quick, easy and super useful orientation by showing you arrows and big, readable street markers overlaid on the real scene in front of you. That makes it much, much easier to orient yourself in unfamiliar territory.

Google Maps is also getting a number of other upgrades, including a one-stop ‘Reservations’ tab in Maps for all your stored flights, hotel stays and more – plus it’s available offline. This, along with a redesigned Timeline that is arriving on Android devices only for now, should also roll out to everyone over the next few weeks.

The Note’s most impressive new feature is only available on the 10+

By Brian Heater

The new Note’s 3D scanning feature got what may well have been the loudest applause line of today’s big Samsung event. It’s an impressive feature for sure, but it’s the kind with little real-world value at the moment — and it’s only available on the pricier Note 10+. Understandable, at least, as a way to set the two models apart.

After all, Samsung needs some way to distinguish the more expensive unit. Aside from size and pricing, the 10+ also features a time-of-flight sensor that’s missing on the standard Note, bringing an extra level of depth sensing. For now, uses for the feature are pretty limited. Take AR Doodle — that’s available on both versions of the device.

3D scanning is an impressive differentiator, and the demo rightfully got some cheers as a Samsung employee walked a circle around a stuffed beaver toy named “Billy” (I dunno, man). The phone did a solid job capturing the image in 3D and pulling it out of its background. From there, a user can sync its movements to their own and animate it, AR/Animoji-style.
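
Samsung hasn’t detailed its scanning pipeline, but the reason a time-of-flight sensor makes this kind of background removal easier is simple: every pixel comes with a distance attached, so pulling a nearby object out of its background can be as crude as a depth threshold. A toy Python sketch with entirely synthetic values, purely for illustration:

```python
# Toy illustration of why per-pixel depth helps: separating a nearby
# object from its background reduces to thresholding the depth map.
# All values here are synthetic.
import numpy as np

h, w = 240, 320
depth_m = np.full((h, w), 2.5)                 # backdrop 2.5 m away
depth_m[80:160, 120:200] = 0.6                 # nearby object 0.6 m away
color = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)  # fake RGB frame

foreground = depth_m < 1.0                     # everything closer than 1 m
cutout = np.zeros_like(color)
cutout[foreground] = color[foreground]         # keep only the nearby object

print(f"foreground pixels: {foreground.sum()} of {h * w}")
```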

Again, a neat demo, but one with pretty limited real-world use for most of us. Though that’s pretty standard for these sorts of features. It’s as much about showing that the company is thinking about AR and offering the hardware to do it. Making it truly useful, however, will be in the hands of developers.

Google is shutting down its Trips app

By Jonathan Shieber

Google is shutting down its Trips app for mobile phones, but is incorporating much of the functionality from the service into its Maps app and Search features, according to a statement from the company.

Support for the Trips app ends today, but information like notes and saved places will be available in Search as long as a user is signed in to their Google account.

To find attractions, events and popular places in a given destination, users can search for “my trips” or go to the new-and-improved Travel page on Google.

Google announced changes to its Travel site in September 2018, which included many of the features that had been broken out into the Trips app. Now the focus will be on driving users back to Travel and on building more of that functionality into Google’s dominant mapping and navigation app.

Soon users will be able to add and edit notes from Google Trips in the Travel section on a browser and find saved attractions, flights and hotels for upcoming and past trips.

In Maps, searching a destination or finding specific iconic places, guide lists, events or restaurants can be done by swiping up on the “Explore” tab in the app.

Tapping the menu icon will now take users to places they’ve saved under the “Your Places” section. And soon the Maps app will also include upcoming reservations organized by trip; those reservations will be available offline, so a user won’t need a connection to view them.

Segment CEO Peter Reinhardt is coming to TechCrunch Sessions: Enterprise to discuss customer experience management

By Ron Miller

There are few topics as hot right now in the enterprise as customer experience management: the ability to collect detailed data about your customers, then deliver customized experiences based on what you have learned about them. To help understand the challenges companies face building this kind of experience, we are bringing Segment CEO Peter Reinhardt to TechCrunch Sessions: Enterprise on September 5 in San Francisco (p.s. early-bird sales end this Friday, August 9).

At the root of customer experience management is data — tons and tons of data. It may come from the customer journey through a website or app, basic information you know about the customer or the customer’s transaction history. It’s hundreds of signals, and collecting that data in order to build the experience is where Reinhardt’s company comes in.

Segment wants to provide the infrastructure to collect and understand all of that data. Once you have that in place, you can build data models and then develop applications that make use of the data to drive a better experience.
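
For a sense of what that collection layer looks like in practice, Segment’s analytics-python client boils each customer signal down to a small structured call. A minimal sketch follows; the write key, user ID and event fields are placeholders.

```python
# Sketch of the collection side using Segment's analytics-python library.
# The write key, user ID and event properties below are placeholders.
import analytics

analytics.write_key = "YOUR_WRITE_KEY"  # placeholder source key

# Tie traits to a user once...
analytics.identify("user_1234", {"email": "jane@example.com", "plan": "pro"})

# ...then record each signal along the customer journey.
analytics.track("user_1234", "Order Completed", {
    "revenue": 39.95,
    "currency": "USD",
})

analytics.flush()  # send any queued events before the script exits
```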

Reinhardt, and a panel that includes Qualtrics’ Julie Larson-Green and Adobe’s Amit Ahuja, will discuss with TechCrunch editors the difficulties companies face collecting all of that data to build a picture of the customer, then using it to deliver more meaningful experiences for them. See the full agenda here.

Segment was born in the proverbial dorm room at MIT when Reinhardt and his co-founders were students there. The company has raised more than $280 million since inception. Customers include Atlassian, Bonobos, Instacart, Levi’s and Intuit.

Early-bird tickets to see Peter and our lineup of enterprise influencers at TC Sessions: Enterprise are on sale for just $249 when you book here; but hurry, prices go up by $100 after this Friday!

Are you an early-stage startup in the enterprise-tech space? Book a demo table for $2,000 and get in front of TechCrunch editors and future customers/investors. Each demo table comes with four tickets to enjoy the show.

Want to Know the Real Future of AR/VR? Ask Their Devs

By Peter Rubin
A new survey of 900 active devs provides some surprising clarity into the technology's constraints.

Cloud-based design tool Figma launches plug-ins

By Jordan Crook

Figma, the startup looking to put design tools in the cloud, today announced new plug-ins for the platform that will help users clean up their workflows.

Figma cofounder and CEO Dylan Field says that plug-ins have been the most requested feature from users since the company’s launch. So, for the last year, the team has been working to build plug-in functionality on the back of Figma’s API (launched in March 2018) with three main priorities: stability, speed, and security.

The company has been testing plug-ins in beta for a while now, with 40 plug-ins approved at launch today.

Here are some of the standouts from launch today:

On the utility side, Rename It is a plug-in that allows designers to automatically rename and organize their layers as they work. Content Buddy, on the other hand, gives users the ability to add placeholder text (for things like phone numbers, names, etc.) that they can automatically find and replace later. Stark and ColorBlind are both accessibility plug-ins: Stark helps designers make sure their work meets the WCAG 2.0 contrast accessibility guidelines, while ColorBlind lets them see their designs through the lens of eight different types of color vision deficiency.
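
The contrast check behind tools like Stark is published math rather than a trade secret: WCAG 2.0 defines relative luminance and a contrast ratio, with 4.5:1 as the AA threshold for normal text. A minimal Python version of the spec formula (not Stark’s actual code):

```python
# WCAG 2.0 contrast math, per the published spec (not Stark's code).
def relative_luminance(rgb):
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # gray text on white
print(f"{ratio:.2f}:1 -> AA normal text: {'pass' if ratio >= 4.5 else 'fail'}")
```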

Other plug-ins allow for adding animation (Figmotion), changing themes (Themer), adding a Map to a design (Map Maker), and more.

Anyone can create plug-ins for public use on the Figma platform, but folks can also make private plug-ins for enterprise use. For example, a Microsoft employee built a plug-in that automatically changes the theme of a design to match various Microsoft products, such as Word, Outlook, etc.

Field says the company currently has no plans to monetize plug-ins, which will be free to all. Rather, the addition of plug-ins to the platform is a move based on customer happiness and satisfaction. Moreover, Figma’s home on the web allows the product to evolve more rapidly and in tune with customers. Rather than having to build each individual feature on its own, Figma can now open up the platform to its power users to build what they’d like into the web app.

Figma has raised a total of nearly $83 million since launch, according to Crunchbase. As of the company’s latest funding round ($40 million led by Sequoia six months ago), Figma was valued at $440 million post-funding.

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

By Neesha A. Tambe

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer product goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements listed below, and we’ll consider your startup:

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, followed immediately by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.

Apple is hosting augmented reality art walking tours in major cities

By Lucas Matney

Apple is combining two long-standing major efforts in a new push: making AR more consumer-friendly and portraying Apple Stores as civic centers where communities can come together.

The project, called [AR]T Walk, is a walking tour through various city centers around the globe that aims to bring artists’ digital works to life in physical spaces. The tours are taking place in Hong Kong, London, New York, Paris, San Francisco and Tokyo through mid-August.

Showcasing digital art in geo-specific locations isn’t a new concept. In 2017, Snapchat debuted a partnership with Jeff Koons in Central Park, though the company had some issues with ensuring the tech worked reliably.

People looking to take part in the AR walking tours can sign up on Apple’s site. The tours seem to last a couple of hours and involve a 1.5-mile walk. The artists behind the work are Nick Cave, Nathalie Djurberg and Hans Berg, Cao Fei, John Giorno, Carsten Höller and Pipilotti Rist.

Facebook is exploring brain control for AR wearables

By Brian Heater

Facebook this morning issued a lengthy breakdown of its recent research into BCI (brain-computer interface) as a means of controlling future augmented reality interfaces. The piece coincides with a Facebook-funded UCSF research paper published in Nature today, entitled “Real-time decoding of question-and-answer speech dialogue using human cortical activity.”

Elements of the research have fairly humane roots, as BCI technology could be used to assist people with conditions such as ALS (or Lou Gehrig’s disease), helping them communicate in ways their bodies are no longer naturally able to.

Accessibility could certainly continue to be an important use case for the technology, though Facebook appears to have its sights set on broader applications with the creation of AR wearables that eliminate the need for voice or typed commands.

“Today we’re sharing an update on our work to build a non-invasive wearable device that lets people type just by imagining what they want to say,” Facebook AR/VR VP Andrew “Boz” Bosworth said on Twitter. “Our progress shows real potential in how future inputs and interactions with AR glasses could one day look.”

“One day” appears to be a key aspect in all of this; the key caveats note that the technology is still on a relatively distant horizon. “It could take a decade,” Facebook writes in the post, “but we think we can close the gap.”

Among the strategies the company is exploring is the use of a pulse oximeter to monitor neurons’ consumption of oxygen as a way to detect brain activity. Again, that’s still a ways off.

“We don’t expect this system to solve the problem of input for AR anytime soon. It’s currently bulky, slow, and unreliable,” the company writes. “But the potential is significant, so we believe it’s worthwhile to keep improving this state-of-the-art technology over time. And while measuring oxygenation may never allow us to decode imagined sentences, being able to recognize even a handful of imagined commands, like ‘home,’ ‘select,’ and ‘delete,’ would provide entirely new ways of interacting with today’s VR systems — and tomorrow’s AR glasses.”
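
To make that “handful of imagined commands” framing concrete: at that scale, decoding is a small multi-class classification problem over slow hemodynamic time series. The sketch below trains a classifier on entirely synthetic data; it reflects the shape of the problem, not Facebook’s or UCSF’s actual models.

```python
# Deliberately toy sketch: classifying a handful of imagined commands
# ("home", "select", "delete") from oxygenation-like time series.
# Synthetic data throughout -- not Facebook's or UCSF's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
commands = ["home", "select", "delete"]
n_trials, n_channels, n_samples = 300, 8, 50

# Fake each command as a distinct slow hemodynamic bump plus noise.
X, y = [], []
for label in range(len(commands)):
    template = np.sin(np.linspace(0, np.pi, n_samples)) * (label + 1)
    for _ in range(n_trials):
        trial = template + rng.normal(0, 2.0, (n_channels, n_samples))
        X.append(trial.mean(axis=1))  # crude feature: per-channel mean
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"toy accuracy on {len(commands)} commands: {clf.score(X_te, y_te):.2f}")
```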

Obviously there are some red flags here for privacy advocates. There would be with any large tech company, but Facebook in particular presents lots of built-in privacy and security concerns. Remember the uproar when it launched a smart screen with a built-in camera and microphones? Now apply that to a platform that’s designed to tap directly into your brain and you’ve got a good idea of what we’re dealing with here.

Facebook addresses this concern in passing in the piece.

“We can’t anticipate or solve all of the ethical issues associated with this technology on our own,” Facebook Reality Labs Research Director Mark Chevillet says in the piece. “What we can do is recognize when the technology has advanced beyond what people know is possible, and make sure that information is delivered back to the community. Neuroethical design is one of our program’s key pillars — we want to be transparent about what we’re working on so that people can tell us their concerns about this technology.”

Facebook seems intent on getting out in front of those concerns a decade or so ahead of time. Users have seemingly been comfortable giving away a lot of private information, as long as it’s been part of a slow, steady trickle. By 2029, maybe the notion of letting the social network plug directly into our grey matter won’t seem so crazy after all.

The Knight Foundation launches $750,000 initiative for immersive technology for the arts

By Jonathan Shieber

The John S. and James L. Knight Foundation is looking for pitches on how to enhance and augment traditional creative arts through immersive technologies.

Through a partnership with Microsoft, the foundation is offering a share of a $750,000 pool of cash and the option of technical support from Microsoft, including mentoring in mixed-reality technologies and access to the company’s suite of mixed-reality tools.

“We’ve seen how immersive technologies can reach new audiences and engage existing audiences in new ways,” said Chris Barr, director for arts and technology innovation at Knight Foundation, in a statement. “But arts institutions need more knowledge to move beyond just experimenting with these technologies to becoming proficient in leveraging their full potential.”

Specifically, the foundation is looking for projects that will help engage new audiences; build new service models; expand access beyond the walls of arts institutions; and provide means to distribute immersive experiences to multiple locations, the foundation said in a statement.

“When done right, life-changing experiences can happen at the intersection of arts and technology,” said Victoria Rogers, Knight Foundation vice president for arts. “Our goal through this call is to help cultural institutions develop informed and refined practices for using new technologies, equipping them to better navigate and thrive in the digital age.”

Launched at the Gray Area Festival in San Francisco, the new initiative is part of the Foundation’s art and technology focus, which the organization said is designed to help arts institutions better meet changing audience expectations. Last year, the foundation invested $600,000 in twelve projects focused on using technology to help people engage with the arts.

“We’re incredibly excited to support this open call for ways in which technology can help art institutions engage new audiences,” says Mira Lane, Partner Director Ethics & Society at Microsoft. “We strongly believe that immersive technology can enhance the ability for richer experiences, deeper storytelling, and broader engagement.”

Here are the winners from the first $600,000 pool:

  • ArtsESP – Adrienne Arsht Center for the Performing Arts

Project lead: Nicole Keating | Miami | @ArshtCenter

Developing forecasting software that enables cultural institutions to make data-centered decisions in planning their seasons and events.

  • Exploring the Gallery Through Voice – Alley Interactive

Project lead: Tim Schwartz | New York | @alleyco, @cooperhewitt, @SinaBahram

Exploring how conversational interfaces, like Amazon Alexa, can provide remote audiences with access to an exhibition experience at Cooper Hewitt, Smithsonian Design Museum.

  • The Bass in VR – The Bass

Project lead: T.J. Black | Miami Beach | @TheBassMoA

Using 360-degree photography technology to capture and share the exhibit experience in an engaging, virtual way for remote audiences.

  • AR Enhanced Audio Tour – Crystal Bridges Museum of American Art

Project lead: Shane Richey | Bentonville, Arkansas | @crystalbridges

Developing mobile software to deliver immersive audio-only stories that museum visitors would experience when walking up to art for a closer look.

  • Smart Label Initiative – Eli and Edythe Broad Art Museum at Michigan State University

Project lead: Brian Kirschensteiner | East Lansing, Michigan | @msubroad

Creating a system of smart labels that combine ultra-thin touch displays and microcomputers to deliver interactive informational content about artwork to audiences.

  • Improving Arts Accessibility through Augmented Reality Technology – Institute on Disabilities at Temple University, in collaboration with People’s Light

Project lead: Lisa Sonnenborn | Philadelphia | @TempleUniv, @IODTempleU, @peopleslight

Making theater and performance art more accessible for the deaf, hard of hearing and non-English speaking communities by integrating augmented reality smart glasses with an open access smart captioning system to accompany live works.

  • ConcertCue – Massachusetts Institute of Technology (MIT); MIT Center for Art, Science & Technology

Project lead: Eran Egozy | Cambridge, Massachusetts | @EEgozy, @MIT, @ArtsatMIT, @MIT_SHASS

Developing a mobile app for classical music audiences that receives real-time program notes at precisely-timed moments of a live musical performance.

  • Civic Portal – Monument Lab

Project lead: Paul Farber and Ken Lum | Philadelphia | @monument_lab, @PennDesign, @SachsArtsPhilly, @paul_farber

Encouraging public input on new forms of historical monuments through a digital tool that allows users to identify locations, topics and create designs for potential public art and monuments in our cities.

  • Who’s Coming? – The Museum of Art and History at the McPherson Center

Project lead: Nina Simon | Santa Cruz, California | @santacruzmah, @OFBYFOR_ALL

Prototyping a tool in the form of a smartphone/tablet app for cultural institutions to capture visitor demographic data, increasing knowledge on who is and who is not participating in programs.

  • Feedback Loop – Newport Art Museum, in collaboration with Work-Shop Design Studio

Project lead: Norah Diedrich | Newport, Rhode Island | @NewportArtMuse

Enabling audiences to share immediate feedback and reflections on art by designing hardware and software to test recording and sharing of audience thoughts.

  • The Traveling Stanzas Listening Wall – Wick Poetry Center at Kent State University Foundation

Project lead: David Hassler | Kent, Ohio | @DavidWickPoetry, @WickPoetry, @KentState, @travelingstanza

Producing touchscreen installations in public locations that allow users to create and share poetry by reflecting on and responding to historical documents, oral histories, and multimedia stories about current events and community issues.

  • Wiki Art Depiction Explorer – Wikimedia District of Columbia, in collaboration with the Smithsonian Institution

Project lead: Andrew Lih | Washington, District of Columbia | @wikimedia, @fuzheado

Using crowdsourcing methods to improve Wikipedia descriptions of artworks in major collections so people can better access and understand art virtually.

Unity, now valued at $6B, raising up to $525M

By Lucas Matney

Unity’s private valuation is climbing, but it’s unclear whether the company’s leadership plans to take the 15-year-old gaming powerhouse to the public markets anytime soon.

The company announced today that it has received signed agreements from D1 Capital Partners, Canada Pension Plan Investment Board, Light Street Capital, Sequoia Capital and Silver Lake Partners to fund a $525 million tender offer that will allow Unity’s common shareholders — the majority of whom are early or current employees — to sell their shares in the company.

The tender offer gives employees “the opportunity for some liquidity,” Unity CFO Kim Jabal says. The total amount raised will depend on the enthusiasm of common shareholders to sell their stakes in Unity.

This event could potentially signify that the company is pushing back its timeline for an IPO, keeping employees who have been sitting on equity for several years happy as Unity labors on in the private markets. It’s worth noting that the company has previously raised hundreds of millions with the same intent of buying back employee shares.

It was reported earlier this year that Unity was targeting an IPO in the first half of 2020.

The company also confirmed that it wrapped up a $150M Series E funding round in May that doubled the company’s valuation to $6 billion. The announcement confirms the valuation we reported on in May though at a higher amount of capital raised.

SF-based Unity has more than 2,000 employees. The company builds developer tools that game studios use to create video games across a number of platforms. Unity claims that half of all games are created using its engine.

How top VCs view the new future of micromobility

By Arman Tabatabai

Earlier this month, TechCrunch held its annual Mobility Sessions event, where leading mobility-focused auto companies, startups, executives and thought leaders joined us to discuss all things autonomous vehicle technology, micromobility and electric vehicles.

Extra Crunch is offering members access to full transcripts of key panels and conversations from the event, including our panel on micromobility, where TechCrunch VC reporter Kate Clark was joined by investors Sarah Smith of Bain Capital Ventures, Michael Granoff of Maniv Mobility and Ted Serbinski of TechStars Detroit.

The panelists walk through their mobility investment theses and how they’ve changed over the last few years. The group also compares the business models of scooters, e-bikes, e-motorcycles, rideshare and more, while discussing Uber and Lyft’s role in tomorrow’s mobility ecosystem.

Sarah Smith: It was very clear last summer, that there was essentially a near-vertical demand curve developing with consumer adoption of scooters. E-bikes had been around, but scooters, for Lime just to give you perspective, had only hit the road in February. So by the time we were really looking at things, they only had really six months of data. But we could look at the traction and the adoption, and really just what this was doing for consumers.

At the time, consumers had learned through Uber and Lyft and others that you can just grab your cell phone and press a button, and that equates to transportation. And then we see through the sharing economy like Airbnb, people don’t necessarily expect to own every single asset that they use throughout the day. So there’s this confluence of a lot of different consumer trends that suggested that this wasn’t just a fad. This wasn’t something that was going to go away.

For access to the full transcription below and for the opportunity to read through additional event transcripts and recaps, become a member of Extra Crunch. Learn more and try it for free. 

Kate Clark: One of the first panels of the day, I think we should take a moment to define mobility. As VCs in this space, how do you define this always-evolving sector?

Michael Granoff: Well, the way I like to put it is that there have been four eras in mobility. The first was walking and we did that for thousands of years. Then we harnessed animal power for thousands of years.

And then there was a date — and I saw Ken Washington from Ford here — September 1st, 1908, which was when the Model T came out. And through the next 100 years, mobility is really defined as the personally owned and operated internal combustion engine car.

And what’s interesting is to go exactly 100 years later, September 2008, the financial crisis that affects the auto industry tremendously, but also a time where we had the first third-party apps, and you had Waze and you had Uber, and then you had Lime and Bird, and so forth. And really, I think what we’re in now is the age of digital mobility and I think that’s what defines what this day is about.

Ted Serbinski: Yeah, I think just to add to that, I think mobility is the movement of people and goods. But that last part of digital mobility, I really look at the intersection of the physical and digital worlds. And it’s really that intersection, which is enabling all these new ways to move around.

Clark: So Ted you run TechStars Detroit, but it was once known as TechStars Mobility. So why did you decide to drop the mobility?

Serbinski: So I’m at a mobility conference, and we no longer call ourselves mobility. So five years ago, when we launched the mobility program at TechStars, we were working very closely with Ford’s group, and at the time, five years ago, 2014, it started with the connected car and auto, and [people saying] “you should use the word mobility.”

And I was like “What does that mean?” And so when we launched TechStars Mobility, we got all this stuff but we were like “this isn’t what we’re looking for. What does this word mean?” And then Cruise gets acquired for a billion dollars. And everyone’s like “Mobility! This is the next big gold rush! Mobility, mobility, mobility!”

And because I invest in early-stage companies anywhere in the world, what started to happen last year is we’d be going after a company and they’d say, “well, we’re not interested in your program. We’re not mobility.” And I’d be scratching my head like, “No, you are mobility. This is where the future is going. You’re this digital way of moving around. And no, we’re artificial intelligence, we’re robotics.”

And as we started talking to more and more entrepreneurs, and hundreds of startups around the world, it became pretty clear that the word mobility is actually becoming too limiting, depending on your vantage point in the world.

And so this year, we actually dropped the word mobility and we just call it TechStars Detroit, and it’s really just the intersection of those physical and digital worlds. And so now we don’t have a word, but I think we found more mobility companies by dropping the word mobility.

Snap overtakes its IPO debut price

By Lucas Matney

Snap may no longer be the laughing stock of the New York Stock Exchange.

On the heels of renewed user growth and an earnings beat, Snap closed Wednesday with a share price at $17.60, up 18.68% for the day, giving the company its first close above its $17 IPO debut price since March of last year.

After a highly anticipated debut sent Snap’s share price climbing 44% on its first day of trading in March of 2017, the company’s stock soon plummeted as its first earnings report detailed slowed user growth that would continue for the next several periods. It was only a few months later that the company’s stock dipped below its $17 debut share price, a number it briefly rose above in early 2018 before sinking to an all-time low of $4.82 in late December.

The company’s earnings report yesterday may signify a turning point for the social media company, which has reportedly struggled to retain executive and engineering talent in recent months in the face of rapidly declining investor enthusiasm. In the Q2 report, Snap executives highlighted their strengths, pointing to a 13 million quarter-over-quarter increase in daily active users and a command of the 18-24 age bracket.

The key to maintaining that growth will be whether Snap can continue to deliver viral hits that bring users to the platform, like its augmented reality lenses that the company said contributed 7-9 million of the new users that came aboard last quarter.

Wednesday’s rally will give Snap more breathing room to pursue its original content strategy and its more ambitious efforts, like its game development and augmented reality platforms.

Inside the GM factory where Cruise’s autonomous Bolt is made

By Kirsten Korosec

TechCrunch took a field trip to GM’s Orion Assembly plant in Michigan to get an up-close view of how this factory has evolved since the 1980s.

What we found at the plant, which employs 1,100 people, is an unusual sight: a batch of Cruise autonomous vehicles produced on the same line, sandwiched in between the Bolt electric vehicle and an internal combustion engine compact sedan, the Chevrolet Sonic.

This inside look at how autonomous vehicles are built is just one of the topics coming up at TC Sessions: Mobility, which kicked off July 10 in San Jose. The inaugural event is digging in to the present and future of transportation, from the onslaught of scooters and electric mobility to autonomous vehicle tech and even flying cars.

Gender & compensation at VC-backed startups – Where are we today?

By Arman Tabatabai
Conrad Lee Contributor
Conrad Lee leads the VC Executive Compensation Survey and Option Impact at Shareworks Compensation, formerly Advanced-HR. He has spent the last decade developing and delivering the world’s largest pre-IPO compensation database for startups and their investors.

Compensation is the most intimate way a company can interact with its employees. For far too long, compensation managers and committees have operated behind closed doors, keeping pay guidelines shrouded in mystery. Developers with equal experience, performing at the same level, and huddled around the same table while trying to perfect autonomous ocean-to-table omakase experiences could receive drastically different pay packages. Those times are over.

Unemployment sits at historic lows, investors are pouring in money through massive rounds, and companies are stepping on, over, and around each other to attract the best talent. Silicon Valley sits at the epicenter of competitive labor markets, but we’ve heard the same story over and over: Big Company X is coming to town, and we can’t pay like them.

Heads up Seattle, Austin, Boulder, Boston, New York, Chicago, and most recently, Virginia! Recruiters must be aggressive, and it’s only a matter of time before an all-star employee mentions a 25% pay bump available at Company X. A team member hears the news and they’re suddenly browsing job boards as well. The dreaded churn switch is pushed a notch higher.

Today’s workforce is more connected than ever, having grown up with technology since the days of Tetris, Shufflepuck, and Oregon Trail. What was once taboo to share with anyone beyond your significant other is now being posted freely for the masses.

We won’t even start on the impacts of social media! Reviews and ratings began popping up for schools, restaurants, and workplaces. Glassdoor, Salary, and others provide deep insights into pay, work-life balance, and executive leadership approval ratings.

Then, things went a step further by detailing gender alongside compensation, most notably in the employee-led survey at Google in 2017. It was the shot heard round the world. How could a well-known organization, one that prides itself on diversity and that some think is the entire internet, find itself with gender pay disparity?

Over the past year, I’ve visited and revisited the gender pay gap with various talent partners at prominent venture firms. Kelly Kinnard of Battery Ventures and Bethany Crystal of USV authored pieces on the topic. One theme was common when discussing pay disparity – What if we had real data? What if we had corporate-sourced data that wasn’t subject to disgruntled employees or selective reporting? Well, we do.

Advanced-HR hosts the world’s largest compensation database specific to venture-backed companies. For the first time, we took a deep dive into compensation and gender at privately held, VC-backed companies and we’re sharing the findings.

The data set spans thousands of companies and more than 10,000 corporate-sourced employee data points. Nothing is inferred. Though we analyzed the entire data set, this article considers only US company data.
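
As an illustration of the kind of analysis such a data set enables, a gender pay gap summary reduces to a few grouped aggregations. The pandas sketch below uses hypothetical column names and made-up figures, not Advanced-HR’s actual schema or findings.

```python
# Hypothetical illustration only: the column names and salary figures
# are invented, not Advanced-HR's actual schema or findings.
import pandas as pd

df = pd.DataFrame({
    "role":   ["CEO", "CEO", "CTO", "CTO", "VP Eng", "VP Eng"],
    "stage":  ["Seed", "Seed", "Series A", "Series A", "Series A", "Series A"],
    "gender": ["F", "M", "F", "M", "F", "M"],
    "base_salary": [150_000, 165_000, 180_000, 190_000, 170_000, 172_000],
})

# Median base salary by role and gender, then the F-to-M ratio per role.
medians = df.pivot_table(index="role", columns="gender",
                         values="base_salary", aggfunc="median")
medians["F_to_M_ratio"] = (medians["F"] / medians["M"]).round(3)
print(medians)
```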

We do not display gender-based compensation data, but VC-backed companies can access our database of 2,800+ participants for free by completing a quick survey. Venture firms and all others interested in our data can contact us here.

About the data

Each year, we have the privilege of running the industry standard VC Executive Compensation Survey alongside 160+ top venture firms. All sponsoring firms and their participating portfolio companies receive the final report of detailed, aggregate, and anonymous compensation data. Before we review compensation, let’s visit gender representation at VC-backed companies.

The following slide is part of a more comprehensive 11-slide deck viewable at the end of this article, highlighting takeaways and key findings from our data.

[Slide: Seed Stage CEO Compensation]

Data is great. Now what?

It’s the hot topic and hiring managers are on red alert. Pay fairly or risk a PR nightmare. Here are some steps you may want to consider.

1. Founders need to hire. Owning the hiring process allows founders to gain valuable experience and exposure. By creating job descriptions, founders can be thoughtful and sensitive to the fact that connotations and tone can unintentionally isolate a specific segment of eligible talent.

The next frontier for sneakerheads is trying on shoes virtually

By Lucas Matney

Startups in the AR space see a bright future for bringing retail experiences into the home, but there haven’t been a ton of convincing examples of companies carrying out this vision effectively. Wannaby is using AR to help sneakerheads visualize their next purchase by letting them try on shoes virtually.

The company launched its own app, Wanna Kicks, earlier this year, letting users “try on” high-quality 3D models from Nike, Adidas, Allbirds and others. This morning the startup launched a partnership with Gucci to help consumers try on shoes inside the luxury brand’s dedicated app.

Users launch the feature in the Gucci app, bringing up a camera view that leverages Wannaby’s AR tech to position the shoes proportionally on their feet. Snap a photo and you get a direct link to Gucci’s website, where you can drop a few hundred dollars on the shoes you were just “wearing.”

Gucci is just the tip of the iceberg for the AR startup based in Minsk. The 22-person team is hoping that its tech can blur the lines between shopping at home and going to a retail store. The company closed a $2.1 million seed round last year from local firm Bulba Ventures.

As sneaker marketplaces like StockX and GOAT gain in popularity among streetwear enthusiasts, there’s a lot of potential to make advances in helping users visualize purchases. GOAT, which acts as a middleman between resellers verifying the authenticity of kicks, currently uses AR to help consumers see every angle of the shoe, but hasn’t integrated tech like Wannaby’s to take the try-on process further.

While there aren’t too many other AR platforms focused on foot tracking, there is the danger that a Google or Apple drives to the hoop and integrates that tech into its own platform, enabling apps to do it themselves. But Wannaby’s CEO says the product is about content and implementation as much as it is about foot tracking.

“[These big platforms are] definitely a threat to some extent, but there’s more than just the technology,” says CEO Sergey Arkhangelskiy.

Right now, Wannaby is largely working off of flat-rate licensing fees, but the startup is hoping that it can grab a slice of sales in the future if consumers spend time trying on a pair of AR shoes before they make a purchase.

The startup is obviously focused on a tight market, but Arkhangelskiy points to L’Oréal’s acquisition of AR makeup app Modiface as a sign that individual retailers are looking to build a closer relationship between smartphone users and their products using whatever tech is available to them. Arkhangelskiy also tells me he hopes this tech can strengthen in-store sales as well, allowing retailers to tease new products or let visitors “try on” shoes that are no longer in stock.

How to negotiate term sheets with strategic investors

By Arman Tabatabai
Alex Gold Contributor
Alex Gold is co-founder of Myia, an intelligent health platform employing novel biometric data to predict and prevent costly medical events. Previously, Alex was Venture Partner at BCG Digital Ventures and a co-founder of Traction, a marketplace of digital marketing experts.

Three years ago, I met with a founder who had raised a massive seed round at a valuation that was at least five times the market rate. I asked what firm made the investment.

She said it was not a traditional venture firm, but rather a strategic investor that not only had no ties to her space but also had no prior investment experience. The strategic investor, she said, was looking to “get their hands dirty” and “get in on the ground floor.”

Over the next two years, I kept a close eye on the founder. Although she had enough capital to pivot her business focus multiple times, she seemed torn between serving the needs of her strategic investor and those of her customer base.

Ultimately, when the business needed more capital to survive, the strategic investor didn’t agree with the founder’s focus and opted not to prop the company up, and the business had to shut down.

Sadly, this is not an uncommon story; examples abound of strategic investors influencing startup direction and management decisions to the point of harming the startup. Corporate strategics (not to be confused with dedicated, returns-focused funds such as Google Ventures, which operate like traditional venture investors) often care less about return on investment and more about a startup’s focus and sector specificity. If corporate imperatives change, the strategic may cease to be the right partner or could push the startup in a challenging direction.

And yet, fortunately, as the disruptive power of technology is being unleashed on nearly every major industry, strategic investors are now getting smarter, both in terms of how they invest and how they partner with entrepreneurs.

From making strong acquisitive plays (e.g., GM’s purchase of Cruise Automation or Toyota’s early-stage investment in Uber) to building dedicated funds to executing commercial agreements in tandem with capital investment, strategics are getting savvier and, by extension, becoming better partners. In some instances, they may be the best partner.

Negotiating a term sheet with a strategic investor necessitates a different set of considerations. Namely: the preference for a strategic to facilitate commercial milestones for the startup, a cautious approach to avoid the “over-valuation” trap, an acute focus on information rights, and the limitation of non-compete provisions.
