Between 2005 and 2018, the five biggest U.S. tech firms collectively spent more than half a billion dollars lobbying federal policymakers. But they shelled out even more in 2019: Facebook boosted its lobbying budget by 25%, while Amazon hiked its political outlay by 16%. Together, America’s biggest tech firms spent almost $64 million in a bid to shape federal policies.
Clearly, America’s tech giants feel they’re getting value for their money. But as CEO of Boundless, a 40-employee startup that doesn’t have millions of dollars to invest in political lobbying, I’m proposing another way. One of the things we care most about at Boundless is immigration. And while we’ve yet to convince Donald Trump and Stephen Miller that immigrants are a big part of what makes America great — hey, we’re working on it! — we’ve found that when you have a clear message and a clear mission, even a startup can make a big difference.
So how can scrappy tech companies make a splash in the current political climate? Here are some guiding principles we’ve learned.
You can’t make a difference if you don’t make some noise. A case in point: Boundless is spearheading the business community’s pushback against the U.S. Department of Homeland Security’s “public charge rule.” This sweeping immigration reform would preclude millions of people from obtaining U.S. visas and green cards — and therefore make it much harder for American businesses to hire global talent — based on a set of new, insurmountable standards. We’re doing that not by cutting checks to K Street but by using our own expertise, creativity and people skills — the very things that helped make our company a success in the first place.
By leveraging our unique strengths — including our own proprietary data — we’ve been able to put together a smart, business-focused amicus brief urging courts to strike down the public charge rule. And because we combine immigration-specific expertise with a real understanding of the issues that matter most to tech companies, we’ve been able to convince more than 100 other firms — such as Microsoft, Twitter, Warby Parker, Levi Strauss & Co. and Remitly — to cosign our amicus brief. Will that be enough to persuade the courts and steer federal policy in immigrants’ favor? The jury’s still out. But whatever happens, we take satisfaction in knowing that we’re doing everything we can on behalf of the entire immigrant community, not just our customers, in defense of a cause we’re passionate about.
Taking a stand is risky, but staying silent is a gamble, too: Consumers are increasingly socially conscious, and almost nine out of 10 said in one survey that they prefer to buy from brands that take active steps to support the causes they care about. It depends a bit on the issue, though. One survey found that trash-talking the president will win you brownie points from millennials but cost you support among Baby Boomers, for instance.
So pick your battles — but remember that media-savvy consumers can smell a phony a mile off. It’s important to choose causes you truly stand behind and then put your money where your mouth is. At Boundless, we do that by hiring a diverse workforce — not just immigrants, but also women (we’re over 60%), people of color (35%) and LGBTQ+ (15%) — and putting time and energy into helping them succeed. Figure out what authenticity looks like for your company, and make sure you’re living your values as well as just talking about them.
Tech giants might have a bigger megaphone, but there are a lot of startups in our country, and quantity has a quality all its own. In fact, the Small Business Administration reported in 2018 that there are 30.2 million small businesses in the United States, 414,000 of which are classified as “startups.” So instead of trying to shout louder, try forging connections with other smart, up-and-coming companies with unique voices and perspectives of their own.
At Boundless, we routinely reach out to the other startups that have received backing from our own investor groups — national networks such as Foundry Group, Trilogy Equity Partners, Pioneer Square Labs, Two Sigma Ventures and Flybridge Capital Partners — in the knowledge that these companies will share many of our values and be willing to listen to our ideas.
For startups, the venture capitalists, accelerators and incubators that helped you launch and grow can be an incredible resource: Leverage their expertise and Rolodexes to recruit a posse of like-minded startups and entrepreneurs that can serve as a force multiplier for your political activism. Instead of taking a stand as a single company, you could potentially bring dozens of companies — from a range of sectors, each carrying its own weight in its field — on board for your advocacy efforts.
Every company has a few key superpowers, and the same things that make you a commercial success can help to sway policymakers, too. Boundless uses data and design to make the immigration process more straightforward, and number-crunching and messaging skills come in handy when we’re doing advocacy work, too.
Our data-driven report breaking down naturalization trends and wait times by location made a big splash, for instance, and not just in top-ranked Cleveland. We presented our findings to Congress, and soon afterward some Texas lawmakers began demanding reductions in wait times for would-be citizens. We can’t prove our advocacy was the deciding factor, but it’s likely that our study helped nudge them in the right direction.
Whether you’re Bill Gates or a small-business owner, if you’re quoted in The New York Times, then your voice will reach the same people. Reporters love to feel like they’re including quotes from the “little guy,” so make yourself accessible and learn to give snappy, memorable quotes; you’ll soon find that they keep you on speed dial.
Our phones rang off the hook when Trump tried to push through a healthcare mandate by executive order, for instance, and our founders were quoted by top media outlets — from Reuters to Rolling Stone. It takes a while to build media relationships and establish yourself as a credible source, but it’s a great way to win national attention for your advocacy.
To make a difference, you’ll need allies in the corridors of power. Reach out to your senators and congresspeople, and get to know their staffers, too. Working in politics is often thankless, and many aides love to hear from new voices, especially ones who are willing to stake out controversial positions on big issues, sound the alarm on bad policies or help move the Overton window to enable better solutions.
We’ve often found that, prior to hearing from us, lawmakers simply hadn’t considered the special challenges smaller tech companies face in complying with various regulations, such as the lack of internal legal, human and financial resources. And those lawmakers come away from our meetings with a better understanding of the need to craft straightforward policies that won’t drown small businesses in red tape.
Political change doesn’t just happen in the Capital Beltway, so make a point of reaching out to your municipal and state-level leaders, too. In 2018, Boundless pitched to the Civic I/O Mayors Summit at SXSW because we knew that municipal leaders played a critical role in welcoming new Americans into our communities. Local policies and legislation can have a big impact on startups, and the support of local leaders remains a critical foundation for the kinds of change we want to see made to the U.S. immigration system.
It’s easy to make excuses or expect someone else to advocate on your behalf. But if there’s something you think the government could be doing better, then you have an obligation to use your company’s energy, talent and connections to push back and create momentum for reform. Sure, it would be nice to splash money around and hire a phalanx of lobbyists to shape public policy — but it’s perfectly possible to make a big difference without spending a dime.
But first, figure out what you stand for and what strengths and superpowers you can bring to bear on the problems you and your customers face. Above all, don’t be afraid to take a stand.
Eight years ago, Two Sigma Investments began an experiment in early stage investing.
The hedge fund, focused on data-driven quantitative investing, was well on its way to amassing the $60 billion in assets under management that it currently holds, but wanted more exposure to early stage technology companies, so it created a venture capital arm, Two Sigma Ventures.
Now, eight years and several investments later, the firm has raised $288 million in new funding from outside investors and is pushing to prove out its model, which leverages its parent company’s network of 1,700 data scientists, engineers and industry experts to support development inside its portfolio.
“The world is becoming awash in data and there’s continuing advances in the science of computing,” says Two Sigma Ventures co-founder Colin Beirne. “We thought eight years ago, when we started, that more and more companies of the future would be tapping into those trends.”
Beirne describes the firm’s investment thesis as centered on backing data-driven companies across any sector — from consumer technology companies like the social network monitoring application Bark to the high-end sports wearable maker Whoop.
Alongside Beirne, Two Sigma Ventures is led by three other partners, Dan Abelon, who co-founded SpeedDate and sold it to IAC; Lindsey Gray, who launched and led NYU’s Entrepreneurial Institute; and Villi Iltchev, a former general partner at August Capital.
Recent investments in the firm’s portfolio include Firedome, an endpoint security company; NewtonX, which provides a database of experts; Radar, a location-based data analysis company; and Terray Therapeutics, which uses machine learning for drug discovery.
Other companies in the firm’s portfolio are farther afield. These include the New York-based Amper Music, which uses machine learning to make new music; and Zymergen, which uses machine learning and big data to identify genetic variations useful in pharmaceutical and industrial manufacturing.
Currently, the firm’s portfolio is divided between enterprise investments, consumer-facing deals and healthcare-focused technologies. The biggest bucket is enterprise software companies, which Beirne estimates represent about 65% of the portfolio. He expects the firm to become more active in healthcare investments going forward.
“We really think that the intersection of data and biology is going to change how healthcare is delivered,” Beirne says. “That looks dramatically different a decade from now.”
To seed the market for investments, the firm’s partners have also backed the Allen Institute’s investment fund for artificial intelligence startups.
Together with Sequoia, KPCB, and Madrona, Two Sigma recently invested in a $10 million financing to seed companies that are working with AI. “This is a strategic investment from partner capital,” says Beirne.
Typically startups can expect Two Sigma to invest between $5 million and $10 million with its initial commitment. The firm will commit up to roughly $15 million in its portfolio companies over time.
Octi has created a new social network that uses augmented reality to connect the act of seeing your friends in real life with viewing digital content like their favorite YouTube videos and Spotify songs.
When I wrote about the startup in 2018, it was building AR technology that could do a better job of recognizing the human body and movement. Last week, co-founder and CEO Justin Fuisz (pictured above) told me that this was “a really cool feature,” but that Octi’s investors pushed him “to do more, go deeper.”
Speaking of those investors, the startup says it’s now raised $12 million in funding (including a previously announced seed round of $7.5 million) from Live Nation, Anheuser-Busch InBev, Peter Diamandis’ Bold Capital Partners, Human Ventures, I2BF, Tom Conrad, Scott Belsky and Josh Kushner.
Last week, Fuisz demonstrated what he now sees as Octi’s “mic drop” moment — opening the new app and pointing his iPhone camera at a colleague. The app quickly recognized her, allowing Fuisz to send her a friend request. And once the request was accepted, Fuisz could look at her through the camera again, where she was surrounded by a floating “belt” of virtual items that she’d created with videos, songs and photos.
Octi also allows you to include fun effects and stickers. Your friends can change your profile too, making you wear a funny hat or giving you a rousing theme song for the day.
To create a facial recognition experience that’s fast and simple, Fuisz said that Octi’s powered by a “neural network on the edge,” allowing the app to process images on the device (rather than uploading them to the cloud) in a privacy-friendly way.
He said the company has taken other steps to optimize the process, like prioritizing friends-of-friends rather than searching through the faces of everyone in the network, resulting in an app that can identify a friend in as little as 20 milliseconds.
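Octi hasn’t published its matching pipeline, but the friends-first search Fuisz describes can be sketched as an embedding comparison over progressively larger candidate pools: check the small set of approved friends before falling back to friends-of-friends. Everything below — the function names, the two-pool split, the 0.8 similarity threshold — is illustrative, not Octi’s actual code:

```python
import numpy as np

def best_match(query, candidates):
    """Return the (name, similarity) pair of the closest face embedding, or None."""
    best = None
    for name, emb in candidates.items():
        # Cosine similarity between face-embedding vectors
        sim = float(np.dot(query, emb) /
                    (np.linalg.norm(query) * np.linalg.norm(emb)))
        if best is None or sim > best[1]:
            best = (name, sim)
    return best

def identify(query, friends, friends_of_friends, threshold=0.8):
    """Search the small friends pool first; only widen the search if needed."""
    for pool in (friends, friends_of_friends):
        match = best_match(query, pool)
        if match and match[1] >= threshold:
            return match[0]
    return None
```

Because the friends pool is tiny compared with the whole network, most lookups finish after the first pass — the same intuition behind the sub-20-millisecond figure the company cites.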
While Octi allows you to view friends’ profiles remotely, it’s worth emphasizing that the core experience is meant to be in-person. In fact, the company provided a statement from analyst Rich Greenfield in which he described the app as “an impressive technology that gives teens a compelling reason to be present and communicate with their phones, while gathered with their closest friends.”
I wondered whether a new social dynamic also provides new opportunities for harassment and bullying, but Fuisz noted that for now, Octi profiles and belts are only visible to friends that you’ve approved. So if one of your connections is doing something you don’t like, “You just say goodbye. That’s it. That’s a simple way of dealing with it.”
Fuisz added that this initial version provides a foundation for many more experiences: “There’s endless opportunity for games and other fun things you can do.”
Ultimately, he’s hoping to turn this into a WeChat-style platform for outside developers to build social tools and content. And since Octi works on iPhone 7 and above (with plans for an Android version later this year), it can potentially reach an enormous audience out of the gate, rather than facing the scale issues of a more specialized AR or VR hardware platform.
Cruise unveiled Tuesday evening a “production ready” driverless vehicle called Origin, the product of a multi-year collaboration with parent company GM and investor Honda that is designed for a ride-sharing service.
The shuttle-like vehicle — branded with Cruise’s trademark orange and black colors — has no steering wheel or pedals and is designed to travel at highway speeds. The interior is roomy, with seats that face each other. Each seat is designed around an individual rider, with personal USB ports, CTO and co-founder Kyle Vogt noted during the presentation. Digital displays are located above, presumably to give travelers information about their rides.
The doors don’t hinge outward, Vogt added. Instead, he said, “they slide open, so bikers are safer.”
CEO Dan Ammann stressed that the vehicle is not a concept, but instead is a production vehicle that the company intends to use for a ride-sharing service. However, don’t expect the Origin to be on public roads anytime soon. The driverless vehicle doesn’t meet U.S. federal regulations known as FMVSS, which specify design, construction, performance and durability requirements for motor vehicles.
For now, the Origin will be used in private, closed environments such as GM facilities in Michigan or even Honda’s campus outside the U.S., Ammann said in an interview after the presentation.
Ammann also emphasized the low cost of the vehicle, which he added is designed to operate for 1 million miles.
“We’ve been just as obsessed with making the Origin experience as inexpensive as possible,” Ammann said while on stage. “Because if we’re really serious about improving life, and our cities, we need huge numbers of people to use the Cruise origin. And that won’t happen unless we deliver on a very simple proposition, a better experience at a lower price than what you pay to get around today.”
Companies keep trying to make glassholes happen. Understandably. After the smartphone and the wrist, the face is the next logical battlefield for computational space, if decades of science fiction movies have taught us anything. But we’ve seen Google Glass, the Snapchat Spectacles, the Magic Leap and whatever that thing was that Samsung just semi-announced.
Contact lenses have been mentioned in that same conversation for some time as well, but technical limitations have set the bar much higher than for a standard pair of heads-up display spectacles. California-based Mojo Vision has been working on the breakthrough for a number of years now and has a lofty sum to show for it: $108 million in funding, including a $58 million Series B closed back in March.
The technology is compelling, certainly. I met with the team in a hotel suite at CES last week and got a walkthrough of some of the things they’ve been working on. While executives say they’ve been dogfooding the technology for some time now, the demos were still pretty far removed from an eventual in-eye augmented reality contact lens.
Rather, two separate demos essentially involved holding a lens or device close to my eye in order to get a feel for what an eventual product would look like. The reason was twofold. First, most of the work is still being done off-device at the moment, while Mojo works to perfect a system that can exist within the confines of a contact lens while only needing to be charged once in a 25-hour cycle. Second, there’s the obvious impracticality of trying on a pair of contacts during a brief CES meeting.
I will say that I was impressed by the heads-up display capabilities. In the most basic demo, monochrome text resembling a digital clock is overlaid on images. Here, miles per hour are shown over videos of people running. The illusion has some depth to it, with the numbers appearing as though they’re a foot or so out.
In another demo, I donned an HTC Vive. Here I’m shown live video of the room around me (XR, if you will), with notifications. The system tracks eye movements, so you can focus on a tab to expand it for more information. It’s a far more graphical interface than the other example, with full calendars, weather forecasts and the like. You can easily envision how the addition of a broader color palette could give rise to some fairly complex AR imagery.
Mojo is using CES to announce its intentions to start life as a medical device. In fact, the FDA awarded the startup a Breakthrough Device Designation, meaning the technology will get special review priority from the government body. That’s coupled with a partnership with Bay Area-based Vista Center for the Blind and Visually Impaired.
That ought to give a good idea of Mojo’s go-to-market plans. Before selling itself as an AR-for-everyone device, the company is smartly going after visual impairments. It should occupy similar space as many of the “hearable” companies that have applied for medical device status to offer hearing-enhancing Bluetooth earbuds. Working with the FDA should go a ways toward helping fast-track the technology into optometrist offices.
The idea is to have them prescribed in a similar fashion to contact lenses, while added features like night vision will both aid people with visual impairments and potentially make those with better vision essentially bionic. You’ll go to a doctor, get a prescription, and the contact lenses will be mailed to you; they should last about as long as a normal pair. They’ll obviously be pricier, and questions about how much insurance companies will shell out still remain.
In their final state, the devices should last a full day, recharging in a cleaning case in a manner not dissimilar from AirPods (though those, sadly, don’t also clean the product). The lenses will have a small radio on-board to communicate with a device that hangs around the neck and relays information to and from a smartphone. I asked whether the plan was to eventually phase out the neck device, to which the company answered that, no, the plan was to phase out the smartphone. Fair play.
I also asked whether the company was working with a neurologist in addition to its existing medical staff. After 10 years of smartphone ubiquity, it seems we’re only starting to get clear data on how those devices impact things like sleep and mental well-being. I have to imagine that’s only going to be exacerbated by the feeling of having those notifications more or less beaming directly into your brain.
Did I mention that you can still see the display when your eyes are closed? Talk about a (pardon my French) mind fuck. There will surely be ways to silence or disable these things, but as someone who regularly falls asleep with his smartphone in hand, I admit that I’m pretty weak when it comes to the issue of digital dependence. This feels like injecting that stuff directly into my veins, and I’m here for it, until I’m not.
We still have time. Mojo’s still working on the final product. And then it will need medical approval. Hopefully that’s enough time to more concretely answer some of these burning questions, but given how things like screen time have played out, I have some doubts on that front.
Stay tuned on all of the above. We’ll be following this one closely.
More than half a year after Google said Android phones could be used as a security key, the feature is coming to iPhones.
Google said it’ll bring the feature to iPhones in an effort to give at-risk users, like journalists and politicians, access to additional account and security safeguards, effectively removing the need to use a physical security key like a YubiKey or a Google Titan key.
Two-factor authentication remains one of the best ways to protect online accounts. Typically it works by getting a code or a notification sent to your phone. By acting as an additional layer of security, it makes it far more difficult for even the most sophisticated and resource-backed attackers to break in. Hardware keys are even stronger. Google’s own data shows that security keys are the gold standard for two-factor authentication, surpassing other options, like a text message sent to your phone.
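The one-time codes described here are typically generated with the time-based one-time password (TOTP) algorithm from RFC 6238, which authenticator apps and servers both implement against a shared secret. A minimal sketch of the standard algorithm:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password using HMAC-SHA1.

    Both the phone and the server derive the same code from the shared
    secret and the current 30-second time window, so the code can be
    checked without ever transmitting the secret.
    """
    counter = int(at // step)                      # current time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Phishing is what separates these codes from hardware keys: a TOTP code can be typed into a fake login page, while a FIDO security key cryptographically binds the login to the genuine site’s origin.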
Google said it was bringing the technology to iPhones as part of an effort to give at-risk groups greater access to tools that secure their accounts, particularly in the run-up to the 2020 presidential election, where foreign interference remains a concern.
Fourteen startups presented on-stage today at Disrupt Berlin, giving live demos and rapid-fire presentations on their origin stories and business models, then answering questions from our expert judges.
Now, with the help of those judges, we’ve narrowed the group down to five startups working on everything from productivity to air pollution.
These finalists will be presenting again tomorrow (at 2pm Berlin time, viewable on the TechCrunch website or in-person at Disrupt) in front of a new set of judges. The winner will receive $50,000 and custody of the storied Disrupt Cup.
Here are the finalists:
Gmelius is building a workspace platform that lives inside Gmail, allowing teams to get more bespoke tools without adding yet another piece of software to their repertoire. It slots into the Gmail workspace, adding a host of features like shared inboxes, a help desk, an account-management solution and automation tools.
Hawa Dawa combines data sources like satellites and dedicated air monitoring stations to build a granular heat map of air pollutants, selling this map to cities and companies as a subscription API. While the company notes it’s hardware agnostic, it does build its own IoT sensors for companies and cities that might not have existing air quality sensors in place.
Inovat makes it much easier for travelers to get reimbursed for the value-added tax, through an app that employs optical character recognition and machine learning to interpret receipts, determine how much VAT you should be owed for your purchase, and prepare the requisite forms for submission online or to a customs officer.
Scaled Robotics has designed a robot that can produce 3D progress maps of construction sites in minutes, precise enough to detect that a beam is just a centimeter or two off. Supervisors can then use the software to check things like which pieces are in place on which floor, whether they have been placed within the required tolerances, or if there are safety issues like too much detritus on the ground in work areas.
Stable offers a solution as simple as car insurance, designed to protect farmers around the world from pricing volatility. Through the startup, food buyers ranging from the owner of a small smoothie shop to a company the size of Coca-Cola can insure thousands of agricultural commodities, as well as packaging and energy products.
Few spaces are hotter right now than enterprise SaaS, and tools that streamline communications in the workplace have been some of the more popular investments for savvy VCs.
The workplace email inbox is a nightmare, but there are plenty of wrong ways to kill it. Gmelius is building a workspace platform that lives inside Gmail, allowing teams to get more bespoke tools without adding yet another piece of software to their repertoire.
Gmelius slots into the Gmail workspace, adding a host of features like shared inboxes, a help desk, an account-management solution and automation tools. This integration fits into the existing interface while hiding heavier tools including Trello-like Kanban boards inside Gmail. It can understandably feel like an awful lot of functionality to fit into one app.
Gmelius has been around for a long time as a Chrome plugin, but the company has made significant strides this year to revamp its product as a premium tool for startups looking to supercharge their email and collaborate internally inside Gmail. The company still offers a free tier with limited functionality, but now has several professional tiers scaling up to a $49-per-user enterprise plan.
The startup is rebelling against an overwhelming trend of service unbundling which has led startups to pay for more SaaS products than ever. Gmelius is taking on competitors like Front and others that have built out their own distinct platforms.
Gmelius recently graduated from Y Combinator and is in the midst of raising new funding to scale its team and take on more customers. The company is competing in TechCrunch Disrupt Berlin’s Startup Battlefield.
Smart glasses maker North announced today that it will be ending production of its first-generation Focals glasses, which it brought to market for consumers last year. The company says it will instead shift its focus to Focals 2.0, a next-generation version of the product, which it says will ship starting in 2020.
Focals are North’s first product since rebranding the company from Thalmic Labs and pivoting from building smart gesture control hardware to glasses with a built-in heads-up display and smartphone connectivity. CEO and founder Stephen Lake told me in a prior interview that the company realized in developing its Myo gesture control armband that it was actually more pressing to develop the next major shift in computing platform before tackling interface devices for said platforms, hence the switch.
Focals 2.0 will be “at a completely different level” and “the most advanced smart glasses ever made,” Lake said in a press release announcing the new generation device. In terms of how exactly it’ll improve on the original, North isn’t sharing much, but it has said that it has made the 2.0 version both lighter and “sleeker,” and that it’ll offer a much sharper, “10x improved” built-in display.
North began selling its Focals smart glasses via physical showrooms that it opened first in Brooklyn and Toronto. These, in addition to a number of pop-up showroom locations that toured across North America, provided in-person try-ons and fittings for the smart glasses, which must be tailor-fit for individual users in order to properly display content from their supported applications. More recently, North also added a Showroom app for iOS devices that offers custom sizing powered by the front-facing depth-sensing camera on recent iPhones.
To date, North hasn’t revealed any sales figures for its initial Focals device, but the company did reduce the price of the glasses from $999 to just under $600 (without prescription) relatively soon after launch. Their cost, combined with the requirement for an in-person fitting prior to purchase (until the introduction of the Showroom app) and certain gaps in the product feature set, like an inability to support iMessage on iOS natively, all point to initial sales being relatively low volume, however.
To North’s credit, Focals are the first smart glasses hardware to manage a relatively inconspicuous look. Despite somewhat thicker-than-average arms on either side, where the battery, projection and computing components are housed, Focals resemble the thick acrylic plastic frames popularized by Warby Parker and other standard glasses makers.
With version 2.0, it sounds like Focals will be making even more progress in developing a design that hews closely to standard glasses. One of the issues also cited by some users with the first-generation product was a relatively fuzzy image produced by the built-in projector, which required specific calibration to remain in focus, and it sounds like they’re addressing that, too.
The Focals successor will still have an uphill battle when it comes to achieving mass appeal, however. It’s unlikely that cost will be significantly reduced, though any progress it can make on that front will definitely help. And it still either requires non-glasses wearers to opt for regularly donning specs, or for standard glasses wearers to be within the acceptable prescription range supported by the hardware, and to be willing to spend a bit more for connected glasses features.
The company says the reason it’s ending Focals 1.0 production is to focus on the 2.0 rollout, but it’s not a great sign that there will be a pause between the two generations in terms of availability. Through its two iterations as a company, Thalmic Labs and now North have not had the best track record in terms of developing hardware that has been a success with potential customers. Focals 2.0, whenever they do arrive, will have a lot to prove in terms of iterating enough to drive significant demand.
Google is launching a number of updates to its G Suite tools today that, among other things, bring to Google Docs an AI grammar checker, smarter spellchecking and, soon, spelling autocorrect. The company is also launching the ability for G Suite users to use the Google Assistant to read out a calendar schedule and, maybe even more importantly, create, cancel and reschedule events. Google is also adding new accessibility features to the Assistant for use during meetings.
In addition, Google yesterday announced that Smart Compose would soon come to G Suite, too.
It’s maybe no surprise that Google is adding its new grammar suggestions to Docs. This feature, after all, is something Google has talked about quite a bit in recent months, after it first introduced it back in 2018. Unlike other grammar tools, Google’s version utilizes a neural network approach to detect potential grammar issues in your text, which is quite similar to the techniques used for building effective machine translation models.
Google is also bringing to Docs the same autocorrect feature it already uses in Gmail. This tool uses Google Search to learn new words over time, but in addition, Google today announced it’s also introducing a new system for offering users more customized spelling suggestions based on your documents. That includes commonly used acronyms that may be part of a company’s internal lingo.
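Google hasn’t detailed how its customized suggestions work, but the general idea of matching typos against a custom lexicon — say, a company’s internal acronyms — can be sketched with plain edit distance. The function names, word list and distance threshold below are illustrative assumptions, not Google’s implementation:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(word: str, lexicon: list, max_dist: int = 2):
    """Suggest the closest lexicon entry within max_dist edits, if any."""
    best = min(lexicon, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= max_dist else None
```

Seeding the lexicon from words that actually appear in a user’s own documents is what turns a generic spellchecker into the personalized one described above.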
The new Assistant calendaring features are now in beta and pretty self-explanatory. Indeed, it’s somewhat surprising that it took Google so long to offer these abilities. In addition to managing their calendar by voice, the company is now also making it possible to use the Assistant to send messages to meeting attendees and even join calls (“Hey Google, join my next meeting”). Surely that’s a handy feature when you’re once again running late to work and need to join an 8am call from your car while driving down the highway.
At its Cloud Next event in London, Google Cloud CEO Thomas Kurian today announced that Smart Compose, the AI-powered feature that currently tries to complete phrases and sentences for you in Gmail, is also coming to G Suite’s Google Docs soon. For now, though, your G Suite admin has to sign up for the beta to try it and it’s only available in English.
Google says in total, Smart Compose in Gmail already saves people from typing about 2 billion characters per week. At least in my own experience, it also works surprisingly well and has only gotten better since launch (as one would expect from a product that learns from the individual and collective behavior of its users). It remains to be seen how well this same technique works for longer texts, but even longer documents are often quite formulaic, so the algorithm should still work quite well there, too.
Google first announced Smart Compose in May 2018, as part of its I/O developer conference. It builds upon the same machine learning technology Google developed for its Smart Reply feature. The company then rolled out Smart Compose to all G Suite and private Gmail users, starting in July 2018, and later added support for mobile, too.
The future of the connected home, connected car and connected everything will have a lot of imaging technology at the center of it: sensors to track the movement of people and things will be a critical way for AI brains to figure out what to do next. Now, with a large swing toward more data protection — in part a reaction to the realization of just how much information about us is being picked up — we’re starting to see some interesting solutions emerge that can still provide that imaging piece, but with privacy in mind. Today one of the startups building such solutions is announcing a big round of funding.
Vayyar, an Israeli startup that builds radar-imaging chips and sensors, as well as the software that reads and interprets the resulting images used in automotive and IoT applications (among others) — providing accurate information about what is going on in a specific place, even if it’s behind a wall or another object, but without the kind of granular detail that could personally identify someone — has picked up a Series D of $109 million, money it will use to expand the range of applications it can cover and to double down on key markets like the U.S. and China.
From what I understand from sources close to the deal, this round is being done at a valuation “north” of $600 million, a big step up from the company’s Series C valuation of around $245 million post-money in 2017, according to PitchBook data.
Part of the reason for the big multiple is because the company already has a number of big customers on its books, including the giant automotive supplier Valeo and what Raviv Melamed — Vayyar’s co-founder, CEO and chairman — described to me as a “major Silicon Valley company” working on using Vayyar’s technology in its smart home business.
I was going to write that the funding is notable for its large size, but these days it feels like $100 million is the new $50 million (which is to say, it’s becoming a lot more common to raise that much). What’s perhaps more distinctive is the source of the funding. This Series D is being led by Koch Disruptive Technologies, with Regal Four (an investment partner of KDT) and existing investors including Battery Ventures, Bessemer Ventures, ICV, ITI, WRVI Capital and Claltech also participating. The total raised by the startup now stands at $188 million.
Koch Disruptive Technologies is the venture arm of Koch Industries, the multinational giant that works across a range of oil and gas, manufacturing, ranching and other industries. Koch Industries was founded by Fred Koch, father of the Koch brothers, Charles and the late David, the longtime owners best known in popular culture for their strong support of right-wing politicians, businesses and causes. That image hasn’t really helped the VC arm, and its partners seem to be trying to downplay it these days.
Putting that to one side, the Vayyar investment has a lot of potential applicability across the many industries where Koch has holdings.
“Advancements in imaging sensors are vital as technology continues to disrupt all aspects of society,” said Chase Koch, president of Koch Disruptive Technologies. “We see incredible potential in combining Vayyar’s innovative technology and principled leadership team with Koch’s global reach and capabilities to create breakthroughs in a wide range of industries.”
Over the last several years, the startup has indeed been working on a number of ways of applying its technology on behalf of clients, who in turn develop ways of productising it. There are a few exceptions where Vayyar itself has built ways of using its tech in direct consumer products: for example, the Walabot, a hand-held sensor that works in conjunction with a normal smartphone to give people the ability to, say, detect if a pipe is leaking behind a wall.
But for the most part, Melamed says that its focus has been on building technology for others to use. These have, for example, included in-car imaging sensors that can detect who is sitting where and what is going on inside the vehicle, useful for making sure that no one is dangerously blocking an airbag, accidentally setting off a seatbelt alarm when not actually in a seat, or (in the case of a sleeping baby) being left behind by accident, with potentially dire outcomes.
Regulations will make having better safety detection a must over time, Melamed noted, and more immediately, “By 2022-2023 it will be a must for all new cars to be able to detect [the presence of babies getting left behind when you leave the car] if you want to have a five-star safety rating.”
The focus (no pun intended) on privacy is a somewhat secondary side-effect of what Vayyar has built to date, but that same swing of regulation is likely to continue to push it to the fore, and make it as much of a feature as the imaging detection itself.
Vayyar is not the only company using radar to build up better imaging intelligence: Entropix, Photonic Vision, Noitom Technology, Aquifi and ADI are among the many companies also building imaging solutions based on the same kind of technology. Melamed says that this is where the company’s software and algorithms help it to stand out.
“I think when you look at what we have developed for example for cars, these guys are far behind and it will take some time to close the gap,” he added.
Home furnishing retailer Wayfair was among the first to adopt AR technology as a means of helping people better visualize furniture and accessories in their own home, ahead of purchase. Today, the company is expanding its feature set to allow for more visualization capabilities — even when you’re shopping out in the real world and aren’t able to take a photo of your room to use AR.
Instead, shoppers will be able to leverage a new feature called “Interactive Photo,” which lets shoppers take a photo of their room and then visualize multiple products within it, even when they’re not home in their own space. The feature analyzes the spatial information of the room in the photo to deliver an AR-like experience within your own image.
Alongside this addition, Wayfair has updated its app to put its camera tools more at the forefront of the app experience. Similar to how you can click a camera icon next to the Amazon app’s search bar, you can now do the same in Wayfair. You can also then toggle between the various camera-based features with swipe gestures, in order to move between Wayfair’s visual search and its “View in Room” AR feature, which is also where you’ll find the new “Interactive Photo.”
The retailer has also launched its room design tool, Room Planner 3D, on the mobile shopping app. This allows shoppers to create an interactive 3D room that they can view from any angle, while testing out different layouts, styles, room dimensions and more.
The update follows Amazon’s launch earlier this year of its own visual shopping experience called Showroom, which let online and mobile shoppers try out furniture and other décor in a customizable virtual room where they pick the wall color, flooring, carpet and more.
“With the latest updates to the Wayfair app, we continue to push the limits of what’s possible by iterating on advanced AR and machine learning capabilities, and introducing new and innovative spatial awareness techniques to an e-commerce experience, bridging the gap between imagination and reality,” said Matt Zisow, Vice President of Product Management, Experience Design and Analytics at Wayfair, in a statement.
The new feature set comes shortly after Wayfair’s third-quarter earnings, where the company reported a wider-than-expected loss of $2.33 per share, adjusted, versus the expected $2.10 per share. Revenue was up 35% year-over-year to $2.3 billion, above the anticipated $2.27 billion, however. The company attributed the miss to “short-term headwinds from tariffs.”
However, as the holiday shopping season heats up, Wayfair still needs to unveil enticing features that will encourage consumers to redownload its app and shop — especially given that smartphones alone drove $2.1 billion in U.S. online sales last Black Friday.
The new Wayfair app is out now on iOS and Android, but the new features — Interactive Photo, Integrated Camera and Room Planner 3D — are only on iOS.
Microsoft’s big experiment in real-world augmented reality gaming, Minecraft Earth, is live now for players in North America, the U.K., and a number of other areas. The pocket-size AR game lets you collect blocks and critters wherever you go, undertake little adventures with friends, and of course build sweet castles.
I played an early version of Minecraft Earth earlier this year, and found it entertaining and the AR aspect surprisingly seamless. The gameplay many were first introduced to in Pokemon GO is adapted here in a more creative and collaborative way.
You still walk around your neighborhood, rendered in this case charmingly like a Minecraft world, and tap little icons that pop up around your character. These may be blocks you can use to build, animals you can collect, or events like combat encounters that you can do alone or with friends for rewards.
Ultimately all this is in service of building stuff, which you do on “build plates” of various sizes. These you place in AR mode on a flat surface, which they lock onto, letting you move around freely to edit and play with them. This sounded like it could be fussy or buggy when I first heard about it, but actually doing it was smooth and easy. It’s easy to “zoom in” to edit a structure by just moving your phone closer, and multiple people can play with the same blocks and plate at the same time.
Once you’ve put together something fun, you can take it to an outdoors location and have it represented at essentially “real” size, so you can walk around the interior of your castle or dungeon. Of course you can’t climb steps, since they’re not real, but the other aspects work as expected: you can manipulate doors and other items, breed cave chickens, and generally enjoy yourself.
The game is definitely more open-ended than the collection-focused Pokemon GO and Harry Potter: Wizards Unite. Whether that proves to be to its benefit or detriment when it comes to appeal and lasting power remains to be seen — but one thing is for sure: People love Minecraft and they’re going to want to at least try this out.
And now they can, if they’re in one of the following countries — with others coming throughout the holiday season.
No one’s going to pay $380 for decent point-of-view video glasses and some trippy filters. But that’s kind of the point of Snapchat Spectacles 3. They’re merely a stepping stone towards true augmented reality eyewear — a public hardware beta for the Snap Lab R&D team that Apple and Facebook aren’t getting as they tinker in their bunkers.
Still, I hoped for something that could at least unlock the talents of forward-thinking video creators. Yet the unpredictable and uncontrollable AR effects sadly fail to make use of Spectacles‘ fashionable form factor in premium steel. The clunky software requires clips be uploaded for processing and then re-downloaded before you can apply the 10 starter effects like a rainbow landscape filter or a shimmering fantasy falcon. This all makes producing AR content a chore instead of a joy for something only briefly novel.
Spectacles 3 go on sale today for $380 in black ‘Carbon’ or rose gold-ish ‘Mineral’ color schemes on Spectacles.com, Neiman Marcus, and Ron Robinson in the UK, shipping in a week. Announced in August, they’re sunglasses with two stereoscopic lenses capable of capturing depth to produce “3D” photos, and videos you can add AR effects to on your phone. You also get a very nice folds-flat leather USB-C charging case that powers up the glasses four times, and a Google Cardboard-style VR viewer.
“Spectacles 3 is a limited production run. We’re not looking for massive sales here. We’re targeting people who are excited about these effects — creative storytellers” says Matt Hanover of the Snap Lab team.
Gen 1 featured a “toy-like design to get people used to wearing tech on their face”, while Gen 2 and 2.1 had a more subdued look abandoning the coral color schemes to push mainstream adoption. What Gen 3 can’t do is force a $40 million write-off due to poor sales, as V1 did after only shipping 220,000 with hundreds of thousands more gathering dust somewhere. Snap is already losing $227 million per quarter as it scrambles to break even.
So it seems with Spectacles 3 that Snap is gathering data and biding its time, trying to avoid burning too much cash until it can build a version that overlays effects atop a user’s view through the glasses. “We’re still able to get feedback from the customer and inform the future of Spectacles. That’s really the goal for us” Hanover confirms.
His CEO Evan Spiegel agrees, telling me on stage at TechCrunch Disrupt that it would be 10 years until we see augmented reality glasses worthy of mainstream consumer adoption. That’s a long time for an unprofitable company to keep pace on R&D spending with cash-rich rivals like Facebook and Apple.
Spectacles could be worth the steep $380 if you’re a videographer for a living, perhaps making futuristic social media clips like Karen X Cheng, a creator Snap hired to demonstrate the device’s potential. They’re cool enough looking that you could wear them around Cannes or Coachella without people getting weirded out like they did with Google Glass. And as Snap’s Lens Studio lets anyone build 3D effects for Spectacles 3, perhaps we’ll see some filters and imaginary characters that are more than just a momentary gimmick.
But for those simply seeking first-person camera glasses, I’d still recommend the Spectacles 2 at $150 to $200 depending on style. The 3D features don’t justify paying double the price for the Spec 3s. And at least the second-gen Specs are waterproof, which makes them great for ocean play with fun underwater shooting when you don’t want to risk losing or fizzling your phone.
“We’re testing the price point and the premium aesthetic to see if it lands with this demographic” Hanover says. But Snap’s Director Of Communications Liz Markman notes that “there isn’t this perfect one-to-one overlap with the core Snap users.”
The result is that Spectacles 3 are really more for Snap’s benefit than yours.
The Spectacles 3 software is disappointing, but you’ll be delighted when you open the box. Slick black packaging reveals sturdily built metal sunglasses with a luxury matte finish. As they magnetically dislodge from their charging case, you definitely get the sense you’re trying on something futuristic.
The style concurs, with a flat black bar at the top connecting the round lenses with a camera on both corners. Unlike the old Specs that sat right on your nose, feeling heavy at times, Spectacles 3 offers adjustable acetate non-slip nose tips to keep the weight off. All the tech is built discreetly into the hinges and temples without appearing too chunky.
Tap the button on either arm, and an LED light swooshes in a circle to let people know you’re recording a 10-second video, with additional presses extending that up to 60 seconds. Tap and hold to shoot a photo, and the light blinks. There’s no obnoxious yellow rubber ring to shout “these are cameras,” and the diffused LEDs are more subtle than Gen 2’s dots while remaining an obvious enough signal to passersby that they’re not creepy.
One charge powers up to 70 captures and transfers to your phone over a combination of Bluetooth and a built-in WiFi connection. The 4 gigabytes of storage hold up to 100 videos or 1,200 photos, and Spectacles 3 even have GPS and GLONASS on board. A 4-mic array picks up audio from others and your own voice, though it’s susceptible to wind shear if you’re biking or running.
The magnetically-sealing folding leather USB-C charging case is my favorite part. I wish I could get an even flatter one without a battery in it for my other sunglasses. It’s a huge improvement on the unpocketable bulky triangular case of the previous versions.
So far so good, right? But then it comes time to actually see and augment what you shot.
Pairing and syncing is much easier than Gen 1. The glasses forge a Bluetooth connection, then spawn a WiFi network for getting media to your phone faster.
If you just want to share to Snapchat, you’re in luck. Spectacles content posts to Stories or messages in its cool circular format that lets viewers tilt their phones around while always staying full-screen to reveal the edges of your shots. Otherwise, you still have to go through the chore of exporting from Snapchat to your camera roll. Spectacles can at least now export in a variety of croppings for better sharing on Instagram and elsewhere.
What’s new are the 3D photos and videos. The stereoscopic cameras in the corners of Spectacles use the space between them to create parallax and sense the depth of a scene. After tapping the 3D button on a photo, you can wiggle the perspective of the image around to almost see around the edges of what you’re looking at. Spectacles will automatically pan back and forth for you, and export 3D photos as short Boomerang-esque six-second videos.
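The parallax principle behind this kind of stereo depth capture is the classic pinhole relation: depth is inversely proportional to how far a point shifts between the left and right images. A minimal sketch follows; the focal length, lens baseline and disparity values are made-up illustrative numbers, not Spectacles specifications.

```python
# Toy illustration of stereo parallax depth sensing.
# Z = f * B / d, where f is focal length (pixels), B is the
# baseline between the two lenses (meters), and d is the
# disparity (pixels) a point shifts between the two images.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Estimate distance to a point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("zero disparity means the point is at infinity")
    return focal_px * baseline_m / disparity_px

# A nearby object shifts a lot between the lenses; a far one barely moves.
near = depth_from_disparity(focal_px=1000, baseline_m=0.14, disparity_px=70)
far = depth_from_disparity(focal_px=1000, baseline_m=0.14, disparity_px=7)
print(f"near: {near:.1f} m, far: {far:.1f} m")
```

This inverse relationship is also why the effect degrades quickly with distance: past a few meters, the disparity between two camera corners on a pair of glasses shrinks toward zero, and small matching errors swamp the signal.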
Unfortunately, I found that I didn’t get much sense of depth from most of the 3D photos I shot or saw. It takes a very particular kind of three-dimensional object, from the right angle and in the right light, to get much sense of movement from the wiggle. Snapchat’s algorithms also had a bad habit of mistakenly assigning bits of the foreground and background to each other, breaking the illusion. Occasionally you’ll have someone’s ear or their hair left behind and disembodied by the 3D effect. Don’t expect these to flood social media or convince prospective Spectacles buyers.
The biggest problem comes with the delay when playing with 3D videos. Snapchat has to do the depth processing on its servers, so you have to wait for your video to upload, get scanned, and be re-downloaded before you can apply the 3D AR filters. On WiFi that takes about 35 seconds per 10 second video, which is quite a bore. It takes forever over a mobile connection. That means you often won’t be able to apply the filters and see how they look until you’re home and unable to reshoot anything.
The filter set is also limited and haphazard. You can add a 3D bird or balloons around you, wander through golden snow or neon arcs, overlay flower projections or rainbow waves, or sprinkle on sparkles and light-bending blobs. While the bird is cute, and the rainbows and flowers are remarkably psychedelic, none of them are more than briefly entertaining.
The 3D objects often glitch through real pieces of scenery, and you can’t control them at all. No summoning the bird mid-video. My favorite trick, learned from Karen X Cheng, was to export unedited and filtered versions of a video and splice them together on my computer, as seen in my demo video above. You can’t actually do that from within Snapchat.
Snap will have to build much cooler, more interactive filters if it’s going to compel creators to fork over $380 for Spectacles 3. It could hope to rely on its Lens Studio community platform, but so few developers or users will have the glasses that most will stick to making and using filters for phones.
Spectacles 3 are too expensive to be a toy, but don’t excel at being much more. Videography influencers might enjoy having a pair in their tool bag. But it’s hard to imagine anyone not sharing content professionally paying for the gadget.
“We’re now pushing to elevate the technology and the design to master depth technically” Hanover tells me. “Holing ourselves up within an R&D center for years and years? That’s not our approach. It’s important to meet the customer where they are today and continue to iterate and get that feedback.”
But this iteration doesn’t feel like Snap meeting the customer where they are. That raises the question of whether Snapchat is really getting enough data out of the whole endeavor to justify publicly releasing Spectacles at all. The company will have to hope that what it learns from these short-term tests justifies the short-term thinking, when it’s trying to win the long-term war in augmented reality eyewear.
Only a few years ago, Microsoft hoped that Cortana could become a viable competitor to the Google Assistant, Alexa and Siri. Over time, as Cortana failed to make a dent in the marketplace (do you even remember that Cortana is built into your Windows 10 machine?), the company’s ambitions shrank a bit. Today, Microsoft wants Cortana to be your personal productivity assistant — and to be fair, given the overall Microsoft ecosystem, Cortana may be better suited to that than to telling you about the weather.
At its Ignite conference, Microsoft today announced a number of new features that help Cortana to become even more useful in your day-to-day work, all of which fit into the company’s overall vision of AI as a tool that is helpful and augments human intelligence.
The first of these is a new feature in Outlook for iOS that uses Microsoft text-to-speech features to read your emails to you (using both a male and female voice). Cortana can also now help you schedule meetings and coordinate participants, something the company first demoed at previous conferences.
Starting next month, Cortana will also be able to send you a daily email that summarizes all of your meetings, presents you with relevant documents and reminders to “follow up on commitments you’ve made in email.” This last part, especially, should be interesting as it seems to go beyond the basic (and annoying) nudges to reply to emails in Google’s Gmail.
Adobe’s ambitions around augmented reality (AR) are no secret — there’s plenty of potential for building the right design tools for AR developers, after all. At last year’s Max event, the company first demoed its Aero AR authoring app and today, it is launching it to the public as a free app on iOS and as a private beta on the desktop.
The general idea behind Aero is to allow designers to build AR experiences without coding. It offers a visual user interface and provides step-by-step directions for building AR scenes, which can incorporate existing assets from your Creative Cloud library, both in 2D and 3D. Once finished, publishing the scene to the Aero app only takes a few clicks.
“For marketing and branding, to retail and commerce, travel and leisure, learning and art, AR is expanding across all industries,” Adobe says. “However, today, the creation of high-quality AR content is expensive, time-consuming and complex. Our vision is to transform this process and enable all designers to explore what’s possible with 3D and AR.”
You can use the mobile app to create some basic experiences, but for the full slew of AR design tools you’ll need the desktop app, which is now in private beta and slated for release next year. This desktop app will allow you to build more interactive and custom experiences, Adobe says.
In the demo I’ve seen, Aero is indeed extremely easy to use. You can easily bring in Photoshop files with layers, for example, as a background and then space those layers out as needed to create a more 3D-like scene. Interacting with the object is done through touch interactions, with virtually no menus. You can add some basic animations as well and trigger movements, too.
The world of phone-based AR has involved a lot of promises, but the future that’s developed has so far been more iterative and less platform shift-y. For startups exclusively focused on mobile AR, there’s been some soul-searching to find ways to bring more lightweight experiences to life that don’t require as much friction or commitment from users.
8th Wall is a team focused on building developer tools for mobile AR experiences. The startup has raised more than $10 million to usher developers into the augmented world.
The company announced this week that it has built a one-stop authoring platform that will help its customers create and ship AR experiences hosted by 8th Wall. It’s a step forward in what the startup has been trying to build, and a further sign that marketing activations are probably the most buoyant money-makers in the rather flat phone-based AR space at the moment.
The editor supports popular immersive web frameworks like A-Frame, three.js and Babylon.js. It’s a development platform, but while game engine tools like Unity have features focused on heavy rendering, 8th Wall is more interested in “very fast, lightweight projects that can be built up to any scale,” the startup’s CEO Erik Murphy tells TechCrunch.
8th Wall’s initial sell was an augmented reality platform akin to ARKit and ARCore that allowed developers to build content that supported a wider breadth of smartphones. Today, 8th Wall’s team of 14 is focused on a technology called WebAR that allows mobile phones to call up web experiences inside the browser.
The main sell of WebAR is the same appeal of web apps; users don’t need to download anything and they can access the experience with just a link. This is great for branded marketing interactions, where expecting users to download an app is pretty laughable; moving this process to the web with a link or a QR code makes life much easier.
The startup’s cloud-based authoring and hosting platform is available now for its agency and