
Space is the next economic frontier… hear more about it at Disrupt SF

By Devin Coldewey

For decades space has been the play place for world powers, but the advent of (relatively) cheap and frequent rocket launches has opened it up for new business opportunities. But it’s still hard as hell, as early adopters of this orbital economy, like Tess Hatch of Bessemer Ventures, Swarm’s Sara Spangelo and OneWeb’s Adrian Steckel, can attest. They’ll be on the Extra Crunch stage at Disrupt SF 2019 on October 3rd at 1:40 PM.

Spangelo and Steckel are in the midst of launching what have been termed “mega-constellations,” collections of hundreds or thousands of satellites offering a coordinated service (in their cases, global connectivity). These efforts are only possible with the new launch economy, and came hot on its heels, showing there’s no reason to wait to put new plans in action.

But such constellations bring their own challenges. Just from an orbital logistics point of view, launching a single satellite so that it enters a unique and predictable trajectory is hard enough; launching a dozen or a hundred at once is more difficult by far. And after launch, how will those satellites be tracked? How will they communicate to the surface and each other? What about the growing risk of collisions?

On top of that are more terrestrial, but no less crucial, questions: What services can be made available from orbit? What’s a reasonable amount to spend on them? How will they compete with and accommodate one another? Whose regulations will they follow?

These latter questions are among those that must also be answered by investors like Hatch, who is familiar with both the technical and capital side of the burgeoning space industry (and of course the technical side of the capital side). Space ventures can be extremely expensive and high-risk, but to get your foot in the door at this stage could be the start of a billion-dollar advantage a couple of years down the line.

If you’re planning on getting involved with the new space economy, or are just curious about it, join us for an extended discussion and Q&A on the 3rd.

Disrupt SF runs October 2 to October 4 at the Moscone Center in San Francisco. Tickets are available here, and they just happen to be available at a discount today only.

Voyage raises $31 million to bring driverless taxis to communities

By Kirsten Korosec

Voyage, the autonomous vehicle startup that spun out of Udacity, announced Thursday it has raised $31 million in a round led by Franklin Templeton.

Khosla Ventures, Jaguar Land Rover’s InMotion Ventures and Chevron Technology Ventures also participated in the round. The company, which operates a ride-hailing service in retirement communities using self-driving cars supported by human safety drivers, has raised a total of $52 million since launching in 2017. The new funding includes a $3 million convertible note.

Voyage CEO Oliver Cameron has big plans for the fresh injection of capital, including hiring and expanding its fleet of self-driving Chrysler Pacifica minivans, which always have a human safety driver behind the wheel.

Ultimately, the expanded G2 fleet and staff are just the means toward Cameron’s grander mission to turn Voyage into a truly driverless and profitable ride-hailing company.

“It’s not just about solving self-driving technology,” Cameron told TechCrunch in a recent interview, explaining that a cost-effective vehicle designed to be driverless is the essential piece required to make this a profitable business.

The company is in the midst of a hiring campaign that Cameron hopes will take its 55-person staff to more than 150 over the next year. Voyage has had some success attracting high-profile people to fill executive-level positions, including CTO Drew Gray, who previously worked at Uber ATG, Otto, Cruise and Tesla, as well as former NIO and Tesla employee Davide Bacchet as director of autonomy.

Funds will also be used to increase its fleet of second-generation self-driving cars (called G2) that are currently being used in a 4,000-resident retirement community in San Jose, Calif., as well as The Villages, a 40-square-mile, 125,000-resident retirement city in Florida. Voyage’s G2 fleet has 12 vehicles. Cameron didn’t provide details on how many vehicles it will add to its G2 fleet, only describing it as a “nice jump that will allow us to serve consumers.”

Voyage used the G2 vehicles to create a template of sorts for its eventual driverless vehicle. This driverless product — a term Cameron has used in a previous post on Medium — will initially be limited to 25 miles per hour, which is the driving speed within the two retirement communities in which Voyage currently tests and operates. The vehicles might operate at a low speed, but they are capable of handling complex traffic interactions, he wrote.

“It won’t be the most cost-effective vehicle ever made because the industry still is in its infancy, but it will be a huge, huge, huge improvement over our G2 vehicle in terms of being able to scale out a commercial service and make money on each ride,” Cameron said.

Voyage initially used modified Ford Fusion vehicles to test its autonomous vehicle technology, then in July 2018 introduced Chrysler Pacifica minivans, its second generation of autonomous vehicles. But the end goal has always been a driverless product.

Voyage engineers Alan Mond and Trung Dung Vu

TechCrunch previously reported that the company has partnered with an automaker to provide this next-generation vehicle that has been designed specifically for autonomous driving. Cameron wouldn’t name the automaker. The vehicle will be electric and it won’t be a retrofit like the Chrysler Pacifica Hybrid vehicles Voyage currently uses or its first-generation vehicle, a Ford Fusion.

Most important, and a detail Cameron did share with TechCrunch: the vehicle it uses for its driverless service will have redundancies and safety-critical applications built into it.

Voyage also has deals in place with Enterprise rental cars and Intact insurance company to help it scale.

“You can imagine leasing is much more optimal than purchasing and owning vehicles on your balance sheet,” Cameron said. “We have those deals in place that will allow us to not only get the vehicle costs down, but other aspects of the vehicle into the right place as well.”

Starship Technologies CEO Lex Bayer on focus and opportunity in autonomous delivery

By Darrell Etherington

Starship Technologies is fresh off a recent $40 million funding round, and the robotics startup finds itself in a much-changed market compared to when it got its start in 2014. Founded by software industry veterans, including Skype and Rdio co-founder Janus Friis, Starship’s focus is entirely on building and commercializing fleets of autonomous sidewalk delivery robots.

Starship invented this category when it debuted, but five years later it’s one of a number of companies looking to deploy what essentially amounts to wheeled, self-driven coolers that can carry small packages and everyday freight, including fresh food, to waiting customers. CEO Lex Bayer, a former sales leader from Airbnb, took over the top spot at Starship last year and is eager to focus the company’s efforts in a drive to take full advantage of its technology and experience lead.

The result is transforming what looked, to all external observers, like a long-tail technology play into a thriving commercial enterprise.

“We want to do 100 universities in the next 24 months, and we’ll do about 25 to 50 robots on each campus,” Bayer said in an interview about his company’s plans for the future.

Former Google X exec Mo Gawdat wants to reinvent consumerism

By Frederic Lardinois

Mo Gawdat, the former Google and Google X executive, is probably best known for his book Solve for Happy: Engineer Your Path to Joy. He left Google X last year. Quite a bit has been written about the events that led to him leaving Google, including the tragic death of his son. While happiness is still very much at the forefront of what he’s doing, he’s also now thinking about his next startup: T0day.

To talk about T0day, I sat down with the Egypt-born Gawdat at the Digital Frontrunners event in Copenhagen, where he gave one of the keynote presentations. Gawdat is currently based in London. He has adopted a minimalist lifestyle, with no more than a suitcase and a carry-on full of things. Unlike many of the Silicon Valley elite who have recently adopted a kind of performative asceticism, Gawdat’s commitment to minimalism feels genuine — and it also informs his new startup.

“In my current business, I’m building a startup that is all about reinventing consumerism,” he told me. “The problem with retail and consumerism is it’s never been disrupted. E-commerce, even though we think is a massive revolution, it’s just an evolution and it’s still tiny as a fraction of all we buy. It was built for the Silicon Valley mentality of disruption, if you want, while actually, what you need is cooperation. There are so many successful players out there, so many efficient supply chains. We want the traditional retailers to be successful and continue to make money — even make more money.”

What T0day wants to be is a platform that integrates all of the players in the retail ecosystem. That kind of platform, Gawdat argues, never existed before, “because there was never a platform player.”

That sounds like an efficient marketplace for moving goods, but in Gawdat’s imagination, it is also a way to do good for the planet. Most of the fuel burned today isn’t for moving people, he argues, but goods. A lot of the food we buy goes to waste (together with all of the resources it took to grow and ship it) and single-use plastic remains a scourge.

How does T0day fix that? Gawdat argues that today’s e-commerce is nothing but a digital rendering of the same window shopping people have done for ages. “You have to reimagine what it’s like to consume,” he said.

The reimagined way to consume is essentially just-in-time shipping for food and other consumer goods, based on efficient supply chains that outsmart today’s hub-and-spoke distribution centers and can deliver anything to you in half an hour. If everything you need to cook a meal arrives 15 minutes before you want to start cooking, you only need to order the items you need at that given time, and instead of a plastic container, it could come in a paper bag. “If I have the right robotics and the right autonomous movements — not just self-driving cars, because self-driving cars are a bit far away — but the right autonomous movements within the enterprise space of the warehouse, I could literally give it to you with the predictability of five minutes within half an hour,” he explained. “If you get everything you need within half an hour, why would you need to buy seven apples? You would buy three.”

Some companies, including the likes of Uber, are obviously building some of the logistics networks that will enable this kind of immediate drop shipping, but Gawdat doesn’t think Uber is the right company for this. “This is going to sound a little spiritual. There is what you do and there is the intention behind why you do it,” he said. “You can do the exact same thing with a different intention and get a very different result.”

That’s an ambitious project, but Gawdat argues that it can be done without using massive amounts of resources. Indeed, he argues that one of the problems with Google X, and especially big moonshot projects like Loon and self-driving cars, was that they weren’t really resource-constrained. “Some things took longer than they should have,” he said. “But I don’t criticize what they did at all. Take the example of Loon and Facebook. Loon took longer than it should have. In my view, it was basically because of an abundance of resources and sometimes innovation requires a shoestring. That’s my only criticism.”

T0day, which Gawdat hasn’t really talked about publicly in the past, is currently self-funded. A lot of people are advising him to raise money for it. “We’re getting a lot of advice that we shouldn’t self-fund,” he said, but he also believes that the company will need some strategic powerhouses on its side, maybe retailers or companies that have already invested in other components of the overall platform.

T0day’s ambitions are massive, but Gawdat thinks that his team can get the basic elements right, be that the fulfillment center design or the routing algorithms and the optimization engines that power it all. He isn’t ready to talk about those, though. What he does think is that T0day won’t be the interface for these services. It’ll be the back end and allow others to build on top. And because his previous jobs have allowed him to live a comfortable life, he isn’t all that worried about margins either, and would actually be happy if others adopted his idea, thereby reducing waste.

The Void’s Curtis Hickman on scaling, creative IP and the future of VR experiences

By Greg Kumparak

What can you do with virtual reality when you have complete control of the physical space around the player? How “real” can virtual reality become?

That’s the core concept behind The Void. They take over retail spaces in places like Downtown Disney and shopping malls around the country and turn them into virtual reality playgrounds. They’ve got VR experiences based on properties like Star Wars, Ghostbusters, and Wreck-It Ralph; while these big names tend to be the main attractions, they’re dabbling with creating their own original properties, too.

By building both the game environment and the real-world rooms in which players wander, The Void can make the physical and virtual align. If you see a bench in your VR headset, there’s a bench there in the real world for you to sit on; if you see a lever on the wall in front of you, you can reach out and physically pull it. Land on a lava planet and heat lamps warm your skin; screw up a puzzle, and you’ll feel a puff of mist letting you know to try something else.

At $30-$35 per person for what works out to be a roughly thirty-minute experience (about ten minutes of which are spent watching a scene-setting video and getting your group into VR suits), it’s pretty pricey. But it’s also some of the most mind-bending VR I’ve ever seen.

The Void reportedly raised about $20 million earlier this year and is in the middle of a massive expansion. It’s more than doubling its number of locations, opening 25 new spots in a partnership with the Unibail-Rodamco-Westfield chain of malls.

I sat down to chat with The Void’s co-founder and Chief Creative Officer, Curtis Hickman, to hear how they got started, how his background (in stage magic!) comes into play here, how they came to work with massive properties like Ghostbusters and Star Wars, and where he thinks VR is going from here.

Greg Kumparak: Tell me a bit about yourself. How’d you get your start? How’d you get into making VR experiences?

Why now is the time to get ready for quantum computing

By Frederic Lardinois

For the longest time, even while scientists were working to make it a reality, quantum computing seemed like science fiction. It’s hard enough to make any sense out of quantum physics to begin with, let alone the practical applications of this less than intuitive theory. But we’ve now arrived at a point where companies like D-Wave, Rigetti, IBM and others actually produce real quantum computers.

They are still in their infancy and nowhere near powerful enough to run anything but very basic programs, simply because they can’t run long enough before the quantum states decohere, but virtually all experts say these are solvable problems and that now is the time to prepare for the advent of quantum computing. Indeed, Gartner just launched a Quantum Volume metric, based on IBM’s research, that aims to help CIOs prepare for the impact of quantum computing.
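
To make “very basic programs” concrete, here is a minimal sketch using Qiskit, IBM’s open-source SDK (chosen here purely for illustration): a two-qubit Bell-state circuit, shallow enough to finish before decoherence becomes a problem.

```python
# A minimal sketch of the kind of "very basic program" today's quantum
# computers can run: preparing and measuring a two-qubit Bell state.
# Circuit depth matters because qubits decohere quickly.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # read both qubits out

# Short, shallow circuits like this complete well within coherence times;
# the deeper the circuit, the more decoherence corrupts the result.
print(qc.draw())
```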

To discuss the state of the industry and why now is the time to get ready, I sat down with IBM’s Jay Gambetta, who will also join us for a panel on Quantum Computing at our TC Sessions: Enterprise event in San Francisco on September 5, together with Microsoft’s Krysta Svore and Intel’s Jim Clark.

The risks of amoral AI

By Jonathan Shieber
Kyle Dent Contributor
Kyle Dent is a Research Area Manager for PARC, a Xerox Company, focused on the interplay between people and technology. He also leads the ethics review committee at PARC.

Artificial intelligence is now being used to make decisions about lives, livelihoods and interactions in the real world in ways that pose real risks to people.

We were all skeptics once. Not that long ago, conventional wisdom held that machine intelligence showed great promise, but it was always just a few years away. Today there is absolute faith that the future has arrived.

It’s not that surprising with cars that (sometimes and under certain conditions) drive themselves and software that beats humans at games like chess and Go. You can’t blame people for being impressed.

But board games, even complicated ones, are a far cry from the messiness and uncertainty of real life, and autonomous cars still aren’t actually sharing the road with us (at least not without some catastrophic failures).

AI is being used in a surprising number of applications, making judgments about job performance, hiring, loans, and criminal justice among many others. Most people are not aware of the potential risks in these judgments. They should be. There is a general feeling that technology is inherently neutral — even among many of those developing AI solutions. But AI developers make decisions and choose tradeoffs that affect outcomes. Developers are embedding ethical choices within the technology but without thinking about their decisions in those terms.

These tradeoffs are usually technical and subtle, and the downstream implications are not always obvious at the point the decisions are made.

The fatal Uber accident in Tempe, Arizona, is a blunt but illustrative example that makes it easy to see how this happens.

The autonomous vehicle system actually detected the pedestrian in time to stop, but the developers had tweaked the emergency braking system in favor of not braking too much, balancing a tradeoff between jerky driving and safety. The Uber developers opted for the more commercially viable choice. Eventually autonomous driving technology will improve to a point that allows for both safety and smooth driving, but will we put autonomous cars on the road before that happens? Profit interests are pushing hard to get them on the road immediately.
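
To see how an ethical choice can hide inside an engineering parameter, consider this deliberately toy sketch (not Uber’s actual system): the whole comfort-versus-safety tradeoff collapses into a single confidence threshold that a developer had to pick.

```python
# Hypothetical illustration: an emergency-braking module fires only when the
# detector's confidence that an obstacle is present clears a tuned threshold.

def should_brake(obstacle_confidence: float, threshold: float) -> bool:
    """Brake only when detection confidence exceeds the tuned threshold."""
    return obstacle_confidence >= threshold

# A cautious threshold brakes on faint detections (safer, jerkier ride);
# a "commercially viable" threshold waits for near-certainty.
for threshold in (0.3, 0.9):
    print(threshold, should_brake(obstacle_confidence=0.6, threshold=threshold))
# 0.3 True   (brakes for the faintly detected pedestrian)
# 0.9 False  (suppresses braking to avoid a jerky ride)
```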

Physical risks pose an obvious danger, but there has been real harm from automated decision-making systems as well. AI does, in fact, have the potential to benefit the world. Ideally, we mitigate for the downsides in order to get the benefits with minimal harm.

A significant risk is that we advance the use of AI technology at the cost of reducing individual human rights. We’re already seeing that happen. One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm. 

Buyer Beware

Buyers of the technology are at a disadvantage when they know so much less about it than the sellers do. For the most part, decision makers are not equipped to evaluate intelligent systems. In economic terms, there is an information asymmetry that puts AI developers in a more powerful position over those who might use it. (Side note: the subjects of AI decisions generally have no power at all.) The nature of AI is that you simply trust (or don’t trust) the decisions it makes. You can’t ask the technology why it decided something, whether it considered other alternatives, or suggest hypotheticals to explore variations on the question you asked. Given the current trust in technology, vendors’ promises about a cheaper and faster way to get the job done can be very enticing.

So far, we as a society have not had a way to assess the value of algorithms against the costs they impose on society. There has been very little public discussion, even when government entities decide to adopt new AI solutions. Worse, information about the data used to train a system, its weighting schemes, model selection, and the other choices vendors make while developing the software is deemed a trade secret and is therefore not available for discussion.

Image via Getty Images / sorbetto

The Yale Journal of Law and Technology published a paper by Robert Brauneis and Ellen P. Goodman where they describe their efforts to test the transparency around government adoption of data analytics tools for predictive algorithms. They filed forty-two open records requests to various public agencies about their use of decision-making support tools.

Their “specific goal was to assess whether open records processes would enable citizens to discover what policy judgments these algorithms embody and to evaluate their utility and fairness”. Nearly all of the agencies involved were either unwilling or unable to provide information that could lead to an understanding of how the algorithms worked to decide citizens’ fates. Government record-keeping was one of the biggest problems, but companies’ aggressive trade secret and confidentiality claims were also a significant factor.

Using data-driven risk assessment tools can be useful especially in cases identifying low-risk individuals who can benefit from reduced prison sentences. Reduced or waived sentences alleviate stresses on the prison system and benefit the individuals, their families, and communities as well. Despite the possible upsides, if these tools interfere with Constitutional rights to due process, they are not worth the risk.

All of us have the right to question the accuracy and relevance of information used in judicial proceedings and in many other situations as well. Unfortunately for the citizens of Wisconsin, the argument that a company’s profit interest outweighs a defendant’s right to due process was affirmed by that state’s supreme court in 2016.

Fairness is in the eye of the beholder

Of course, human judgment is biased too. Indeed, professional cultures have had to evolve to address it. Judges for example, strive to separate their prejudices from their judgments, and there are processes to challenge the fairness of judicial decisions.

In the United States, the 1968 Fair Housing Act was passed to ensure that real-estate professionals conduct their business without discriminating against clients. Technology companies do not have such a culture. Recent news has shown just the opposite. For individual AI developers, the focus is on getting the algorithms correct with high accuracy for whatever definition of accuracy they assume in their modeling.

I recently listened to a podcast where the conversation wondered whether talk about bias in AI wasn’t holding machines to a different standard than humans—seeming to suggest that machines were being put at a disadvantage in some imagined competition with humans.

As true technology believers, the host and guest eventually concluded that once AI researchers have solved the machine bias problem, we’ll have a new, even better standard for humans to live up to, and at that point the machines can teach humans how to avoid bias. The implication is that there is an objective answer out there, and while we humans have struggled to find it, the machines can show us the way. The truth is that in many cases there are contradictory notions about what it means to be fair.

A handful of research papers have come out in the past couple of years that tackle the question of fairness from a statistical and mathematical point-of-view. One of the papers, for example, formalizes some basic criteria to determine if a decision is fair.

In their formalization, in most situations, differing ideas about what it means to be fair are not just different but actually incompatible. A single objective solution that can be called fair simply doesn’t exist, making it impossible for statistically trained machines to answer these questions. Considered in this light, a conversation about machines giving human beings lessons in fairness sounds more like theater of the absurd than a thoughtful conversation about the issues involved.
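
Here is a toy example, with invented numbers, of the incompatibility those papers formalize: when base rates differ between groups, a classifier can satisfy one common fairness criterion while violating another.

```python
# Toy data: Group A has 100 applicants, 50 truly qualified; Group B has 100
# applicants, 25 truly qualified. Suppose the classifier approves exactly the
# qualified people in each group.

approval_rate_a = 50 / 100   # demographic parity compares approval rates
approval_rate_b = 25 / 100
print("demographic parity holds:", approval_rate_a == approval_rate_b)  # False

tpr_a = 50 / 50              # equal opportunity compares true-positive rates
tpr_b = 25 / 25
print("equal opportunity holds:", tpr_a == tpr_b)                       # True

# Whenever base rates differ, even a perfectly accurate classifier satisfies
# one criterion and violates the other: there is no single "fair" answer.
```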

Image courtesy of TechCrunch/Bryce Durbin

When there are questions of bias, a discussion is necessary. What it means to be fair in contexts like criminal sentencing, granting loans, and job and college opportunities, for example, has not been settled and unfortunately contains political elements. We’re being asked to join in an illusion that artificial intelligence can somehow de-politicize these issues. The fact is, the technology embodies a particular stance, but we don’t know what it is.

Technologists with their heads down, focused on algorithms, are determining important structural issues and making policy choices. This removes the collective conversation and cuts off input from other points of view. Sociologists, historians, political scientists, and above all stakeholders within the community would have a lot to contribute to the debate. Applying AI to these tricky problems paints a veneer of science that tries to dole out apolitical solutions to difficult questions.

Who will watch the (AI) watchers?

One major driver of the current trend to adopt AI solutions is that the negative externalities from the use of AI are not borne by the companies developing it. Typically, we address this situation with government regulation. Industrial pollution, for example, is restricted because it creates a future cost to society. We also use regulation to protect individuals in situations where they may come to harm.

Both of these potential negative consequences exist in our current uses of AI. For self-driving cars, there are already regulatory bodies involved, so we can expect a public dialog about when and in what ways AI-driven vehicles can be used. What about the other uses of AI? Currently, except for some action by New York City, there is exactly zero regulation around the use of AI. The most basic assurances of algorithmic accountability are not guaranteed for either users of technology or the subjects of automated decision making.

Image via Getty Images / nadia_bormotova

Unfortunately, we can’t leave it to companies to police themselves. Facebook’s slogan, “Move fast and break things” has been retired, but the mindset and the culture persist throughout Silicon Valley. An attitude of doing what you think is best and apologizing later continues to dominate.

This has apparently been effective when building systems to upsell consumers or connect riders with drivers. It becomes completely unacceptable when you make decisions affecting people’s lives. Even if well-intentioned, the researchers and developers writing the code don’t have the training or, at the risk of offending some wonderful colleagues, the inclination to think about these issues.

I’ve seen firsthand too many researchers who demonstrate a surprising nonchalance about the human impact. I recently attended an innovation conference just outside of Silicon Valley. One of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.

When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.” This is just one of the worst examples I’ve seen from many researchers who don’t have these issues on their radars. I suppose that requiring computer scientists to double major in moral philosophy isn’t practical, but the lack of concern is striking.

Recently we learned that Amazon abandoned an in-house technology that they had been testing to select the best resumes from among their applicants. Amazon discovered that the system they created developed a preference for male candidates, in effect, penalizing women who applied. In this case, Amazon was sufficiently motivated to ensure their own technology was working as effectively as possible, but will other companies be as vigilant?

As a matter of fact, Reuters reports that other companies are blithely moving ahead with AI for hiring. A third-party vendor selling such technology actually has no incentive to test that it’s not biased unless customers demand it, and as I mentioned, decision makers are mostly not in a position to have that conversation. Again, human bias plays a part in hiring too. But companies can and should deal with that.

With machine learning, they can’t be sure what discriminatory features the system might learn. Absent market forces, unless companies are compelled to be transparent about the development and use of opaque technology in domains where fairness matters, it’s not going to happen.

Accountability and transparency are paramount to safely using AI in real-world applications. Regulations could require access to basic information about the technology. Since no solution is completely accurate, the regulation should allow adopters to understand the effects of errors. Are errors relatively minor or major? Uber’s use of AI killed a pedestrian. How bad is the worst-case scenario in other applications? How are algorithms trained? What data was used for training and how was it assessed to determine its fitness for the intended purpose? Does it truly represent the people under consideration? Does it contain biases? Only by having access to this kind of information can stakeholders make informed decisions about appropriate risks and trade-offs.

At this point, we might have to face the fact that our current uses of AI are getting ahead of its capabilities and that using it safely requires a lot more thought than it’s getting now.

Watch a Waymo self-driving car test its sensors in a haboob

By Kirsten Korosec

Waymo, the self-driving car company under Alphabet, has been testing in the suburbs of Phoenix for several years now. And while the sunny metropolis might seem like the ideal and easiest location to test autonomous vehicle technology, there are times when the desert becomes a dangerous place for any driver — human or computer.

The two big safety concerns in this desert region are sudden downpours that cause flash floods and haboobs, giant walls of dust between 1,500 and 3,000 feet high that can cover up to 100 square miles. One record-breaking haboob in July 2011 covered the entire Phoenix valley, an area of more than 517 square miles.

Waymo on Friday released a blog post with two videos showing how the sensors on its self-driving vehicles detect and recognize objects while navigating through a haboob in Phoenix and fog in San Francisco. The vehicle in Phoenix was manually driven, while the one in the fog video was in autonomous mode.

The point of the videos, Waymo says, is to show how, and whether, the vehicles recognize objects in these moments of extremely low visibility. And they do. The haboob video shows how its sensors work to identify a pedestrian crossing a street with little to no visibility.

Waymo uses a combination of lidar, radar and cameras to detect and identify objects. Fog, rain or dust can limit visibility in all or some of these sensors.

Waymo doesn’t silo the sensors affected by a particular weather event. Instead, it continues to take in data from all the sensors, even those that don’t function as well in fog or dust, and uses that collective information to better identify objects.
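
A highly simplified sketch of that idea follows (Waymo hasn’t published its fusion code; the weights and numbers here are invented): degraded sensors are down-weighted rather than dropped, so their residual signal still contributes to the final estimate.

```python
# Hypothetical confidence-weighted sensor fusion: combine every sensor's
# reading instead of siloing off the ones degraded by dust or fog.

def fuse_detections(readings: dict, weights: dict) -> float:
    """Weighted average of per-sensor confidence that an object is present."""
    total_weight = sum(weights[s] for s in readings)
    return sum(readings[s] * weights[s] for s in readings) / total_weight

# In dense dust the camera degrades, but radar still sees the pedestrian.
readings = {"camera": 0.2, "lidar": 0.5, "radar": 0.9}
weights = {"camera": 0.3, "lidar": 0.8, "radar": 1.0}  # dust-adjusted, invented

print(f"fused confidence: {fuse_detections(readings, weights):.2f}")  # ~0.65
```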

Autonomous vehicles have the potential to improve on visibility, one of the greatest performance limitations of humans, Debbie Hersman, Waymo’s chief safety officer, wrote in the blog post. If Waymo or other AV companies are successful, they could help reduce one of the leading contributors to crashes. The Department of Transportation estimates that weather contributes to 21% of annual U.S. crashes.

Still, there are times when even an autonomous vehicle doesn’t belong on the road. It’s critical for any company planning to deploy AVs to have a system that can not only identify worsening conditions, but also take the safest action when they do.

Waymo vehicles are designed to automatically detect sudden extreme weather changes, such as a snowstorm, that could impact the ability of a human or an AV to drive safely, according to Hersman.

The question is what happens next. Humans are supposed to pull off the road during a haboob and turn off the vehicle, and to take similar action in heavy fog. Waymo’s self-driving vehicles will do the same if weather conditions deteriorate to the point that the company believes the safe operation of its cars would be affected, Hersman wrote.

The videos and blog post are the latest effort by Waymo to showcase how and where it’s testing. The company announced August 20 that it has started testing how its sensors handle heavy rain in Florida. The move to Florida will focus on data collection and testing sensors; the vehicles will be manually driven for now.

Waymo also tests (or has tested) its technology in and around Mountain View, Calif.; Novi, Mich.; Kirkland, Wash.; and San Francisco. The bulk of the company’s activities have been in the suburbs of Phoenix and around Mountain View.

Remediant lands $15M Series A to disrupt privileged access security

By Ron Miller

Remediant, a startup that helps companies secure privileged access in a modern context, announced a $15 million Series A today led by Dell Technologies Capital and ForgePoint Capital.

Remediant’s co-founders, Paul Lanzi and Tim Keeler, worked in biotech for years and saw first-hand a problem with the way companies secured privileged access: it was granted carte blanche to certain individuals in the organization. They believed that if you could limit access, the space would be more secure and less vulnerable to hackers.

Lanzi says they started the company with two core concepts. “The first concept is the ability to assess or detect all of the places where privileged accounts exist and what systems they have access to. The second concept is to strip away all of the privileged access from all of those accounts and grant it back on a just-in-time basis,” Lanzi explained.

If you’re thinking that could get in the way of people who need access to do their jobs, the founders, as former IT admins, considered that. Remediant is based on a Zero Trust model, where you have to prove you have the right to access the privileged area. But they do provide a reasonable baseline amount of time for users who need it, within the confines of continuously enforced access.

“Continuous enforcement is part of what we do, so by default we grant you four hours of access when you need that access, and then after that four hours, even if you forget to come back and end your session, we will automatically revoke that access. In that way all of the systems that are protected by SecureONE (the company’s flagship product) are held in this Zero Trust state where no one has access to them on a day-to-day basis,” Lanzi said.
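
A minimal sketch of the just-in-time pattern Lanzi describes appears below; the function names and in-memory store are invented for illustration and are not Remediant’s API.

```python
# Just-in-time privileged access, sketched: grants expire automatically after
# a fixed window, so the default state is always "no standing access."
from datetime import datetime, timedelta

GRANT_WINDOW = timedelta(hours=4)  # the default window from the interview
grants = {}                        # user -> expiry timestamp

def grant_access(user: str) -> None:
    """Grant temporary privileged access starting now."""
    grants[user] = datetime.utcnow() + GRANT_WINDOW

def has_access(user: str) -> bool:
    """Zero Trust default: deny unless a live, unexpired grant exists."""
    expiry = grants.get(user)
    if expiry is None:
        return False
    if datetime.utcnow() >= expiry:
        del grants[user]  # continuous enforcement: revoke on expiry
        return False
    return True

grant_access("alice")
print(has_access("alice"))  # True inside the four-hour window
print(has_access("bob"))    # False: privileged access is off by default
```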

Remediant SecureONE Dashboard. Screenshot: Remediant

The company has bootstrapped until now, and has actually been profitable, something that’s unusual for a startup at this stage of development, but Lanzi says they decided to take an investment in order to shift gears and concentrate on growth and product expansion.

Deepak Jeevankumar, managing director at investor Dell Technologies Capital, says it’s not easy for security startups to rise above the noise, but he saw something in Remediant’s founders. “Tim and Paul came from the practitioners’ viewpoint. They knew the actual problems that people face in terms of privileged access. So they had a very strong empathy towards the customer’s problem because they lived through it,” Jeevankumar told TechCrunch.

He added that the privileged access market hasn’t really been updated in two decades. “It’s a market ripe for disruption. They are combining the just-in-time philosophy with the Zero Trust philosophy, and are bringing that to the crown jewel of administrative access,” he said.

The company’s tools are installed on the customer’s infrastructure, either on-prem or in the cloud. They don’t have a pure cloud product at the moment, but they have plans for a SaaS version down the road to help small and medium-sized businesses solve the privileged access problem.

Lanzi says they are also looking to expand the product line in other ways with this investment. “The basic philosophies that underpin our technology are broadly applicable. We want to start applying our technology in those other areas as well. So as we think toward a future that looks more like cloud and more like DevOps, we want to be able to add more of those features to our products,” he said.

Porsche invests in ‘low visibility’ sensor startup TriEye

By Kirsten Korosec

Porsche’s venture arm has acquired a minority stake in TriEye, an Israeli startup that’s working on a sensor technology to help vehicle driver-assistance and self-driving systems see better in poor weather conditions like dust, fog and rain.

The strategic investment is part of a Series A financing round that has been expanded to $19 million. The round was initially led by Intel Capital and Israeli venture fund Grove Ventures. Porsche has held shares in Grove Ventures since 2017.

TriEye has raised $22 million to date. Terms of Porsche’s investment were not disclosed.

The additional funding will be used for ongoing product development, operations and hiring talent, according to TriEye.

The advanced driver-assistance systems found in most new vehicles today typically rely on a combination of cameras and radar to “see.” Autonomous vehicle systems, which are being developed and tested by dozens of companies such as Argo AI, Aptiv, Aurora, Cruise and Waymo, have a more robust suite of sensors that includes light detection and ranging (lidar) along with cameras and ultrasonic sensors.

For either of these systems to function properly, they need to be able to see in all conditions. This pursuit of sensor technology has sparked a boom in startups hoping to tap into demand from automakers and companies working on self-driving car systems.

TriEye is one of them. The premise of TriEye is to solve the low-visibility problem created by poor weather conditions. The startup’s co-founders argue that fusing existing sensors such as radar, lidar and standard cameras doesn’t solve this problem.

TriEye, which was founded in 2017, believes the answer is short-wave infrared (SWIR) sensors. The startup says it has developed an HD SWIR camera that is smaller, higher-resolution and cheaper than other technologies. The camera is due to launch in 2020.

The technology is based on advanced nano-photonics research by Uriel Levy, a TriEye co-founder and CTO who is also a professor at the Hebrew University of Jerusalem.

The company says its secret sauce is its “unique” semiconductor design that will make it possible to manufacture SWIR HD cameras at a “fraction of their current cost.”

TriEye’s technology was apparently good enough to get Porsche’s attention.

Michael Steiner, a Porsche AG board member focused on R&D, said the technology was promising, as was the team, which is composed of people with expertise in deep learning, nano-photonics and semiconductor components.

“We see great potential in this sensor technology that paves the way for the next generation of driver assistance systems and autonomous driving functions,” Steiner said in a statement. “SWIR can be a key element: it offers enhanced safety at a competitive price.”

Postmates lands permit to test its Serve autonomous delivery robots in SF

By Darrell Etherington

Postmates has officially received the green light from the city of San Francisco to begin testing its Serve wheeled delivery robot on city streets, as first reported by the SF Chronicle and confirmed with Postmates by TechCrunch. The on-demand delivery company told us last week that it expected the permit to be issued shortly after a conditional approval, and that’s exactly what happened on Wednesday this week.

The permit doesn’t cover the entire city, just a designated area of blocks in and around Potrero Hill and the Inner Mission, but it will allow Postmates to begin testing up to three autonomous delivery robots at once, at speeds of up to 3 mph. Deliveries can only take place between 8 AM and 6:30 PM on weekdays, and a human has to be on hand within 30 feet of the vehicles at all times while they’re operating. Still, it’s a start, and a green light from a city regulatory environment that got off to a somewhat rocky start with less collaborative early pilots from other companies.
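
The permit’s conditions read like a checklist, so here they are encoded as a toy compliance check (the constants reflect the reported terms; the function itself is hypothetical).

```python
# Toy encoding of the reported permit terms for Postmates' Serve pilot.
from datetime import time

MAX_ROBOTS = 3
MAX_SPEED_MPH = 3.0
OPERATING_HOURS = (time(8, 0), time(18, 30))  # 8 AM to 6:30 PM, weekdays only
MAX_HUMAN_DISTANCE_FT = 30.0

def deployment_allowed(robots: int, speed_mph: float, now: time,
                       is_weekday: bool, human_distance_ft: float) -> bool:
    """True only when every permit condition is simultaneously satisfied."""
    return (robots <= MAX_ROBOTS
            and speed_mph <= MAX_SPEED_MPH
            and is_weekday
            and OPERATING_HOURS[0] <= now <= OPERATING_HOURS[1]
            and human_distance_ft <= MAX_HUMAN_DISTANCE_FT)

print(deployment_allowed(2, 3.0, time(10, 15), True, 12.0))  # True
print(deployment_allowed(2, 3.0, time(19, 0), True, 12.0))   # False: after hours
```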

Autonomous delivery bot company Marble also has a permit application pending with the city’s Public Works department, and will look to test its own four-wheeled, sensor-equipped rolling delivery bots within the city soon should it be granted similar testing approval.

Postmates first revealed Serve last December, taking a more anthropomorphic approach to the vehicle’s overall design. Like many short-distance delivery robots of its ilk, it includes a lockable cargo container and screen-based user interface for eventual autonomous deliveries to customers. The competitive field for autonomous rolling delivery bots is growing continuously, with companies like Starship Technologies, Amazon and many more throwing their hats in the ring.

UPS takes minority stake in self-driving truck startup TuSimple

By Kirsten Korosec

UPS said Thursday it has taken a minority stake in self-driving truck startup TuSimple just months after the two companies began testing the use of autonomous trucks in Arizona.

The size of the minority investment, which was made by the company’s venture arm UPS Ventures, was not disclosed. The investment and the testing come as UPS looks for new ways to remain competitive, cut costs and boost its bottom line.

TuSimple, which launched in 2015 and has operations in San Diego and Tucson, Arizona, believes it can deliver. The startup says it can cut average purchased transportation costs by 30%.

TuSimple, which is backed by Nvidia, ZP Capital and Sina Corp., is working on a “full-stack solution,” a wonky industry term that means developing and bringing together all of the technological pieces required for autonomous driving. TuSimple is developing a Level 4 system, a designation by the SAE that means the vehicle takes over all of the driving in certain conditions.

An important piece of TuSimple’s approach is its camera-centric perception solution. TuSimple’s camera-based system has a vision range of 1,000 meters, the company says.
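
Some back-of-envelope arithmetic (the speed, deceleration and latency figures are my assumptions, not TuSimple’s) suggests why that range matters for a heavy truck.

```python
# Rough stopping-distance estimate for a loaded truck at highway speed.
speed_mph = 65
speed_mps = speed_mph * 1609.34 / 3600   # ~29 m/s
decel_mps2 = 2.0                         # gentle braking for a heavy load
reaction_s = 1.0                         # allowance for system latency

stopping_m = speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)
print(f"stopping distance: {stopping_m:.0f} m (vision range: 1,000 m)")
# Roughly 240 m. The remaining range buys time to plan smooth, early braking
# instead of late emergency stops.
```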

The day when highways are filled with autonomous trucks is still years away. But UPS believes it’s worth jumping in at an early stage to take advantage of some of the automated driving technology, such as advanced braking, that TuSimple can offer today.

“UPS is committed to developing and deploying technologies that enable us to operate our global logistics network more efficiently,” Scott Price, chief strategy officer at UPS said in a statement. “While fully autonomous, driverless vehicles still have development and regulatory work ahead, we are excited by the advances in braking and other technologies that companies like TuSimple are mastering. All of these technologies offer significant safety and other benefits that will be realized long before the full vision of autonomous vehicles is brought to fruition — and UPS will be there, as a leader implementing these new technologies in our fleet.”

UPS initially tapped TuSimple to help it better understand how Level 4 autonomous trucking might function within its network. That relationship expanded in May, when the companies began using self-driving tractor-trailers to carry freight on a route between Tucson and Phoenix to test whether service and efficiency in the UPS network could be improved. This testing is ongoing. All of TuSimple’s self-driving trucks operating in the U.S. have a safety driver and an engineer in the cab.

TuSimple and UPS monitor all aspects of these trips, including safety data, transport time and the distance and time the trucks travel autonomously, the companies said Thursday.

UPS isn’t the only company TuSimple is hauling freight for as part of its testing. TuSimple has said it’s hauling loads for several customers in Arizona. The startup has a post-money valuation of $1.095 billion (aka unicorn status).

Artificial intelligence can contribute to a safer world

By Jonathan Shieber
Matt Ocko Contributor
Matt Ocko is co-Managing Partner and co-founder of DCVC (Data Collective).
Alan Cohen Contributor
Alan Cohen is an operating partner at DCVC.

We all see the headlines nearly every day. A drone disrupting the airspace in one of the world’s busiest airports, putting aircraft at risk (and inconveniencing hundreds of thousands of passengers), or attacks on critical infrastructure. Or a shooting in a place of worship, a school, a courthouse. Whether primitive (gunpowder) or cutting-edge (unmanned aerial vehicles), technology in the wrong hands can empower bad actors and put our society at risk, creating a sense of helplessness and frustration.

Current approaches to protecting our public venues are not up to the task and, frankly, appear to meet Einstein’s definition of insanity: “doing the same thing over and over and expecting a different result.” It is time to look past traditional defense technologies and see if newer approaches can tilt the pendulum back in the defender’s favor. Artificial Intelligence (AI) can play a critical role here, helping to identify, classify and promulgate counteractions on potential threats faster than any security personnel.

Using technology to prevent violence, specifically by searching for concealed weapons, has a long history. Alexander Graham Bell invented the first metal detector in 1881 in an unsuccessful attempt to locate the fatal slug as President James Garfield lay dying of an assassin’s bullet. The first commercial metal detectors were developed in the 1960s. Most of us are familiar with their use in airports, courthouses and other public venues to screen for guns, knives and bombs.

However, metal detectors are slow and full of false positives – they cannot distinguish between a Smith & Wesson and an iPhone. It is not enough to simply identify a piece of metal; it is critical to determine whether it is a threat. Thus, the physical security industry has developed newer approaches, including full-body scanners, which are now deployed on a limited basis. While effective to a point, the systems in use today all have significant drawbacks. One is speed. Full-body scanners, for example, can process only about 250 people per hour, not much faster than a metal detector. While that might be okay for low-volume courthouses, it’s a significant problem for larger venues like a sporting arena.

Image via Getty Images

Fortunately, new AI technologies are enabling major advances in physical security capabilities. These new systems not only deploy advanced sensors to screen for guns, knives and bombs, they get smarter with each screen, creating an increasingly large database of known and emerging threats while segmenting off alarms for common, non-threatening objects (keys, change, iPads, etc.).

As part of a new industrial revolution in physical security, engineers have developed a welcome approach to expediting security screenings for threats: machine learning algorithms, facial recognition, and advanced millimeter-wave and other RF sensors that non-intrusively screen people as they walk through scanning devices. It’s like walking through the sensors at the door at Nordstrom, the opposite of the prison-like experience of metal detectors with which we are all too familiar. These systems produce an analysis of what someone may be carrying in about a hundredth of a second, far faster than full-body scanners. What’s more, people do not need to empty their pockets during the process, which adds further speed. Even so, these solutions can screen for firearms, explosives, and suicide vests or belts at a rate of about 900 people per hour through one lane.
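
Quick arithmetic on the two throughput figures above shows what the difference means at scale (the crowd size and entry window are assumptions for illustration).

```python
# Lanes needed to admit a 60,000-person stadium crowd in a two-hour window.
import math

crowd, window_hours = 60_000, 2
for name, rate_per_hour in [("full-body scanner", 250), ("AI screening lane", 900)]:
    lanes = math.ceil(crowd / (rate_per_hour * window_hours))
    print(f"{name}: {lanes} lanes")
# full-body scanner: 120 lanes
# AI screening lane: 34 lanes
```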

Using AI, advanced screening systems enable people to walk through quickly and provide an automated decision without creating a bottleneck. This throughput greatly improves traffic flow while also improving the accuracy of detection, and it makes the technology suitable for additional facilities such as stadiums and other public venues like Lincoln Center in New York City and the Oakland airport.

Apollo Shield’s anti-drone system.

So much for the land; what about the air? Increasingly, drones are being used as weapons. Famously, this was seen in a drone attack last year against Venezuelan president Nicolas Maduro. An airport drone incident drew widespread attention in late 2018, when a drone shut down Gatwick Airport and stranded tens of thousands of people.

People are rightly concerned about how easy it is to get a gun. Drones are also easy to acquire and operate, and quite difficult to monitor and to defend against. AI is now being deployed to prevent drone attacks, whether at airports, stadiums, or critical infrastructure. For example, new AI-powered radar technology is being used to detect, classify, monitor and safely capture drones identified as dangerous.

Additionally, these systems can rapidly develop a map of the airspace and effectively create a security “dome” around specific venues or areas. These systems have an integration component to coordinate with on-the-ground security teams and first responders. Some even have a capture drone to incarcerate a suspicious drone. When a threatening drone is detected and classified by the system as dangerous, the capture drone is dispatched and nets the invading drone. The hunter then tows the targeted drone to a safe zone for the threat to be evaluated and, if needed, destroyed.

While there is much dialogue about the potential risk of AI affecting our society, there is also a positive side to these technologies. Coupled with our best physical security approaches, AI can help prevent violent incidents.

Inside Voyage’s plan to deliver a driverless future

By Kirsten Korosec

In two years, Voyage has gone from a tiny self-driving car upstart spun out of Udacity to a company able to operate on 200 miles of roads in retirement communities.

Now, Voyage is on the verge of introducing a new vehicle that is critical to its mission of launching a truly driverless ride-hailing service. (Human safety drivers not included.)

This internal milestone, which Voyage CEO Oliver Cameron hinted at in a recent Medium post, went largely unnoticed. Voyage, after all, is just a 55-person speck of a startup in an industry, where the leading companies have amassed hundreds of engineers backed by war chests of $1 billion or more. Voyage has raised just $23.6 million from investors that include Khosla Ventures, CRV, Initialized Capital and the venture arm of Jaguar Land Rover.

Still, the die has yet to be cast in this burgeoning industry of autonomous vehicle technology. These are the middle-school years for autonomous vehicles — a time when size can be mistaken for maturity and change occurs in unpredictable bursts.

The upshot? It’s still unclear which companies will solve the technical and business puzzles of autonomous vehicles. There will be companies that successfully launch robotaxis and still fail to turn their service into a profitable commercial enterprise. And there will be operationally savvy companies that fail to develop and validate the technology to a point where human drivers can be removed.

Voyage wants to unlock both.

Crowded field

Last chance for early-bird tickets to TC Sessions: Enterprise 2019

By Emma Comeau

It’s down to the wire, folks. Today’s the last day you can save $100 on your ticket to TC Sessions: Enterprise 2019, which takes place on September 5 at the Yerba Buena Center in San Francisco. The deadline expires in mere hours — at 11:59 p.m. (PT). Get the best possible price and buy your early-bird ticket right now.

We expect more than 1,000 attendees representing the enterprise software community’s best and brightest. We’re talking founders of companies in every stage and CIOs and systems architects from some of the biggest multinationals. And, of course, managing partners from the most influential venture and corporate investment firms.

Take a look at just some of the companies joining us for TC Sessions: Enterprise: Bain & Company, Box, Dell Technologies Capital, Google, Oracle, SAP and SoftBank. Let the networking begin!

You can expect a full day of main-stage interviews and panel discussions, plus break-out sessions and speaker Q&As. TechCrunch editors will dig into the big issues enterprise software companies face today along with emerging trends and technologies.

Data, for example, is a mighty hot topic, and you’ll hear a lot more about it during a session entitled Innovation Break: Data – Who Owns It? Enterprises have historically competed by being closed entities, keeping a closed architecture and innovating internally. When they apply this closed approach to the hottest new commodity, data, it simply no longer works. But as enterprises, startups and public institutions open themselves up, how open is too open? Hear from leaders who explore data ownership and the questions that need to be answered before the data floodgates are opened. Sponsored by SAP.

If investment is on your mind, don’t miss the Investor Q&A. Some of the greatest investors in enterprise will be on hand to answer your burning questions. Want to know more? Check out the full agenda.

Maximize your last day of early-bird buying power and take advantage of the group discount. Buy four or more tickets at once and save 20%. Here’s a bonus. Every ticket you buy to TC Sessions: Enterprise includes a free Expo Only pass to TechCrunch Disrupt SF on October 2-4.

It’s now o’clock, startuppers. Your opportunity to save $100 on tickets to TC Sessions: Enterprise ends tonight at precisely 11:59 p.m. (PT). Buy your early-bird tickets now and join us in September!

Is your company interested in sponsoring or exhibiting at TC Sessions: Enterprise? Contact our sponsorship sales team by filling out this form.

Autonomous air mobility company EHang to deploy air shuttle service in Guangzhou

By Darrell Etherington

China’s EHang, a company focused on developing and deploying autonomous passenger and freight low-altitude vehicles, will build out its first operational network of air taxis and transports in Guangzhou. The company announced that the Chinese city would play host to its pilot location for a citywide deployment.

The pilot will focus on showing not only that a low-altitude, rotor-powered aircraft makes sense for use in cities, but that a whole network of them can operate autonomously in concert, controlled and monitored by a central traffic management hub that EHang will develop together with the local Guangzhou government.

EHang, which was chosen at the beginning of this year by China’s Civil Aviation Administration as the sole pilot company allowed to build out autonomous flying passenger vehicle services, has already demonstrated flights of its EHang 184 vehicles carrying passengers in Vienna earlier this year, and ran a number of flights in Guangzhou in 2018 as well.

In addition to developing the air traffic control system to ensure that these vehicles operate safely as a fleet in the air above the city, EHang will be working with Guangzhou to build out the infrastructure needed to operate the network. The plan is to use the initial stages of the pilot to continue testing the vehicles, as well as the vertiports needed to support their operation, and then to work with commercial partners on goods transportation first.

The benefits of such a network will be especially valuable for cities like Guangzhou, where rapid growth has led to plenty of traffic and high density at the ground level. It could also potentially have advantages over a network of autonomous cars or wheeled vehicles, since those still have to contend with ground traffic, pedestrians, cyclists and other vehicles in order to operate, while the low-altitude air above a city is more or less unoccupied.

Self-driving truck startup Kodiak Robotics begins deliveries in Texas

By Kirsten Korosec

A year after coming out of stealth mode with $40 million, self-driving truck startup Kodiak Robotics will begin making its first commercial deliveries in Texas.

Kodiak will open a new facility in North Texas to support its freight operations, along with increased testing in the state.

There are some caveats to the milestone. Kodiak’s self-driving trucks will have a human safety driver behind the wheel. And it’s unclear how significant this initial launch is; the company didn’t provide details on who its customers are or what it will be hauling.

Kodiak has eight autonomous trucks in its fleet, which the company says is “growing quickly.”

Still, it does mark progress for such a young company, which co-founders Don Burnette and Paz Eshel say is due to its talented and experienced workforce. 

Burnette, who is CEO of Kodiak, was part of the Google self-driving project before leaving and co-founding Otto in early 2016, along with Anthony Levandowski, Lior Ron and Claire Delaunay. Uber would acquire Otto (and its co-founders). Burnette left Uber to launch Kodiak in April 2018 with Eshel, a former venture capitalist and now the startup’s COO.

In August 2018, the company announced it had raised $40 million in Series A financing led by Battery Ventures. CRV, Lightspeed Venture Partners and Tusk Ventures also participated in the round. Itzik Parnafes, a general partner at Battery Ventures, joined Kodiak’s board.

Kodiak is the latest autonomous vehicle company to test its technology in Texas. The state has become a magnet for autonomous vehicle startups, particularly those working on self-driving trucks. That’s largely due to the combination of a friendly regulatory environment and the state’s position as a logistics and transportation hub.

“As a region adding more than 1 million new residents each decade, it is important to develop a comprehensive strategy for the safe and reliable movement of people and goods,” Thomas Bamonte, senior program manager of Automated Vehicles for the North Central Texas Council of Governments, said in a statement. “Our policy officials on the Regional Transportation Council have been very forward-thinking in their recognition of technology as part of the answer, which is positioning our region as a leader in the automated vehicle industry.”

Self-driving truck startup TuSimple was awarded a contract this spring to complete five round trips, for a two-week pilot, hauling USPS trailers more than 1,000 miles between the postal service’s Phoenix and Dallas distribution centers. A safety engineer and driver will be on board throughout the pilot.

Other companies developing autonomous vehicle technology for trucks such as Embark and Starsky Robotics have also tested on Texas roads.

Quantum computing is coming to TC Sessions: Enterprise on Sept. 5

By Frederic Lardinois

Here at TechCrunch, we like to think about what’s next, and there are few technologies quite as exotic and futuristic as quantum computing. After what felt like decades of being “almost there,” we now have working quantum computers that are able to run basic algorithms, even if only for a very short time. As those times increase, we’ll slowly but surely get to the point where we can realize the full potential of quantum computing.

For our TechCrunch Sessions: Enterprise event in San Francisco on September 5, we’re bringing together some of the sharpest minds from some of the leading companies in quantum computing to talk about what this technology will mean for enterprises (p.s. early-bird ticket sales end this Friday). This could, after all, be one of those technologies where early movers will gain a massive advantage over their competitors. But how do you prepare yourself for this future today, while many aspects of quantum computing are still in development?

[Image: IBM’s quantum computer demonstrated at Disrupt SF 2018]

Joining us onstage will be Microsoft’s Krysta Svore, who leads the company’s quantum efforts; IBM’s Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; and Jim Clarke, the director of quantum hardware at Intel Labs.

That’s pretty much a Who’s Who of the current state of quantum computing, even though all of these companies are at different stages of their quantum journey. IBM already has working quantum computers; Intel has built a quantum processor and is investing heavily in the technology; and Microsoft is taking a very different approach, one that may lead to a breakthrough in the long run but that currently keeps it from having a working machine. In return, though, Microsoft has invested heavily in building the software tools needed to write quantum applications.

During the panel, we’ll discuss the current state of the industry, where quantum computing can already help enterprises today and what they can do to prepare for the future. The implications of this new technology also go well beyond faster computing (for some use cases); there are also the security issues that will arise once quantum computers become widely available and today’s encryption methods become easy to break.
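
To make that security point concrete: RSA’s safety rests on how hard it is to factor the public modulus into its two secret primes, and factoring is exactly the problem Shor’s algorithm would let a sufficiently large quantum computer solve efficiently. Here’s a minimal Python sketch, using deliberately tiny textbook primes (all values are illustrative and nothing like production key sizes), showing why recovering the factors is the same as recovering the private key:

    # Toy RSA with tiny textbook primes; purely illustrative, never use in practice.
    p, q = 61, 53                      # the secret primes (hypothetical values)
    n = p * q                          # public modulus: 3233
    e = 17                             # public exponent
    phi = (p - 1) * (q - 1)            # Euler's totient; computing it requires p and q
    d = pow(e, -1, phi)                # private exponent (Python 3.8+): 2753

    message = 42
    ciphertext = pow(message, e, n)    # anyone can encrypt with the public pair (n, e)
    recovered = pow(ciphertext, d, n)  # decrypting requires d, and therefore p and q
    assert recovered == message        # so factoring n is equivalent to breaking RSA

A quantum computer big enough to run Shor’s algorithm on a real 2048-bit modulus would make that factoring step tractable, which is why post-quantum encryption schemes are already being developed and standardized.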

The early-bird ticket discount ends this Friday, August 9. Be sure to grab your tickets to get the max $100 savings before prices go up. If you’re a startup in the enterprise space, we still have some startup demo tables available! Each demo table comes with four tickets to the show and a high-visibility exhibit space to showcase your company to attendees — learn more here.

Optimus Ride’s Brooklyn self-driving shuttles begin picking up passengers this week

By Darrell Etherington

Self-driving startup Optimus Ride will become the first to operate a commercial self-driving service in the state of New York – in Brooklyn. But don’t expect these shuttles to be contending with pedestrians, bike riders, taxis and cars on New York’s busiest roads; instead, they’ll be offering shuttle services within the Brooklyn Navy Yard, a 300-acre private commercial development.

The Optimus Ride autonomous vehicles, which seat six passengers across three rows and which, at least for now, always carry both a safety driver and a second Optimus staff observer on board, will offer free service seven days a week, running a loop that covers the entire complex. The route includes a stop at a new on-site ferry landing, which means a lot of commuters should be able to easily grab a seat for their last-mile needs.

Optimus Ride’s shuttles have been in operation at a number of different sites across the U.S., including in Boston, as well as in Virginia, California and elsewhere in Massachusetts.

The Brooklyn Navy Yard is a perfect environment for the service: it plays host to some 10,000 workers but consists entirely of private roads, which means Optimus Ride doesn’t need to worry about public road rules and regulations in deploying a commercial self-driving service.

May Mobility, an Ann Arbor-based startup also focused on low-speed autonomous shuttles, has deployed in partnership with some smaller cities and on defined bus route paths. The approach of both companies is similar, using relatively simple vehicle designs and serving low-volume ridership in areas where traffic and pedestrian patterns are relatively easy to anticipate.

Commercially viable, fully autonomous robotaxi service for dense urban areas is still a long, long way off – and definitely out of reach for startups and smaller companies in the near term. Tackling commercial service in controlled environments on a smaller scale is a great way to build the business while bringing in revenue and offering real value to paying customers at the same time.

DeepCode gets $4M to feed its AI-powered code review tool

By Natasha Lomas

DeepCode, a Swiss startup that’s using machine learning to automate code reviews, has closed a $4 million seed round, led by European VC firm Earlybird, with participation from 3VC and existing investor btov Partners.

The founders described the platform as a sort of ‘Grammarly for coders’ when we chatted with them early last year. At the time, they were bootstrapping. Now they’ve bagged their first venture capital to dial up their efforts.

DeepCode, which was spun out of the Swiss technical university ETH Zurich, says its code review AI is different because it doesn’t just pick up syntax mistakes: it is able to determine the intent of the code because it has processed millions of commits, giving it an overview that allows it to identify many more critical bugs and vulnerabilities than other tools.

“All of the static analysis and lint tools out there (there are hundreds of those) are providing similar code analysis services but without the deeper understanding of code, and mostly focusing on one language or specific languages,” says CEO and co-founder, Boris Paskalev, going on to name-check the likes of CA Technologies, Micro Focus (Fortify), Cast Software, and SonarSource as the main competitors DeepCode is targeting.

Its bot is free for enterprise teams of up to 30 developers, for open source software, and for educational use.

To use it, developers connect DeepCode with their GitHub or Bitbucket accounts; no configuration is required. The bot then immediately starts reviewing each commit, picking up issues “in seconds.” (You can see a demo of the code review tool here.)
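
For a sense of what “determining the intent of the code” means in practice, here’s a hypothetical Python snippet (our own illustration, not taken from DeepCode’s materials) containing the kind of bugs that many syntax-focused linters pass over but that a model trained on millions of commits can learn to flag:

    def clamp(value, low, high):
        if value < low:
            return low
        if value > low:    # likely copy-paste bug: the comparison should use `high`
            return high
        return value

    def close_all(handles):
        for handle in handles:
            handle.close   # attribute access, not a call: the files are never closed

Both functions are syntactically valid, so a pure style checker stays quiet; catching mistakes like these means inferring what the author meant, which is exactly the overview DeepCode says it gets from its training corpus.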

“We do not disclose developer information but the number of Open Source Repositories that are using DeepCode have hundreds of thousands of total contributors,” Paskalev tells us when asked how many developers are using the tool now.

“We do not count rules per se as our AI Platform combines thousands of programming concepts, which if combined in individual rules will result in millions of separate rules,” he adds.

The seed funding will go toward supporting additional integrations and more programming languages beyond the three currently supported (Java, JavaScript and Python), improving the scope of code recommendations, and expanding the team internationally.

Commenting in a statement, Christian Nagel, partner and co-founder of Earlybird, said: “For all industries and almost every business model, the performance and quality of coding has become key. DeepCode provides a platform that enhances the development capabilities of programmers. The team has a deep scientific understanding of code optimization and uses artificial intelligence to deliver the next breakthrough in software development.”
