
ULA tapped to launch Astrobotic’s lunar lander to the Moon in 2021

By Darrell Etherington

The United Launch Alliance (ULA) will launch the lunar lander of Astrobotic, one of the companies selected for NASA’s commercial lunar payload program. ULA will deliver Astrobotic’s Peregrine lander to the Moon in 2021, the companies announced today.

Peregrine will fly aboard ULA’s Vulcan Centaur rocket, taking off from Space Launch Complex-41 at Cape Canaveral. The mission will serve as one of two certification flights ULA must complete to qualify the Vulcan Centaur for U.S. Air Force missions.

Vulcan is ULA’s next-generation heavy lift launch vehicle, which is currently in development. The launch vehicle will inherit some technology from the Atlas V and Delta IV rockets, but the booster will be powered by Blue Origin BE-4 engines, and it’ll be able to carry larger payloads than either Atlas V or Delta IV Heavy.

Astrobotic was chosen by NASA as one of the commercial payload providers for its ambitious program to return to the Moon and eventually establish a colony. The company said in a press release that it has already signed up 16 customers for its first Moon mission; their payloads will be loaded onto the Peregrine, which can carry up to 90 kg (nearly 200 lbs) on its first flight.

NASA recently opened up a call for more companies to join Astrobotic and the eight other providers it chose last November for its lunar commercial payload program. These will all need launch providers, which represents more potential business for ULA, SpaceX and others looking to develop and launch vehicles capable of getting payloads to the Moon.

Postmates lands permit to test its Serve autonomous delivery robots in SF

By Darrell Etherington

Postmates has officially received the green light from the city of San Francisco to begin testing its Serve wheeled delivery robot on city streets, as first reported by the SF Chronicle and confirmed with Postmates by TechCrunch. The on-demand delivery company told us last week that it expected the permit to be issued shortly after a conditional approval, and that’s exactly what happened on Wednesday this week.

The permit doesn’t cover the entire city, just a designated area of a number of blocks in and around Potrero Hill and the Inner Mission, but it will allow Postmates to begin testing up to three autonomous delivery robots at once, at speeds of up to 3 mph. Deliveries can only take place between 8 AM and 6:30 PM on weekdays, and a human has to be on hand within 30 feet of the vehicles at all times while they’re operating. Still, it’s a start, and a green light in a city regulatory environment that got off to a somewhat rocky start with some less collaborative early pilots from other companies.

Autonomous delivery bot company Marble also has a permit application pending with the city’s Public Works department, and will look to test its own four-wheeled, sensor-equipped rolling delivery bots within the city soon should it be granted similar testing approval.

Postmates first revealed Serve last December, taking a more anthropomorphic approach to the vehicle’s overall design. Like many short-distance delivery robots of its ilk, it includes a lockable cargo container and screen-based user interface for eventual autonomous deliveries to customers. The competitive field for autonomous rolling delivery bots is growing continuously, with companies like Starship Technologies, Amazon and many more throwing their hats in the ring.

These robo-shorts are the precursor to a true soft exoskeleton

By Devin Coldewey

When someone says “robotic exoskeleton,” the power loaders from Aliens are what come to mind for most people (or at least me), but the real things will be much different: softer, smarter, and used for much more ordinary tasks. The latest such exo from Harvard is so low-profile you could wear it around the house.

Designed by researchers at Harvard’s Wyss Institute (in collaboration with several other institutions), which focuses on soft robotics and bio-inspired mechanisms, the exosuit isn’t for heavy lifting or combating xenomorphs but simply walking and running a little bit more easily.

The suit, which is really more of a pair of shorts with a mechanism attached at the lower back and cables going to straps on the legs, is intended to simply assist the leg in its hip-extension movement, common to most forms of locomotion.

An onboard computer (and neural network, naturally) detects the movements of the wearer’s body and determines both the type of gait (walking or running) and what phase of that gait the leg is currently in. It then gives the moving leg a little boost, making the motion just that much easier.
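
Neither the controller code nor the trained model is public, so what follows is only a rough sketch of the pipeline described above (inertial signals in, gait type and cycle phase out). Every name, feature choice and weight below is an invented placeholder; a real controller would use a properly trained network and calibrated sensors.

```python
import numpy as np

def gait_features(imu_window):
    """Condense a window of IMU samples (N x 3 accelerometer axes) into 7 features."""
    return np.concatenate([
        imu_window.mean(axis=0),                      # mean acceleration per axis
        imu_window.std(axis=0),                       # variability per axis
        [np.abs(np.diff(imu_window[:, 2])).mean()],   # vertical "jerkiness"
    ])

class TinyGaitNet:
    """Placeholder two-layer network: two outputs score walking vs. running,
    a third encodes the phase of the current gait cycle."""
    def __init__(self, n_in=7, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.3, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.3, size=(n_hidden, 3))

    def __call__(self, x):
        h = np.tanh(x @ self.W1)                 # hidden layer
        out = h @ self.W2
        gait = "running" if out[1] > out[0] else "walking"
        phase = 1.0 / (1.0 + np.exp(-out[2]))    # squash phase estimate into (0, 1)
        return gait, phase

net = TinyGaitNet()
fake_window = np.random.default_rng(1).normal(size=(50, 3))  # 50 fake IMU samples
print(net(gait_features(fake_window)))
```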

In testing, the suit reduced the metabolic load of walking by 9.3 percent and running by 4 percent. That might not sound like much, but they weren’t looking to create an Olympic-quality cyborg — just to show reliable gains from a soft, portable exosuit.

“While the metabolic reductions we found are modest, our study demonstrates that it is possible to have a portable wearable robot assist more than just a single activity, helping to pave the way for these systems to become ubiquitous in our lives,” said lead study author Conor Walsh in a news release.

The whole idea, then, is to leave behind the idea of an exosuit as a big mechanical thing for heavy industry or work, and bring in the idea that one could help an elderly person stand up from a chair, or someone recovering from an accident walk farther without fatigue.

The whole device, shorts and all, weighs about 5 kilograms, or 11 pounds. Most of that is in the little battery and motor pack stashed at the top of the shorts, near the body’s center of mass, helping it feel lighter than it is.

Of course this is the kind of thing the military is very interested in — not just for active duty (a soldier who can run twice as far or fast) but for treatment of the wounded. So it shouldn’t be a surprise that this came out of a DARPA project initiated years ago (and ongoing in other forms).

But by far the more promising applications are civilian, in the medical field and beyond. “We are excited to continue to apply it to a range of applications, including assisting those with gait impairments, industry workers at risk of injury performing physically strenuous tasks, or recreational weekend warriors,” said Walsh.

Currently the team is hard at work improving the robo-shorts, reducing the weight, making the assistance more powerful and more intuitive, and so on. The paper describing their system was the cover story of this week’s edition of the journal Science.

Inside Voyage’s plan to deliver a driverless future

By Kirsten Korosec

In two years, Voyage has gone from a tiny self-driving car upstart spun out of Udacity to a company able to operate on 200 miles of roads in retirement communities.

Now, Voyage is on the verge of introducing a new vehicle that is critical to its mission of launching a truly driverless ride-hailing service. (Human safety drivers not included.)

This internal milestone, which Voyage CEO Oliver Cameron hinted at in a recent Medium post, went largely unnoticed. Voyage, after all, is just a 55-person speck of a startup in an industry where the leading companies have amassed hundreds of engineers backed by war chests of $1 billion or more. Voyage has raised just $23.6 million from investors that include Khosla Ventures, CRV, Initialized Capital and the venture arm of Jaguar Land Rover.

Still, the die has yet to be cast in this burgeoning industry of autonomous vehicle technology. These are the middle-school years for autonomous vehicles — a time when size can be mistaken for maturity and change occurs in unpredictable bursts.

The upshot? It’s still unclear which companies will solve the technical and business puzzles of autonomous vehicles. There will be companies that successfully launch robotaxis and still fail to turn their service into a profitable commercial enterprise. And there will be operationally savvy companies that fail to develop and validate the technology to a point where human drivers can be removed.

Voyage wants to unlock both.

Crowded field

Autonomous air mobility company EHang to deploy air shuttle service in Guangzhou

By Darrell Etherington

China’s EHang, a company focused on developing and deploying autonomous passenger and freight low-altitude vehicles, will build out its first operational network of air taxis and transports in Guangzhou. The company announced that the Chinese city would play host to its pilot location for a citywide deployment.

The pilot will focus on showing not only that low-altitude, rotor-powered aircraft make sense for use in cities, but that a whole network of them can operate autonomously in concert, controlled and monitored by a central traffic management hub that EHang will develop together with the local Guangzhou government.

EHang, which was chosen at the beginning of this year by China’s Civil Aviation Administration as the sole pilot company allowed to build out autonomous flying passenger vehicle services, demonstrated passenger-carrying flights of its EHang 184 vehicles in Vienna earlier this year, and ran a number of flights in Guangzhou in 2018 as well.

In addition to developing the air traffic control system that will ensure these vehicles operate safely as a fleet in the air above the city, EHang will be working with Guangzhou to build out the infrastructure needed to run the network. The plan is to use the initial stages of the pilot to continue testing the vehicles, as well as the vertiports needed to support their operation, and then to work with commercial partners on goods transportation first.

The benefits of such a network will be especially valuable for cities like Guangzhou, where rapid growth has led to plenty of traffic and high density at the ground level. It could also potentially have advantages over a network of autonomous cars or wheeled vehicles, since those still have to contend with ground traffic, pedestrians, cyclists and other vehicles in order to operate, while the low-altitude air above a city is more or less unoccupied.

Postmates lands first-ever permit to test sidewalk delivery robots in San Francisco

By Kate Clark

On-demand delivery business Postmates says it’s been granted the first-ever permit for sidewalk robotics operations in the city of San Francisco.

According to San Francisco Public Works, the permits are active for 180 days and authorize the testing of up to three autonomous delivery devices. We’ve reached out to the Public Works department for comment.

Postmates has since 2017 been working alongside San Francisco supervisor Norman Yee and labor and advocacy groups to develop a framework for sidewalk robotics. The issuance of the permit makes San Francisco one of the first cities to formally allow companies to test autonomous delivery robots under a new pilot program.

Previously, companies were testing autonomous robots on various San Francisco streets sans permits, until the city voted to ban such testing without official government approval, akin to the electric-scooter saga of 2018.

“We’ve been eager to work directly with cities to seek a collaborative and inclusive approach to robotic deployment that respects our public rights of way, includes community input, and allows cities to develop thoughtful regulatory regimes,” a representative of Postmates said in a statement provided to TechCrunch.

Postmates’ semi-autonomous sidewalk rover, Serve, was unveiled in December. Using cameras and lidar to navigate sidewalks, Serve can carry 50 pounds for up to 25 miles on one charge. Postmates has a human pilot remotely monitoring the Serve fleets, and each rover has a “Help” button, touchscreen and video chat display for customers or passers-by to use if necessary. The company originally said it planned to roll out the bots in 2019, though no pilots have been officially announced yet.

Postmates’ semi-autonomous delivery robot, Serve

Postmates says it has made a number of changes to Serve in recent months, including new lidar tech that’s smaller, lighter and more durable, along with zero-emission capabilities. Under Ken Kocienda, an Apple veteran who joined Postmates recently, the Serve team has also developed a new scripting language for animating Serve’s “eyes.”

“We are spending a lot of time going in and refining and inventing new ways that Serve can communicate,” Kocienda told TechCrunch in an interview earlier this year. “We want to make it socially intelligent. We want people, when they see Serve going down the street, to smile at it and to be happy to see it there.”

According to documents provided by Postmates, another autonomous delivery company, Marble, was not granted a permit after labor union Teamsters said the startup lacked an adequate Labor Dispute statement in its permit application. Marble is a last-mile logistics business based in San Francisco. Last year, the company closed a $10 million round with support from Tencent, CrunchFund and others.

Postmates, for its part, is expected to go public later this year in a highly anticipated initial public offering. The business filed confidentially for its offering in February after lining up a $100 million pre-IPO financing that valued the business at $1.85 billion. Postmates is said to be simultaneously exploring an M&A exit, according to Recode, which recently wrote that Postmates has discussed a merger with DoorDash, another top food delivery provider.

In June, Postmates announced Google’s vice president of finance, Kristin Reinke, had joined its board of directors, a sign it was sticking to IPO plans.

Postmates is backed by Tiger Global, BlackRock, Spark Capital, Uncork Capital, Founders Fund, Slow Ventures and others.

Toyota partners with AI startup Preferred Networks on building helper robots for humans

By Darrell Etherington

Toyota is enlisting the help of startup Preferred Networks, a Japanese company founded in 2014 with a focus on artificial intelligence and deep learning, to help move forward its goal of developing useful service robots that can assist people in everyday life.

The two companies announced a partnership today to collaborate on research and development using Toyota’s Human Support Robot (HSR) platform. The platform, which Toyota originally created in 2012 and has been developing since, is a basic robot designed to work alongside people in everyday settings. Its primary uses involve offering basic care and support assistance in nursing and long-term care applications. Equipped with one arm, a display, cameras and a wheeled base, it can collect and retrieve items, and provide remote control and communication capabilities.

Preferred Networks already has some experience with Toyota’s HSR: it demonstrated one such robot cleaning a room fully autonomously at Japan’s CEATEC conference in 2018. The system could identify objects, respond to specific human instructions and, importantly, safely pick up and put down objects it couldn’t identify from its database.

Toyota will be providing “several dozen” HSR units to Preferred Networks for the startup to work on. Over the next three years, the two will collaborate on R&D, sharing the results of their work and the resulting intellectual property, with no restrictions on how either party uses the results of the joint work.

One of Toyota’s guiding goals as a company is to develop commercial home robotics that can work with people where they live. The automaker has a number of projects in the works to make this happen, including research at its Toyota Research Institute (TRI) subsidiary, which works with a number of academic institutions. Toyota also recently revealed a number of robotics projects it’s bringing to the 2020 Olympic Games in Tokyo, which will help it field test those efforts.

Self-driving truck startup Kodiak Robotics begins deliveries in Texas

By Kirsten Korosec

A year after coming out of stealth mode with $40 million, self-driving truck startup Kodiak Robotics will begin making its first commercial deliveries in Texas.

Kodiak will open a new facility in North Texas to support its freight operations, along with increased testing in the state.

There are some caveats to the milestone. Kodiak’s self-driving trucks will have a human safety driver behind the wheel. And it’s unclear how significant this initial launch is; the company didn’t provide details on who its customers are or what it will be hauling.

Kodiak has eight autonomous trucks in its fleet, and according to the company it’s “growing quickly.”

Still, it does mark progress for such a young company, which co-founders Don Burnette and Paz Eshel say is due to its talented and experienced workforce. 

Burnette, who is CEO of Kodiak, was part of the Google self-driving project before leaving and co-founding Otto in early 2016, along with Anthony Levandowski, Lior Ron and Claire Delaunay. Uber would acquire Otto (and its co-founders). Burnette left Uber to launch Kodiak in April 2018 with Eshel, a former venture capitalist and now the startup’s COO.

In August 2018, the company announced it had raised $40 million in Series A financing led by Battery Ventures. CRV, Lightspeed Venture Partners and Tusk Ventures also participated in the round. Itzik Parnafes, a general partner at Battery Ventures, joined Kodiak’s board.

Kodiak is the latest autonomous vehicle company to test its technology in Texas. The state has become a magnet for autonomous vehicle startups, particularly those working on self-driving trucks. That’s largely due to the combination of a friendly regulatory environment and the state’s position as a logistics and transportation hub.

“As a region adding more than 1 million new residents each decade, it is important to develop a comprehensive strategy for the safe and reliable movement of people and goods,” Thomas Bamonte, senior program manager of Automated Vehicles for the North Central Texas Council of Governments, said in a statement. “Our policy officials on the Regional Transportation Council have been very forward-thinking in their recognition of technology as part of the answer, which is positioning our region as a leader in the automated vehicle industry.”

Self-driving truck startup TuSimple was awarded a contract this spring to complete five round trips, for a two-week pilot, hauling USPS trailers more than 1,000 miles between the postal service’s Phoenix and Dallas distribution centers. A safety engineer and driver will be on board throughout the pilot.

Other companies developing autonomous vehicle technology for trucks such as Embark and Starsky Robotics have also tested on Texas roads.

Amazon Scout autonomous delivery robots begin deliveries in California

By Darrell Etherington

Amazon’s Scout six-wheeled, sidewalk-driving delivery robots have begun making deliveries in Southern California, to customers in the Irvine area. Amazon announced this first California deployment of Scout bots in a blog post, noting that its initial deployments in Seattle, in the Pacific Northwest, gave the company plenty of opportunity to experience a range of weather conditions. Weather-wise, at least, the little blue bot should have a smoother time in sunny California.

There are only a “small number” of the robots currently deployed, so even Irvine residents shouldn’t necessarily expect to glimpse one just yet. But the bots will be making their way to customer homes “during daylight hours,” Monday to Friday, per Amazon. They’ll be sent out at random for orders placed through Amazon as usual, regardless of which delivery option customers select.

While the robots can drive themselves around, which is the whole point of the project, for the time being they’ll be accompanied by an “Amazon Scout Ambassador.” These Amazon staff are part diplomat, part research associate for the project, answering questions from people in the neighborhood and taking note of their reactions. Robots don’t yet interact with people much on a daily basis, especially out in the world, so a key part of rolling them out commercially is studying how people interact with them and thinking about how those interactions might be altered or improved.

A lot of thought went into the initial Scout design, both in terms of making sure it’s able to survive the many miles it traverses during a day, and in coming up with a design that looks and feels at once approachable but also somewhat bland, so as to quickly evolve from novelty to standard neighborhood background scenery.

Optimus Ride’s Brooklyn self-driving shuttles begin picking up passengers this week

By Darrell Etherington

Self-driving startup Optimus Ride will become the first to operate a commercial self-driving service in the state of New York, in Brooklyn. But don’t expect these things to be contending with pedestrians, bike riders, taxis and cars on New York’s busiest roads; instead, they’ll be offering shuttle services within the Brooklyn Navy Yard, a 300-acre private commercial development.

The Optimus Ride autonomous vehicles seat six passengers across three rows and, at least for now, always carry both a safety driver and a second Optimus staff observer. They will offer service seven days a week, for free, running a loop that covers the entire complex, including a stop at a new on-site ferry landing. That means a lot of commuters should be able to grab a seat in one fairly easily for their last-mile needs.

Optimus Ride’s shuttles have been in operation at a number of different sites across the U.S., including in Boston and elsewhere in Massachusetts, as well as Virginia and California.

The Brooklyn Navy Yard is a perfect environment for the service: it plays host to some 10,000 workers but consists entirely of private roads, which means Optimus Ride doesn’t need to worry about public road rules and regulations in deploying a commercial self-driving service.

May Mobility, an Ann Arbor-based startup also focused on low-speed autonomous shuttles, has deployed in partnership with some smaller cities and on defined bus route paths. The approach of both companies is similar, using relatively simple vehicle designs and serving low-volume ridership in areas where traffic and pedestrian patterns are relatively easy to anticipate.

Commercially viable, fully autonomous robotaxi service for dense urban areas is still a long, long way off, and definitely out of reach for startups and smaller companies in the near term. Tackling commercial service in controlled environments on a smaller scale is a great way to build the business while bringing in revenue and offering real value to paying customers at the same time.

Digitizing construction sites with Scaled Robotics

By Veanne Cao

Approximately 20% of construction costs are wrapped up in fixing errors. Barcelona-based Scaled Robotics wants to minimize the rework by automating progress monitoring with autonomous mobile robots. 

Leveraging lidar and autonomous vehicle technology (similar to what’s used by Google cars to map the world), Scaled Robotics built a WALL-E doppelgänger to navigate and build maps of construction sites by fusing images, video and data captured by its robots.

Scaled Robotics was born out of frustration, “of not having the tools to build what we designed in the office,” says co-founder Stuart Maggs, whose background is in construction and architecture. “You spend a lot of time in the office, creating this vision of what you wanted, that you thought was right, but ultimately, it came down to a guy in the field with just a tape measure and a piece of chalk that will put things pretty much however he felt on that day.” 

Their robots have been deployed on construction sites around the world, including for Dura Vermeer in the Netherlands and Kia in the U.K. Maggs says he found it surprisingly easy to convince the construction industry of the robot’s value, arguing there’s a real need for what it delivers: a high-resolution comparison of the digital model to the on-the-ground build site, which helps project managers keep close track of progress and spot problems before they become costly. The bot is a multifaceted tool for efficiency, he adds.
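
Scaled Robotics hasn’t published its pipeline, but the core idea of comparing an as-built scan against the design model can be illustrated with a naive point-cloud check. This is a sketch only: the function, the tolerance and the use of scipy’s KD-tree are stand-ins for whatever the company actually does.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_deviations(scan_pts, model_pts, tol_m=0.02):
    """Return the scanned points lying farther than tol_m metres from the design model."""
    tree = cKDTree(model_pts)              # index the as-designed point cloud
    dist, _ = tree.query(scan_pts)         # nearest design point for every scan point
    return scan_pts[dist > tol_m]

# Toy usage: a flat designed surface vs. a scan containing one 5 cm bulge.
grid = np.mgrid[0:50, 0:50].reshape(2, -1).T.astype(float)
model = np.c_[grid, np.zeros(len(grid))]   # design says z = 0 everywhere
scan = model.copy()
scan[1234, 2] = 0.05                       # an as-built error
print(flag_deviations(scan, model))        # flags just the bulged point
```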

In the beginning, workers on site were a bit hesitant, but after numerous jokes and picture-taking, Maggs said in closing, “they just accept it as another tool on the construction site.”

Dasha AI is calling so you don’t have to

By Natasha Lomas

While you’d be hard-pressed to find any startup not brimming with confidence over the disruptive idea they’re chasing, it’s not often you come across a young company as calmly convinced it’s engineering the future as Dasha AI.

The team is building a platform for designing human-like voice interactions to automate business processes. Put simply, it’s using AI to make machine voices a whole lot less robotic.

“What we definitely know is this will definitely happen,” says CEO and co-founder Vladislav Chernyshov. “Sooner or later the conversational AI/voice AI will replace people everywhere where the technology will allow. And it’s better for us to be the first mover than the last in this field.”

“In 2018 in the US alone there were 30 million people doing some kind of repetitive tasks over the phone. We can automate these jobs now or we are going to be able to automate it in two years,” he goes on. “If you multiply it with Europe and the massive call centers in India, Pakistan and the Philippines you will probably have something like close to 120M people worldwide… and they are all subject for disruption, potentially.”

The New York-based startup has been operating in relative stealth up to now. But it’s breaking cover to talk to TechCrunch, announcing a $2M seed round led by RTP Ventures and RTP Global, an early-stage investor that’s backed the likes of Datadog and RingCentral. RTP’s venture arm, also based in NY, writes on its website that it prefers engineer-founded companies that “solve big problems with technology”. “We like technology, not gimmicks,” the fund warns with added emphasis.

Dasha’s core tech right now includes what Chernyshov describes as “a human-level, voice-first conversation modelling engine”; a hybrid text-to-speech engine which he says enables it to model speech disfluencies (aka, the ums and ahs, pitch changes etc that characterize human chatter); plus “a fast and accurate” real-time voice activity detection algorithm which detects speech in under 100 milliseconds, meaning the AI can turn-take and handle interruptions in the conversation flow. The platform can also detect a caller’s gender — a feature that can be useful for healthcare use-cases, for example.
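
Dasha’s detection algorithm is proprietary, but to make that sub-100-millisecond budget concrete, here is a deliberately naive energy-threshold detector that makes a speech/no-speech decision every 20 ms of audio. Everything here (frame size, threshold, function name) is invented for illustration; production systems rely on far more robust, learned models.

```python
import numpy as np

FRAME_MS = 20  # one decision per 20 ms of audio, well inside a 100 ms budget

def vad_frames(samples, sr=16000, threshold_db=-35.0):
    """Yield (time_seconds, is_speech) for each 20 ms frame of a mono signal."""
    n = sr * FRAME_MS // 1000
    for start in range(0, len(samples) - n + 1, n):
        frame = samples[start:start + n]
        rms_db = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)
        yield start / sr, rms_db > threshold_db

# Toy usage: half a second of silence followed by half a second of loud signal.
audio = np.concatenate([np.zeros(8000), 0.5 * np.ones(8000)])
print([round(t, 2) for t, speech in vad_frames(audio) if speech])
```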

Another component Chernyshov flags is “an end-to-end pipeline for semi-supervised learning” — so it can retrain the models in real time “and fix mistakes as they go” — until Dasha hits the claimed “human-level” conversational capability for each business process niche. (To be clear, the AI cannot adapt its speech to an interlocutor in real-time — as human speakers naturally shift their accents closer to bridge any dialect gap — but Chernyshov suggests it’s on the roadmap.)

“For instance, we can start with 70% correct conversations and then gradually improve the model up to say 95% of correct conversations,” he says of the learning element, though he admits there are a lot of variables that can impact error rates — not least the call environment itself. Even cutting edge AI is going to struggle with a bad line.

The platform also has an open API so customers can plug the conversation AI into their existing systems — be it telephony, Salesforce software or a developer environment, such as Microsoft Visual Studio.

Currently they’re focused on English, though Chernyshov says the architecture is “basically language agnostic” — but does require “a big amount of data”.

The next step will be to open up the dev platform to enterprise customers, beyond the initial 20 beta testers, which include companies in the banking, healthcare and insurance sectors — with a release slated for later this year or Q1 2020.

Test use cases so far include banks using the conversation engine for brand loyalty management, running customer satisfaction surveys that can turn around negative feedback by fast-tracking a response to a bad rating: (human) customer support agents get an automated categorization of the complaint so they can follow up more quickly. “This usually leads to a wow effect,” says Chernyshov.

Ultimately, he believes there will be two or three major AI platforms globally providing businesses with an automated, customizable conversational layer, sweeping away the patchwork of chatbots currently filling the gap. And of course Dasha intends its ‘Digital Assistant Super Human Alike’ to be one of those few.

“There is clearly no platform [yet],” he says. “Five years from now this will sound very weird that all companies now are trying to build something. Because in five years it will be obvious — why do you need all this stuff? Just take Dasha and build what you want.”

“This reminds me of the situation in the 1980s when it was obvious that the personal computers are here to stay because they give you an unfair competitive advantage,” he continues. “All large enterprise customers all over the world… were building their own operating systems, they were writing software from scratch, constantly reinventing the wheel just in order to be able to create this spreadsheet for their accountants.

“And then Microsoft with MS-DOS came in… and everything else is history.”

That’s not all they’re building, either. Dasha’s seed financing will be put towards launching a consumer-facing product atop its b2b platform to automate the screening of recorded message robocalls. So, basically, they’re building a robot assistant that can talk to — and put off — other machines on humans’ behalf.

Which does kind of suggest the AI-fuelled future will entail an awful lot of robots talking to each other… 🤖🤖🤖

Chernyshov says this b2c call screening app will most likely be free. And if your core tech looks set to massively accelerate a non-human caller phenomenon that many consumers already see as a terrible plague on their time and mind, then providing free relief, in the form of a counter-AI, seems the very least you should do.

Not that Dasha can be accused of causing the robocaller plague, of course. Recorded messages hooked up to call systems have been spamming people with unsolicited calls for far longer than the startup has existed.

Dasha’s PR notes Americans were hit with 26.3BN robocalls in 2018 alone — up “a whopping” 46% on 2017.

Its conversation engine, meanwhile, has only made some 3M calls to date, clocking its first call with a human in January 2017. But the goal from here on in is to scale fast. “We plan to aggressively grow the company and the technology so we can continue to provide the best voice conversational AI to a market which we estimate to exceed $30BN worldwide,” runs a line from its PR.

After the developer platform launch, Chernyshov says the next step will be to open up access to business process owners by letting them automate existing call workflows without needing to be able to code (they’ll just need an analytic grasp of the process, he says).

Later — pegged for 2022 on the current roadmap — will be the launch of “the platform with zero learning curve”, as he puts it. “You will teach Dasha new models just like typing in a natural language and teaching it like you can teach any new team member on your team,” he explains. “Adding a new case will actually look like a word editor — when you’re just describing how you want this AI to work.”

His prediction is that a majority — circa 60% — of all major cases that businesses face — “like dispatching, like probably upsales, cross sales, some kind of support etc, all those cases” — will be able to be automated “just like typing in a natural language”.

So if Dasha’s AI-fuelled vision of voice-based business process automation comes to fruition, then humans getting orders of magnitude more calls from machines looks inevitable — as machine learning supercharges artificial speech by making it sound slicker, act smarter and seem, well, almost human.

But perhaps a savvier generation of voice AIs will also help manage the ‘robocaller’ plague by offering advanced call screening? And as non-human voice tech marches on from dumb recorded messages, to chatbot-style AIs running on scripted rails, to — as Dasha pitches it — fully responsive, emoting, even emotion-sensitive conversation engines that can slip right under the human radar, maybe the robocaller problem will eat itself? I mean, if you didn’t even realize you were talking to a robot, how are you going to get annoyed about it?

Dasha claims 96.3% of the people who talk to its AI “think it’s human”, though it’s not clear what sample size the claim is based on. (To my ear there are definite ‘tells’ in the current demos on its website. But in a cold-call scenario it’s not hard to imagine the AI passing, if someone’s not paying much attention.)

The alternative scenario, in a future infested with unsolicited machine calls, is that all smartphone OSes add kill switches, such as the one in iOS 13 — which lets people silence calls from unknown numbers.

And/or more humans simply never pick up phone calls unless they know who’s on the end of the line.

So it’s really doubly savvy of Dasha to create an AI capable of managing robot calls — meaning it’s building its own fallback — a piece of software willing to chat to its AI in future, even if actual humans refuse.

Dasha’s robocall screener app, which is slated for release in early 2020, will also be spammer-agnostic — in that it’ll be able to handle and divert human salespeople too, as well as robots. After all, a spammer is a spammer.

“Probably it is the time for somebody to step in and ‘don’t be evil’,” says Chernyshov, echoing Google’s old motto, albeit perhaps not entirely reassuringly given the phrase’s lapsed history — as we talk about the team’s approach to ecosystem development and how machine-to-machine chat might overtake human voice calls.

“At some point in the future we will be talking to various robots much more than we probably talk to each other — because you will have some kind of human-like robots at your house,” he predicts. “Your doctor, gardener, warehouse worker, they all will be robots at some point.”

The logic at work here is that if resistance to an AI-powered Cambrian Explosion of machine speech is futile, it’s better to be at the cutting edge, building the most human-like robots — and making the robots at least sound like they care.

Dasha’s conversational quirks certainly can’t be called a gimmick. Even if the team’s close attention to mimicking the vocal flourishes of human speech — the disfluencies, the ums and ahs, the pitch and tonal changes for emphasis and emotion — might seem so at first airing.

In one of the demos on its website you can hear a clip of a very chipper-sounding male voice, who identifies himself as “John from Acme Dental”, taking an appointment call from a (human) female caller and smoothly dealing with multiple interruptions and time/date changes as she changes her mind. Before, finally, dealing with a flat cancellation.

A human receptionist might well have got mad that the caller essentially just wasted their time. Not John, though. Oh no. He ends the call as cheerily as he began, signing off with an emphatic: “Thank you! And have a really nice day. Bye!”

If the ultimate goal is Turing Test levels of realism in artificial speech — i.e. a conversation engine so human-like it can pass as human to a human ear — you do have to be able to reproduce, with precision timing, the verbal baggage that’s wrapped around everything humans say to each other.

This tonal layer does essential emotional labor in the business of communication, shading and highlighting words in a way that can adapt or even entirely transform their meaning. It’s an integral part of how we communicate. And thus a common stumbling block for robots.

So if the mission is to power a revolution in artificial speech that humans won’t hate and reject then engineering full spectrum nuance is just as important a piece of work as having an amazing speech recognition engine. A chatbot that can’t do all that is really the gimmick.
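
As a toy illustration of one small ingredient of that work, here is an invented pre-synthesis pass that sprinkles filler words into a script. Dasha’s hybrid text-to-speech engine models disfluencies, pitch and timing far more deeply than this; the function and its parameters are made up purely for clarity.

```python
import random

FILLERS = ["um", "uh", "you know"]

def humanize(text, p=0.15, seed=7):
    """Probabilistically insert filler words ahead of words in a script."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if rng.random() < p:
            out.append(rng.choice(FILLERS) + ",")
        out.append(word)
    return " ".join(out)

print(humanize("Great, so your appointment is confirmed for Tuesday at three."))
```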

Chernyshov claims Dasha’s conversation engine is “at least several times better and more complex than [Google] Dialogflow, [Amazon] Lex, [Microsoft] Luis or [IBM] Watson”, dropping a laundry list of rival speech engines into the conversation.

He argues none are on a par with what Dasha is being designed to do.

The difference is the “voice-first modelling engine”. “All those [rival engines] were built from scratch with a focus on chatbots — on text,” he says, couching modelling voice conversation “on a human level” as much more complex than the more limited chatbot-approach — and hence what makes Dasha special and superior.

“Imagination is the limit. What we are trying to build is an ultimate voice conversation AI platform so you can model any kind of voice interaction between two or more human beings.”

Google did demo its own stuttering voice AI — Duplex — last year, when it also took flak for a public demo in which it appeared not to have told restaurant staff up front they were going to be talking to a robot.

Chernyshov isn’t worried about Duplex, though, saying it’s a product, not a platform.

“Google recently tried to headhunt one of our developers,” he adds, pausing for effect. “But they failed.”

He says Dasha’s engineering staff make up more than half (28) its total headcount (48), and include two doctorates of science; three PhDs; five PhD students; and ten masters of science in computer science.

It has an R&D office in Russia, which Chernyshov says helps make the funding go further.

“More than 16 people, including myself, are ACM ICPC finalists or semi finalists,” he adds — likening the competition to “an Olympic game but for programmers”. A recent hire — chief research scientist, Dr Alexander Dyakonov — is both a doctor of science professor and former Kaggle No.1 GrandMaster in machine learning. So with in-house AI talent like that you can see why Google, uh, came calling…

But why not have Dasha ID itself as a robot by default? On that, Chernyshov says the platform is flexible, meaning disclosure can be added. But in markets where it isn’t a legal requirement, the door is being left open for ‘John’ to slip cheerily by. Blade Runner, here we come.

The team’s driving conviction is that emphasis on modelling human-like speech will, down the line, allow their AI to deliver universally fluid and natural machine-human speech interactions which in turn open up all sorts of expansive and powerful possibilities for embeddable next-gen voice interfaces. Ones that are much more interesting than the current crop of gadget talkies.

This is where you could raid sci-fi/pop culture for inspiration. Such as KITT, the dryly witty talking car from the 1980s TV series Knight Rider. Or, to throw in a British TV reference, Holly, the self-deprecating yet sardonic human-faced computer in Red Dwarf. (Or indeed Kryten, the guilt-ridden android butler.) Chernyshov’s suggestion is to imagine Dasha embedded in a Boston Dynamics robot. But surely no one wants to hear those crawling nightmares scream…

Dasha’s five-year+ roadmap includes the eyebrow-raising ambition to evolve the technology to achieve “a general conversational AI”. “This is a science fiction at this point. It’s a general conversational AI, and only at this point you will be able to pass the whole Turing Test,” he says of that aim.

“Because we have a human level speech recognition, we have human level speech synthesis, we have generative non-rule based behavior, and this is all the parts of this general conversational AI. And I think that we can we can — and scientific society — we can achieve this together in like 2024 or something like that.

“Then the next step, in 2025, this is like autonomous AI — embeddable in any device or a robot. And hopefully by 2025 these devices will be available on the market.”

Of course the team is still dreaming distance away from that AI wonderland/dystopia (depending on your perspective) — even if it’s date-stamped on the roadmap.

But if a conversational engine ends up in command of the full range of human speech — quirks, quibbles and all — then designing a voice AI may come to be thought of as akin to designing a TV character or cartoon personality. So very far from what we currently associate with the word ‘robotic’. (And wouldn’t it be funny if the term ‘robotic’ came to mean ‘hyper entertaining’ or even ‘especially empathetic’ thanks to advances in AI.)

Let’s not get carried away though.

In the meanwhile, there are ‘uncanny valley’ pitfalls of speech disconnect to navigate if the tone being (artificially) struck hits a false note. (And, on that front, if you didn’t know ‘John from Acme Dental’ was a robot you’d be forgiven for misreading his chipper sign off to a total time waster as pure sarcasm. But an AI can’t appreciate irony. Not yet anyway.)

Nor can robots appreciate the difference between ethical and unethical verbal communication they’re being instructed to carry out. Sales calls can easily cross the line into spam. And what about even more dystopic uses for a conversation engine that’s so slick it can convince the vast majority of people it’s human — like fraud, identity theft, even election interference… the potential misuses could be terrible and scale endlessly.

Although if you straight out ask Dasha whether it’s a robot Chernyshov says it has been programmed to confess to being artificial. So it won’t tell you a barefaced lie.

How will the team prevent problematic uses of such a powerful technology?

“We have an ethics framework and when we will be releasing the platform we will implement a real-time monitoring system that will monitor potential abuse or scams, and also it will ensure people are not being called too often,” he says. “This is very important. That we understand that this kind of technology can be potentially probably dangerous.”

“At the first stage we are not going to release it to all the public. We are going to release it in a closed alpha or beta. And we will be curating the companies that are going in to explore all the possible problems and prevent them from being massive problems,” he adds. “Our machine learning team are developing those algorithms for detecting abuse, spam and other use cases that we would like to prevent.”

There’s also the issue of verbal ‘deepfakes’ to consider. Especially as Chernyshov suggests the platform will, in time, support cloning a voiceprint for use in the conversation — opening the door to making fake calls in someone else’s voice. Which sounds like a dream come true for scammers of all stripes. Or a way to really supercharge your top performing salesperson.

Safe to say, the counter technologies — and thoughtful regulation — are going to be very important.

There’s little doubt that AI will be regulated. In Europe policymakers have tasked themselves with coming up with a framework for ethical AI. And in the coming years policymakers in many countries will be trying to figure out how to put guardrails on a technology class that, in the consumer sphere, has already demonstrated its wrecking-ball potential — with the automated acceleration of spam, misinformation and political disinformation on social media platforms.

“We have to understand that at some point this kind of technologies will be definitely regulated by the state all over the world. And we as a platform we must comply with all of these requirements,” agrees Chernyshov, suggesting machine learning will also be able to identify whether a speaker is human or not — and that an official caller status could be baked into a telephony protocol so people aren’t left in the dark on the ‘bot or not’ question. 

“It should be human-friendly. Don’t be evil, right?”

Asked whether he considers what will happen to the people working in call centers whose jobs will be disrupted by AI, Chernyshov is quick with the stock answer — that new technologies create jobs too, saying that’s been true right throughout human history. Though he concedes there may be a lag — while the old world catches up to the new.

Time and tide wait for no human, even when the change sounds increasingly like we do.

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

By Neesha A. Tambe

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer product goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements, and we’ll consider your startup.

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, which is immediately followed by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.

NASA calls for more companies to join its commercial lunar lander program

By Darrell Etherington

NASA has opened up a call for companies to join the ranks of its nine existing Commercial Lunar Payload Services (CLPS) providers, a group it chose in November after a similar solicitation for proposals. With the CLPS program, NASA is buying space aboard future commercial lunar landers to deliver its research, science and demonstration projects to the surface of the Moon, and it’s looking for more companies to sign up as lander providers. Contracts could total up to $2.6 billion and extend through 2028.

The list of nine providers chosen in November 2018 includes Astrobotic Technology, Deep Space Systems, Draper, Firefly Aerospace, Intuitive Machines, Lockheed Martin, Masten Space Systems, Moon Express and OrbitBeyond. NASA is looking to these companies, and whoever is added to the list as a result of this second call for submissions, to provide both small and mid-size lunar landers capable of delivering anything from rovers, to batteries, to payloads specific to future Artemis missions, with the aim of helping establish a more permanent human presence on the Moon.

Building out a stable of providers helps NASA’s Moon ambitions in a few different ways: it provides redundancy, and it creates a competitive field, so the agency can open up bids for specific payloads and gain price advantages.

At the end of May, NASA announced the award of over $250 million in contracts for specific payload delivery missions intended to take place by 2021. The three companies chosen from its list of nine providers were Astrobotic, Intuitive Machines and OrbitBeyond. Just yesterday, however, OrbitBeyond told the agency that it would not be able to fulfill the contract due to “internal corporate challenges,” and it backed out with NASA’s permission.

Given how quickly one of its providers exited one of the few contracts already awarded, and the significant demand there will likely be for commercial lander services should NASA’s Artemis program come even somewhat close to its stated vision, it’s probably a good idea for the agency to build out that stable of service providers.

Can robots find a home in the classroom?

By David Riggs
Jason Palmer Contributor
Jason Palmer is a general partner at New Markets Venture Partners. He previously served as deputy director for Higher Education at the Bill & Melinda Gates Foundation.

A few years ago, investors heralded the arrival of a future with robots in the home. Robots like Jibo, Anki’s Cozmo and Mayfield Robotics’ Kuri attracted buzz and hundreds of millions of dollars in venture capital. All three companies have since shut down, prompting Kidtech expert Robin Raskin to recently ask, “Has the sheen worn off the tech toy world?”

With the demise of these robots and their makers, it’s fair to wonder if and when there will be a time when robots have a real place in our lives. But some robots are finding a home in a counterintuitive place: schools.

For robots to succeed, they need to find an application that integrates with human needs, solves real problems and sustains their use. At home, the current wave of robots may provide children with a few hours of entertainment before they are tossed aside like any other new toy.

In schools, however, robots are proving that they can serve a purpose, bridging the divide between the digital and physical worlds in ways that bring concepts like coding to life. Savvy teachers are finding that robots can help bring project-based learning alive in ways that support the development of valuable critical-thinking and problem-solving skills.

It would not be the first time that K-12 schools paved the way as early adopters of technology. Forty years ago, the Apple II was widely adopted in schools first, before desktop computers colonized the home. Laptops famously gained early momentum in schools, where their light weight and portability were tightly aligned with the rise of in-class interventions and digital content. Schools were also early adopters of tablets, which, despite a few high-profile failures, are now seemingly ubiquitous in K-12 classrooms.

The rise of robotics in K-12 schools has been buoyed by not just intrigue with the potential of new gadgets, but an increased focus on computer science education. Just a decade ago, only a few states allowed computer science to count toward STEM course requirements. Today, nearly every state allows computer science courses to fulfill core graduation requirements, and 17 states require that every high school offer computer science.

The growing importance of computer science at the high school level has, in turn, trickled down to elementary and middle schools, where teachers are turning to robots as an effective way to introduce students to states’ new K-12 computer science standards. In California, the state’s board of education now suggests that schools use robots to satisfy five of its standards.

From a design level, classroom robots are fundamentally different from those at home. Learning necessitates that, instead of offering bite-sized, shallow experiences, robots provide experiences with the depth and variety needed to keep students engaged over months and years. To succeed in the classroom, they must be accompanied by thoughtful curricular content that teachers can incorporate into their instruction. And because robots are relatively expensive, teachers need robots they can rely on for a long time.

It’s a trend that hasn’t been lost on companies like littleBits and Sphero, which are quickly pivoting to focus on a K-12 market dominated by legacy players like Lego. Wonder Workshop robots, which gained popularity through retail channels like the Apple Store and Amazon, are now being used in more than 20,000 schools across the world. Although they currently penetrate just a fraction of the K-5 classrooms in the U.S., their success is not only drawing increased interest from investors, but fueling innovations that could have implications for pernicious equity gaps that still plague STEM classrooms — and high-tech fields.

While the toy industry has long marketed its products differently to boys and girls in ways that actually reinforce stereotypes through product design and advertising, robots designed for the classroom must appeal to all students. Earlier versions of Wonder Workshop’s Dash robot, for example, rolled around on visible wheels.

During its initial user studies, the company learned students equated wheeled robots with cars and trucks. In other words, they viewed Dash as something meant for boys. So, Wonder Workshop covered up Dash’s wheels. It worked. Today, nearly 50% of participants in the company’s Wonder League Robotics Competition are girls, with many of the winning teams each year being all-girl teams.

So while the national narrative often imagines a dystopian future where robots come for our jobs, classroom robots are actually helping teachers meet the needs of increasingly diverse classrooms. They are helping students improve their executive function, creativity and ability to communicate with others.

Educators are recognizing the potential of robots, not as toys, but as powerful tools for learning. And children as young as kindergarten are using robots to better and more quickly understand mathematical concepts. Students who have the opportunity to learn from — and with — robots in the classroom today may develop a generation of robots that can play a role in our lives well into the future. They will grow up not merely as consumers of technology, but creators of it.

Ford acquires mobile robotics company Quantum Signal to help with self-driving

By Darrell Etherington

Ford has acquired a small robotics company based in Michigan called Quantum Signal, which has produced mobile robots for a number of clients, including the U.S. military. The company’s specialty has been building remote control software for robotic vehicles, specifically, and it’s also responsible for a very highly regarded simulated testing and development environment for autonomous and remotely controlled robotic systems.

All of the above is useful not only when developing military robots, but also when setting out to build and deploy self-driving cars — hence Ford’s interest in acquiring Quantum Signal. Ford said in a blog post that while others might’ve been sleeping on Quantum Signal and the work it has done, it has been following the company closely, and will employ its experience in developing real-time simulation and algorithms related to autonomous vehicle control systems to help build out Ford’s self-driving vehicles, transportation-as-a-service platform and hardware and software related to both.

Reading between the lines, it sounds like Ford’s main interest was in picking up experienced talent working on autonomy and on the specific challenges of developing road-worthy self-driving vehicles, including perception systems and virtual testing environments. Ford does, however, explicitly lay out a desire to “preserve” Quantum Signal’s “unique culture” as it brings the company on board, pointing out that that’s the course it took with a similar acquisition, SAIPS (an Israeli computer vision and machine learning company), when it brought that team on board in 2016.

SAIPS has since more than doubled its team to 30 people and relocated to a new headquarters in Tel Aviv, with a specific focus among its latest hires on specialists in reinforcement learning. Ford has also invested in Argo AI, taking a majority stake in the startup in 2017 and re-upping with a joint investment alongside Volkswagen in July of this year, in a deal that makes the two companies equal major shareholders. Ford is clearly happy to both acquire and partner in its pursuit of self-driving tech, and this probably won’t be the last deal of its kind we see from major automakers en route to actually deploying autonomous vehicles on roads.

Waymo and DeepMind mimic evolution to develop a new, better way to train self-driving AI

By Darrell Etherington

Alphabet’s autonomous driving and robotaxi company Waymo does a lot of training to refine and improve the artificial intelligence that powers its self-driving software. Recently, it teamed up with fellow Alphabet company and AI specialist DeepMind to develop new methods that make that training more effective and more efficient.

The two worked together to bring a training method called Population Based Training (PBT for short) to bear on Waymo’s challenge of building better virtual drivers, and the results were impressive — DeepMind says in a blog post that using PBT decreased false positives by 24% in a network that identifies and places boxes around pedestrians, bicyclists and motorcyclists spotted by a Waymo vehicle’s many sensors. Not only that, but it also resulted in savings in both training time and resources, using about 50% of each compared to the standard methods Waymo was using previously.

To step back a little, let’s look at what PBT actually is. Basically, it’s a training method that takes its cues from Darwinian evolution. Neural nets essentially work by trying something and then measuring the result against some kind of standard to see if the attempt was more “right” or more “wrong” relative to the desired outcome. In the training methods Waymo was using, multiple neural nets would work independently on the same task, each with a different “learning rate,” or the degree to which it can deviate in its approach each time it attempts a task (like identifying objects in an image, for instance). A higher learning rate means much more variety in the quality of the outcomes, and that swings both ways; a lower learning rate means much steadier progress, but a low likelihood of big positive jumps in performance.
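
To make that baseline concrete, here’s a minimal toy sketch in Python. The one-parameter “network,” the learning-rate choices and the population size are invented for illustration and aren’t drawn from Waymo’s systems; real training would update millions of weights against real loss functions.

```python
import random

# Toy stand-in for "training": each worker nudges a single parameter
# toward a target value, and its learning rate controls the step size.
TARGET = 3.0

def train_step(theta, lr):
    grad = 2 * (theta - TARGET)  # gradient of the loss (theta - TARGET)**2
    return theta - lr * grad

def evaluate(theta):
    return (theta - TARGET) ** 2  # lower is better

# The baseline approach: many independent workers, each with its own
# fixed (hypothetical) learning rate, sharing nothing with one another.
workers = [{"theta": random.uniform(-10.0, 10.0),
            "lr": random.choice([0.001, 0.01, 0.1])}
           for _ in range(10)]

for _ in range(100):
    for w in workers:
        w["theta"] = train_step(w["theta"], w["lr"])

best = min(workers, key=lambda w: evaluate(w["theta"]))
print("best loss:", evaluate(best["theta"]), "with lr:", best["lr"])
```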

But all that comparative training requires a huge amount of resources, and sorting the good from the bad relies either on the gut feelings of individual engineers or on massive-scale searches with a manual component, in which engineers “weed out” the worst-performing neural nets to free up processing capacity for better ones.

What DeepMind and Waymo did in this experiment was essentially automate that weeding, automatically killing off the “bad” training runs and replacing them with better-performing spin-offs of the best-in-class networks running the task. That’s where evolution comes in, since the result is a kind of artificial natural selection. Yes, that does make sense — read it again.
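
Continuing the toy setup above (this reuses `random`, `workers`, `train_step` and `evaluate` from the previous sketch), here is a hedged sketch of that automated weeding: the exploit-and-explore loop at the heart of PBT. The interval, quartile cutoff and perturbation factors are illustrative guesses, not values from the DeepMind and Waymo work.

```python
import copy

EXPLOIT_INTERVAL = 10  # steps between "weeding" passes (illustrative)

def exploit_and_explore(population):
    # Rank by fitness; in this toy, lower loss is better.
    ranked = sorted(population, key=lambda w: evaluate(w["theta"]))
    cutoff = max(1, len(ranked) // 4)
    for loser in ranked[-cutoff:]:               # the worst performers...
        winner = random.choice(ranked[:cutoff])  # ...copy a top performer
        # Exploit: inherit the winner's parameters (weights, in a real net).
        loser["theta"] = copy.deepcopy(winner["theta"])
        # Explore: perturb the winner's hyperparameters to keep searching.
        loser["lr"] = winner["lr"] * random.choice([0.8, 1.2])

for step in range(1, 101):
    for w in workers:
        w["theta"] = train_step(w["theta"], w["lr"])
    if step % EXPLOIT_INTERVAL == 0:
        exploit_and_explore(workers)
```

Because the worst performers inherit both a winner’s weights and a perturbed copy of its hyperparameters, compute is continuously reallocated toward the most promising runs, which is where the reported savings in training time and resources come from.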

In order to avoid potential pitfalls with this method, DeepMind tweaked some aspects after early research, including evaluating models at fast, 15-minute intervals and building out strong validation criteria and example sets to ensure that the tests really were producing neural nets that perform better in the real world, and not just good pattern-recognition engines for the specific data they’d been fed.

Finally, the companies also developed a sort of “island population” approach, building sub-populations of neural nets that competed only with one another in limited groups, similar to how animal populations cut off from larger groups (i.e. limited to islands) develop characteristics that are far different, and sometimes better adapted, than those of their large-landmass cousins.
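
One way to picture that, continuing the same toy sketch, is to partition the population and run the weeding pass within each group; the island size here is an arbitrary illustration.

```python
ISLAND_SIZE = 5  # illustrative sub-population size

def islands(population):
    # Partition the population into isolated sub-populations.
    return [population[i:i + ISLAND_SIZE]
            for i in range(0, len(population), ISLAND_SIZE)]

for step in range(1, 101):
    for w in workers:
        w["theta"] = train_step(w["theta"], w["lr"])
    if step % EXPLOIT_INTERVAL == 0:
        # Weeding happens only within each island, so each sub-population
        # can evolve its own distinct strategy without being overrun by
        # whatever currently leads the global population.
        for island in islands(workers):
            exploit_and_explore(island)
```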

Overall, it’s a super interesting look at how deep learning and artificial intelligence can have a real impact on technology that is already involved in our daily lives in some cases, and will soon be far more so.

Postmates’ self-driving delivery rover will see with Ouster’s lidar

By Kirsten Korosec

Postmates’ cooler-inspired autonomous delivery robot, which will roll out commercially in Los Angeles later this year, will rely on lidar sensors from Ouster, a burgeoning two-year-old startup that recently raised $60 million in equity and debt funding.

Postmates unveiled the first generation of its self-described “autonomous rover” — known as Serve — late last year. The vehicle uses cameras and light detection and ranging sensors called lidar to navigate sidewalks, as well as a backup human who remotely monitors the rover and can take control if needed.

A new second-generation version made its debut onstage earlier this month at Fortune’s Brainstorm Tech event. The newer version looks identical to the original except for a few minor details, including a change in lidar sensors. The previous version was outfitted with sensors from Velodyne, a company that has long dominated the lidar industry.

The supplier contract is notable for Ouster, a startup trying to carve out market share from the giant Velodyne and stand out from a global pack of lidar companies that now numbers close to 70. And it could prove substantial for the company if Postmates takes Serve to other cities as planned.

Lidar measures distance using laser light to generate highly accurate 3D maps of the world around the car. Most in the self-driving car industry consider it a key piece of the technology required to safely deploy robotaxis and other autonomous vehicles.
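
The arithmetic behind that measurement is simple time-of-flight: fire a laser pulse, time its reflection and halve the round trip. A toy calculation in Python (the pulse timing is an invented example, not a spec of any Ouster sensor):

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_meters(round_trip_seconds):
    # The pulse travels out to the object and back, so halve the trip.
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after ~200 nanoseconds hit something ~30 m away.
print(distance_meters(200e-9))  # ≈ 29.98
```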

Ouster’s strategy has been to cast a wider net for customers by selling its lidar sensors to other industries, including robotics, drones, mapping, defense, building security, mining and agriculture companies. It’s an approach that Waymo is also pursuing for its custom lidar sensors, which will be sold to companies outside of self-driving cars. Waymo will initially target robotics, security and agricultural technology.

Ouster’s business model, along with its tech, has helped it land 437 customers to date and raise a total of $90 million.

The contract with Postmates is its first major customer announcement. COAST Autonomous announced earlier this week that it was using Ouster sensors for its low-speed autonomous shuttles. Self-driving truck companies Kodiak and Ike Robotics have also been using the sensors this year.

Ouster, which has 125 employees, uses complementary metal-oxide-semiconductor (CMOS) technology in its OS-1 sensors, the same tech found in consumer digital cameras and smartphones. The company has announced four lidar sensors to date, with resolutions from 16 to 128 channels, and two product lines, the OS-1 and OS-2.

Udelv partners with H-E-B on Texas autonomous grocery delivery pilot

By Darrell Etherington

Autonomous delivery company Udelv has signed yet another partner to launch a new pilot of its self-driving goods delivery service: Texas-based supermarket chain H-E-B Group. The pilot will provide service to customers in Olmos Park, just outside of downtown San Antonio, where the grocery retailer is based.

California-based Udelv will provide H-E-B with one of its second-generation Newton autonomous delivery vehicles, which are already in service in trials in the Bay Area, Arizona and Houston, providing deliveries on behalf of some of Udelv’s other clients, including Walmart.

Udelv CEO and founder Daniel Laury explained in an interview that the company is excited to be partnering with H-E-B because of the grocer’s reach in Texas, where it’s the largest grocery chain, with approximately 400 stores. This initial phase covers only one car and one store, and during this part of the pilot the vehicle will have a safety driver on board. But the plan includes the option to expand the partnership to cover more vehicles and eventually achieve full driverless operation.

“They’re really at the forefront of technology, in the areas where they need to be,” Laury said. “It’s a very impressive company.”

For its part, H-E-B Group has been in discussion with a number of potential partners for autonomous delivery trials, according to Paul Tepfenhart, SVP of Omnichannel and Emerging Technologies at H-E-B. But it liked Udelv specifically because of the company’s safety record, and because Udelv didn’t just come in with a set plan and a fully formed off-the-shelf offering – it truly partnered with H-E-B on what the final deployment of the pilot would look like.

Both Tepfenhart and Laury emphasized the importance of customer experience in providing autonomous solutions, and Laury noted that he thinks Udelv’s unique advantage in the increasingly competitive autonomous curbside delivery business is its attention to the robotics of the actual delivery and storage components of its custom vehicle.

“The reason I think we’ve been so successful is because we focused a lot on the delivery robotics,” Laury explained. “If you think about it, there’s no autonomous delivery business that works if you don’t have the robotics aspect of it figured out also. You can have an autonomous vehicle, but if you don’t have an automated cargo space where merchants can load [their goods] and consumers can unload the vehicle by themselves, you have no business.”

Udelv also thinks it has an advantage when it comes to its business model, which aims to generate revenue now, in exchange for providing actual value to paying customers, rather than counting on being supported entirely by funding from a wealthy investor or deep-pocketed corporate partners. Laury likens it to Tesla’s approach: the company has over 500,000 vehicles on the road helping it build its autonomous technology, but all of those are operated by paying customers who get all the benefits of owning their cars today.

“We want to be the Tesla of autonomous delivery,” Laury said. “If you think about it, Tesla has got 500,000 vehicles on the road […] if you think about this, of all the cars in the world that have some level of automated driver assistance (ADAS) or autonomy, I think Tesla’s 90% of them – and they get the customers to pay a ridiculous amount of money for that. Everybody else in the business is getting funding from something else. Waymo is getting funding from search; Cruise is getting funding from GM and SoftBank and others; Nuro is getting funding from SoftBank. So, pretty much everybody else is getting funding from a source that’s different from the actual business they’re supposed to be in.”

Laury says Udelv’s unique strength is its ability to provide value to partners like H-E-B today. Its focus on robotics, and on solving problems like engineering the loading and customer pick-up experience, puts the company in a unique position: it can fund its own research through revenue-generating services offered in-market now, rather than ten years from now.

Where May Mobility’s self-driving shuttles might show up next

By Kirsten Korosec

May Mobility might be operating low-speed self-driving shuttles in three U.S. cities, but its founders don’t view this as just another startup racing to deploy autonomous vehicle technology.

They describe the Ann Arbor-based company as a transportation service provider. As May Mobility’s co-founder and COO Alisyn Malek told TechCrunch, they’re in the “business of moving people.” Autonomous vehicle technology is just the “killer feature” to help them do that. 

TechCrunch recently spent the day with May Mobility in Detroit, where it first launched, to get a closer look at its operations, learn where it might be headed next and why companies in the industry are starting to back off previously ambitious timelines.

Malek will elaborate on which markets are most appealing to May Mobility onstage at TC Sessions: Mobility on July 10 in San Jose, where she’ll join Lia Theodosiou-Pisanelli, head of partner product and programs at Aurora, to talk about which products make the most sense for autonomous vehicle technology.
