Hyundai has signed a memorandum of understanding (MOU) with the city of Seoul to test autonomous vehicles on roads in the Gangnam district, BusinessKorea reports. The arrangement specifies that six vehicles will begin testing on 23 roads in December. Looking ahead to 2021, there will be as many as 15 of the cars, which are hydrogen fuel cell electric vehicles, testing on the roads.
Seoul will provide smart infrastructure to communicate with the vehicles, including connected traffic signals, and will also relay traffic and other info to the Hyundai vehicles as frequently as every 0.1 seconds. That kind of real-time information flow should help considerably with providing the visibility necessary to optimize safe operation of the autonomous test cars. For its part, Hyundai said it will be sharing information too, providing data from the self-driving tests that will be freely available to schools and other organizations looking to test their own self-driving technology within the city.
Together, Seoul and Hyundai hope to use the partnership to build out a world-leading downtown self-driving technology deployment, and to have that evolve into a commercial service, complete with dedicated autonomous vehicle manufacture by 2024.
Alphabet subsidiary X, which is the former Google X, focuses exclusively on ambitious “moonshots,” or applications of tech you might expect are science fiction, not a real product in development. Like a robot that can sort through office trash.
X does a lot of its work more quietly than other Alphabet companies — until it’s ready to share some of its progress. It has reached that point with the Everyday Robot Project, an ongoing effort that X has been working on for “the past couple of years,” according to project lead Hans Peter Brondmo, who in a Medium post today shed some light on what the project is and what it does.
Brondmo compares robotics today to computing in the 1950s and ’60s — it’s a working reality, but it’s happening in dedicated spaces, and the only people interacting with computers on the regular are specially trained operators using them for professional purposes. The challenge, then, is to usher in an era of robotics akin to the era of consumer computing — in other words, how do we get to a world where ordinary people live and interact with robots every day?
The challenges are both more mundane and more complex than you might imagine: They have everything to do with stuff we take for granted every day, like other people walking around, trash bins that are out at the curb one day and gone the next, furniture that moves around, different weather conditions and just about anything you can think of that’s a pretty normal part of everyday life but hard to predict exactly day-to-day. Robots work best with specificity and exactness, especially when it comes to programming.
The Everyday Robot Project knew this, and quickly determined that to create robots that are genuinely useful to actual people going about their lives, the key was to “teach” rather than “program,” according to Brondmo. That meant working with the team at Google AI, first in a lab setting, and then out in the world. That’s where it arrived at the robot it’s detailing today: One it successfully taught to sort through garbage at X’s own offices.
The robot, trained via simulation and reinforcement learning, among other techniques, managed to actually reduce the level of waste contamination (putting the wrong garbage in the wrong place and causing the whole contents of that bin to go to the landfill instead of being recycled, for instance) from around 20% to less than 5%. If you’ve ever worked in a building that is certified as green by some kind of officially recognized standard, then you might know how impressive this actually is in terms of overall impact.
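To make the "teach, don't program" idea concrete, here is a toy sketch of reward-driven learning for bin sorting. To be clear, this is not X's system, which relies on large-scale simulation and deep reinforcement learning; the item names, bins and reward scheme below are invented purely for illustration.

```python
import random

# Hidden ground truth the learner must discover by trial and error.
# (These items and bins are hypothetical examples, not X's categories.)
ITEMS_TO_BINS = {
    "soda_can": "recycling",
    "banana_peel": "compost",
    "chip_bag": "landfill",
}
BINS = ["recycling", "compost", "landfill"]

def train(episodes=2000, seed=0):
    """Learn a value table mapping each item type to each bin via rewards."""
    rng = random.Random(seed)
    q = {item: {b: 0.0 for b in BINS} for item in ITEMS_TO_BINS}
    for _ in range(episodes):
        item = rng.choice(list(ITEMS_TO_BINS))
        # Epsilon-greedy: mostly exploit the best-known bin, sometimes explore.
        if rng.random() < 0.1:
            bin_choice = rng.choice(BINS)
        else:
            bin_choice = max(q[item], key=q[item].get)
        reward = 1.0 if ITEMS_TO_BINS[item] == bin_choice else -1.0
        # Nudge the estimate toward the observed reward.
        q[item][bin_choice] += 0.1 * (reward - q[item][bin_choice])
    return q

def contamination_rate(q, trials=1000, seed=1):
    """Fraction of items the learned (greedy) policy places in the wrong bin."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        item = rng.choice(list(ITEMS_TO_BINS))
        if max(q[item], key=q[item].get) != ITEMS_TO_BINS[item]:
            wrong += 1
    return wrong / trials
```

After enough trials the greedy policy's contamination rate falls to near zero, which is the same kind of metric (items in the wrong bin) that X reports reducing.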
Aside from actually making a significant dent in the amount of unneeded waste heading to a landfill from a sizeable office, this development helps X prove out some of the feasibility of its ultimate goal of making robots everyday affairs for most people. There’s still a long way to go before robots are commonplace companions — the smartphones we carry around everywhere, in the general computing analogy — but this is a step in that direction.
MIT researchers have developed a new way to optimize how soft robots perform specific tasks — a huge challenge when it comes to soft robotics in particular, because robots with flexible bodies can basically move in an infinite number of ways at any given moment, so programming them to do something in the best way possible is a monumental task.
To make the whole process easier and less computationally intensive, the research team has developed a way to take what is effectively a robot that can move in infinite possible dimensions and simplify it to a representative “low-dimensional” model that can accurately be used to optimize movement, based on environmental physics and the natural ways a soft object shaped like any individual soft robot is most likely to bend in a given setting.
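As a rough illustration of why a low-dimensional model helps (this is a sketch of the general idea, not MIT's actual method; the modes, cost function and dimensions are invented), consider restricting a 30-joint soft body to two "natural" bending modes and searching only their coefficients:

```python
import itertools
import math

N_JOINTS = 30  # full model: 30 independent joint parameters

# Two hypothetical "natural" deformation modes of the body: a gentle bend
# and a tighter wiggle. Real methods derive these from physics.
MODES = [
    [math.sin(math.pi * i / N_JOINTS) for i in range(N_JOINTS)],
    [math.sin(2 * math.pi * i / N_JOINTS) for i in range(N_JOINTS)],
]

def expand(coeffs):
    """Map 2 mode coefficients to a full 30-joint configuration."""
    return [sum(c * m[i] for c, m in zip(coeffs, MODES)) for i in range(N_JOINTS)]

def cost(joints):
    # Stand-in objective: distance from a target bend the robot should reach.
    target = expand([1.0, -0.5])
    return sum((a - b) ** 2 for a, b in zip(joints, target))

def optimize_low_dim(steps=20):
    """Grid-search the 2 mode coefficients: steps**2 evaluations total."""
    grid = [i / (steps - 1) * 4 - 2 for i in range(steps)]  # span [-2, 2]
    best = min(itertools.product(grid, repeat=2),
               key=lambda c: cost(expand(list(c))))
    return list(best), steps ** 2

coeffs, evals = optimize_low_dim()
```

A 20-point grid over two mode coefficients costs 400 evaluations, whereas the same grid resolution over all 30 joints independently would cost 20**30, which is why collapsing the search space matters so much.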
So far, the MIT team behind this has demonstrated it in simulation only, but in this simulated environment it has seen significant improvements in terms of both speed and accuracy of programmed robot movement versus the more complex methods used today. In fact, across a number of tests of simulated robots with both 2D and 3D designs, and two- and four-legged physical designs, the researchers were able to show that optimizations that would normally take as many as 30,000 simulations to achieve were instead possible in just 400.
Why is any of this even important? Because it drastically shrinks the amount of computational overhead required to get good movement results out of soft robots, which is a key ingredient in making them practical for real-life applications. If programming a soft robot to do something genuinely useful, like navigating and carrying out an underwater damage assessment and repair, requires huge amounts of processing power and significant actual time, it’s not really viable for anyone to actually deploy.
In the future, the research team hopes to bring their optimization method out of simulation and into real-world testing, as well as full-scale development of soft robots from start to finish.
Robotics and AI is the hottest scientific mashup since The Big Bang Theory’s Sheldon Cooper met Amy Farrah Fowler. If you play a role in these world-changing technologies, join us at TC Sessions: Robotics & AI on March 3, 2020 at UC Berkeley’s Zellerbach Hall. What could be better than spending an entire day focused on melding minds with machines?
Well, how about exhibiting your early-stage startup to 1,500 of the world’s leading robotics and AI technologists, researchers, innovators and investors? It’s easy. Buy an Early-Stage Startup Exhibitor Package. The price includes four tickets, a 30-inch round highboy table, power, linen and a tabletop sign. Exhibitor space is limited, and we have only 11 tables left. Don’t miss this opportunity to showcase your work to people with the power to change the trajectory of your early-stage startup.
Want even more spotlight opportunity? Of course, you do. This year, in addition to interviews, panel discussions, speakers, breakout sessions and Q&As, we’re adding a pitch competition. Founders of any early-stage startup focused on robotics and AI can participate. It’s free, and all you need to do is apply here by February 1.
TechCrunch will review all applications and select 10 startups to pitch at a private event on March 2. You’ll pitch to TechCrunch editors, main-stage speakers and industry experts. We’ll have a panel of VC judges there to narrow the field to five finalists. The following day, those teams will take to the Main Stage at TC Sessions: Robotics + AI and pitch to the attending masses.
Whether you exhibit or pitch — why not do both? — you’ll expose your startup to the top leaders and investors in robotics and AI. Opportunity’s knocking and it’s up to you to kick down the door.
The next TC Sessions: Robotics & AI takes place on March 3, 2020 at UC Berkeley. Get your business in front of the people who can help you achieve your startup dreams. Buy your Early-Stage Startup Exhibitor Package today.
Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics & AI 2020? Contact our sponsorship sales team by filling out this form.
Exploring a distant moon usually means trundling around its uniquely inhospitable surface, but on icy ocean moons like Saturn’s Enceladus, it might be better to come at things from the bottom up. This rover soon to be tested in Antarctica could one day roll along the underside of a miles-thick ice crust in the ocean of a strange world.
It is thought that these oceanic moons may be the most likely on which to find signs of life past or present. But exploring them is no easy task.
Little is known about these moons, and the missions we have planned are very much for surveying the surface, not penetrating their deepest secrets. But if we’re ever to know what’s going on under the miles of ice (water or other) we’ll need something that can survive and move around down there.
The Buoyant Rover for Under-Ice Exploration, or BRUIE, is a robotic exploration platform under development at the Jet Propulsion Laboratory in Pasadena. It looks a bit like an industrial-strength hoverboard (remember those?), and as you might guess from its name, it cruises around the ice upside-down by making itself sufficiently buoyant to give its wheels traction.
“We’ve found that life often lives at interfaces, both the sea bottom and the ice-water interface at the top. Most submersibles have a challenging time investigating this area, as ocean currents might cause them to crash, or they would waste too much power maintaining position,” explained BRUIE’s lead engineer, Andy Klesh, in a JPL blog post.
Unlike ordinary submersibles, though, this one would be able to stay in one place and even temporarily shut down while maintaining its position, waking only to take measurements. That could immensely extend its operational duration.
While the San Fernando Valley is a great analog for many dusty, sun-scorched extraterrestrial environments, it doesn’t really have anything like an ice-encrusted ocean to test in. So the team went to Antarctica.
The project has been in development since 2012, and has been tested in Alaska (pictured up top) and the Arctic. But the Antarctic is the ideal place to test extended deployment — ultimately for up to months at a time. Try that where the sea ice retreats to within a few miles of the pole.
Testing of the rover’s potential scientific instruments is also in order, since in a situation where we’re looking for signs of life, accuracy and precision are paramount.
JPL’s techs will be supported by the Australian Antarctic Program, which maintains Casey station, from which the mission will be based.
Brava had a lot of things working in its favor as startups go. It was founded in 2015 by serial executive John Pleasants, whose past stints include co-president of Disney Interactive Media Group, COO of Electronic Arts and CEO of Ticketmaster.
His plan to create a snazzy direct-to-consumer line of smart hardware and software products, beginning with the Brava oven, also attracted tens of millions of dollars from an impressive line-up of backers, including True Ventures, TPG Growth and Lightspeed Venture Partners, among others. Indeed, though some sophisticated kitchen devices have come and gone (Juicero), plenty of people liked what Pleasants and his growing team in Redwood City, Calif., were trying to cook up. One of those admirers, apparently, was the Middleby Corporation, a publicly traded commercial and residential cooking and industrial process equipment company in Illinois that just acquired Brava — though neither Brava nor Middleby is disclosing terms of the deal.
We were in touch via email yesterday with both Pleasants and the CEO of Middleby, Tim FitzGerald, to learn what they could share about the tie-up, as well as to ask what happens to Brava and its dozens of employees now.
TC: This was a young company. Why turn around and sell it?
JP: The company itself is four years old and we’ve had product available in market for one year. We’ve been venture funded to date and had the option to continue raising growth capital or merge with Middleby Corporation. Brava’s mission has always been to enable everyone to cook delicious, healthy home-cooked food with minimal time and effort, and we believe the fastest way to achieve this bold goal is through a strategic partnership with someone who can help make that happen.
TC: How did Brava and Middleby come together? Who brokered the first conversation? Was Brava talking with anyone else?
JP: We’ve been in talks with many people about financing, and a select group of strategics about a deeper partnership to achieve our objective. We had the assistance of City Capital in the process, and they made the introduction to Middleby in Chicago.
TC: How much is Middleby paying for the company? Also, is this an all-cash deal?
JP: While we’re not disclosing the total amount, the consideration includes a mix of cash and stock.
TC: So what’s next? Will Middleby retain the Brava name or will this be phased out over time?
JP: Brava as it’s known today will not only continue but see accelerated growth and expansion. We will continue to sell the product and support our customers under the Brava brand while further innovating new products and services for our customers.
TF: The Brava name will remain. The product and technology will enhance our existing residential and commercial kitchen appliance portfolio. In Middleby Residential, we manufacture and sell Viking Range and other well-known consumer brands.
TC: How many people does Brava currently employ and how many if any are going to Middleby?
JP: Brava employs 38 people and all will be going to Middleby. I will remain as the CEO of Brava and will also work with other Middleby divisional leaders to leverage Brava’s light cooking platform and services for their existing brands. We’re excited by this because we currently have many ideas and plans for leveraging the Brava technology across new form factors, business segments (residential and commercial) and geographies. This all becomes more feasible with Middleby.
TC: We last talked before the Brava oven was out in the world. How many units did you wind up selling?
JP: We’re closing in on 5,000 customers and expect to have a big holiday.
TC: What were some of the lessons learned with this experience?
JP: People love it. You can see this every day throughout our online communities. It’s not just about the quality of food and the ease in creating it . . . we hear comments all the time about how spouses who hardly ever cooked now do, how kids who never liked vegetables now ask for more . . .
In terms of what people want that doesn’t currently exist, [I’d say] more recipes and programs (we have thousands, but there are so many more we can do) and more flexibility; we can uniquely cook multiple ingredients simultaneously to perfection with our light-cooking technology and this enables lots of fun combinations [but] our customers would like even more flexibility in mixing and matching ingredients.
TC: Any business lessons?
JP: In terms of business lessons, it’s challenging to explain Brava’s full value proposition in a quick ad on social media. We have revolutionary technology that enables a new way of cooking that’s better, easier, faster — and that sounds almost too good to be true.
TC: Do you think the market for smart cooking appliances is big enough at this point? What do you think are the remaining hurdles and how do consumers get past them?
JP: The “smart cooking appliance” market is in its infancy. There are still very few pioneers in the space and household penetration is negligible. But this is all about to change. Once people know someone who can personally attest to the benefits, I fundamentally believe the adoption curve will bend exponentially. People spend a lot of money on household appliances…once they can be “smart” and “chef powered” and deliver well against that promise, why would most people not want a “smart” one versus a “non-smart” one?
TF: We see this market growing significantly with the next generation [of home cooks] who currently rely on and demand a digital experience.
A common tactic in both amateur and professional sports – and even in competitions as mundane as a casual board game night – is trash talk. But the negative effect of trash talk may have less to do with the skill of the repartee involved, and more to do with the simple fact that it’s happening at all. A new study conducted by researchers at Carnegie Mellon University suggests that even robots spitting out pretty lame pre-programmed insults can have a negative impact on human players.
CMU’s study involved programming one of SoftBank’s Pepper humanoid robots to deliver scorchers like “I have to say you are a terrible player” to a group of 40 participants, who were playing the robot in a game called “Guards and Treasures,” a version of a strategy game often used for studying rationality. During the course of the experiment, participants played 35 times against the robot – some getting bolstering, positive comments from the robot, while others received negative criticism.
Both groups of participants improved at the game over time – but the ones getting derided by the bots didn’t score as highly as the group that was praised.
It’s pretty well-established that people excel when they receive encouragement from others – but that has generally meant other humans. This study provides early evidence that people could be similarly affected by robotic companions – even ones that don’t look particularly human-like. The researchers still want to do more investigation into whether Pepper’s humanoid appearance affected the outcome, vs. say a featureless box or an industrial robot acting as the automaton opponent and doling out the same kind of feedback.
The results of this and related research could be hugely applicable to areas like at-home care, something companies including Toyota are pursuing to address the needs of an aging population. It could also come into play in automated training applications, both at work and in other settings like professional sports.
Picnic, a robotics startup that focuses on food production, announced today that it has raised $5 million in additional seed funding. The new round was led by Creative Ventures, with participation from Flying Fish Partners and Vulcan Capital.
The company also said it has hired Kennard Nielsen, a product engineer who worked on the first four Kindle Fire tablets, the Nike FuelBand, Microsoft’s Xbox and Doppler Labs’ Here One at previous positions, as its new vice president of engineering.
The new funding will be used for product development, hiring and marketing.
Picnic is known for an automated pizza assembly system that launched in October. The configurable, modular platform currently focuses on high-volume pizza production and can reach rates of up to 180 18-inch pizzas or 300 12-inch pizzas an hour. The system fits into existing kitchen layouts, including food trucks and kiosks, and integrates with Picnic’s software to provide backend data and cloud analytics that help with consistency, speed and reducing food waste.
Picnic operates on a “robotics-as-a-service” model, with users paying for the system on a subscription basis. The pizza assembly system’s first customers were Centerplate, a food and hospitality provider for large event venues, and Washington-based restaurant chain Zaucer Pizza.
In June, Picnic also hired Mike McLaughlin, a food and beverage industry veteran who previously held roles at BUNN, Concordia Coffee Systems and Starbucks, as its vice president of product.
This holiday season, we’re going to be looking back at some of the best tech of the past year, and providing fresh reviews in a sort of ‘greatest hits’ across a range of categories. First up: iRobot’s top-end home cleaning robots, the Roomba s9+ robot vacuum, and the Braava m6 robot mop and floor sweeper. Both of these represent the current peak of iRobot’s technology, and while that shows up in the price tag, it also shows up in performance.
The iRobot Roomba S9+ is actually two things: The Roomba S9, which is available separately, and the Clean Base that enables the vacuum to empty itself after a run, giving you many cleanings before it needs you to actually open up a bin or replace a bag. Both the vacuum and its base are WiFi-connected, and controllable via iRobot’s app, as well as Google Assistant and Alexa. Combined, it’s the most advanced autonomous home vacuum you can get, and it manages to outperform a lot of older or less sophisticated robot vacuums even in situations that have historically been hard for this kind of tech to handle.
Like the Roomba i7 before it (which is still available and still also a great vacuum, for a bit less money), the S9 uses what’s called SLAM (Simultaneous Localization and Mapping), and a specific variant of that called vSLAM (the ‘v’ stands for ‘visual’). This technology means that as it works, it’s generating and adapting a map of your home to ensure that it can clean more effectively and efficiently.
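As a loose illustration of the mapping half of that idea (iRobot's actual vSLAM pipeline is camera-based and far more sophisticated; the grid world and floor plan below are invented for illustration), here is how a robot can build a reusable map of free and blocked cells simply by exploring:

```python
# Ground-truth floor plan the robot discovers as it moves:
# '#' = wall or furniture, '.' = open floor.
FLOOR = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]

def explore(floor, start=(1, 1)):
    """Depth-first exploration that records every reachable cell's status."""
    known = {}              # (row, col) -> 'free' or 'blocked'
    stack = [start]
    while stack:
        r, c = stack.pop()
        if (r, c) in known:
            continue        # already mapped this cell
        if floor[r][c] == "#":
            known[(r, c)] = "blocked"
            continue        # can't drive through walls
        known[(r, c)] = "free"
        # Queue the four neighboring cells for inspection.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            stack.append((r + dr, c + dc))
    return known

mapped = explore(FLOOR)
free_cells = sum(1 for v in mapped.values() if v == "free")
```

Once the map exists, the robot can plan room-by-room runs or resume a cleaning job mid-map instead of wandering blindly, which is roughly the benefit the Roomba's persistent maps provide.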
After either a few dedicated training runs (which you can opt to send the vacuum on when it’s learning a new space) or a few more active vacuum runs, the Roomba S9 will remember your home’s layout, and provide a map that you can customize with room dividers and labels. This then turns on the vacuum’s real smart superpowers, which include being able to vacuum just specific rooms on command, as well as features like letting it easily pick up where it left off if it needs to return to its charging station mid-run. With the S9 and its large battery, the vacuum can do an entire run of my large two-bedroom condo on a single charge (the i7 I used previously needed two charges to finish up).
The S9’s vSLAM and navigation systems seem incredibly well-developed in my use: I’ve never once had the vacuum become stuck, or confused by changes in floor colouring, even going from a very light to a very dark floor (this is something that past vacuums have had difficulty with). It infallibly finds its way back to the Clean Base, and also never seems to be flummoxed by even drastic changes in lighting over the course of the day.
So it’s smart, but does it suck? Yes, it does – in the best possible way. Just like it doesn’t require stops to charge up, it also manages to clean my entire space with just one bin. There’s a lot more room in there thanks to the new design, and it handles even my dog’s hair with ease (my dog sheds a lot, and it’s very obvious light hair against dark wood floors). The new angled design on the front of the vacuum means it does a better job of getting into corners than previous fully round designs, and that shows, because corners are where clumps of hair go to gather in a dog-friendly household.
The ‘+’ in the S9+ is that Clean Base as I mentioned – think of it like the tower of lazy cleanliness. The base has a port that sucks dirt from the S9 when it’s done a run, shooting it into a bag in the top of the tower that can hold up to 30 full bins of dirt. That ends up being a lot in practice – it should last you months, depending on house size. Replacement bags cost $20 for three, which is probably what you’ll go through in a year, so it’s really a negligible cost for the convenience you’re getting.
The Roomba S9’s best friend, if you will, is the Braava m6. This is iRobot’s latest and greatest smart mop, which is exactly what it sounds like: Whereas Roombas vacuum, the Braava mops, using either single-use disposable or microfibre washable/reusable pads, as well as iRobot’s own cleaning fluid, to clean hardwood, tile, vinyl, cork and other hard-surface floors once the vacuuming is done. It can also just run a dry sweep, which is useful for picking up dust and pet hair, as a finishing touch on the vacuum’s run.
iRobot has used its unique position in offering both of these types of smart devices to have them work together – if you have both the S9 and the Braava m6 added to your iRobot Home app, you’ll get an option to mop the floors right after the vacuum job is complete. It’s an amazing convenience feature, and one that works fairly well – but there are some differences in the smarts powering the Braava m6 and the Roomba s9 that lead to some occasional challenges.
The Braava m6 doesn’t seem to be quite as capable when it comes to mapping and navigating its surroundings. My condo layout is relatively simple, all one level with no drops or gaps. But the m6 has encountered some scenarios where it doesn’t seem to be able to cross a threshold or make sense of all floor types. Based on error messages, it seems like it’s identifying some surfaces as ‘cliffs’ or steep drops when transitioning back from lighter floors to darker ones.
What this means in practice is that a couple of times per run, I have to reposition the Braava manually. There are ways to solve for this, however, built into the software: Thanks to the smart mapping feature, I can just direct the Braava to focus only on the rooms with dark hardwood, or I can just adjust it when I get an alert that it’s having difficulty. It’s still massively more convenient than mopping by hand, and typically the m6 does about 90 percent of the apartment before it runs into difficulty in one of these few small trouble areas.
If you’ve read online customer reviews of the m6, you may also have seen complaints that it can leave tire marks on dark floors. I found that to be true – but with a few caveats. They definitely aren’t as pronounced as I expected based on some of the negative reviews out there, and I have very dark floors. They’re also only really visible in direct sunlight, and even then only faintly. They also fade pretty quickly, which means you won’t notice them most of the time if you’re mopping only once every few vacuum runs. In the end, it’s something to be aware of, but for me it’s not a dealbreaker – far from it. The m6 still does a fantastic job overall of mopping and sweeping, and saves me a ton of labor on what is normally a pretty back-hostile manual task.
These iRobot home cleaning gadgets are definitely high-end, with the s9 starting at $1,099.99 ($1,399.99 with the Clean Base) and the m6 starting at $499.99. You can get a bundle with both starting at $1,439.98, but even that is still a lot for cleaning appliances. This is definitely a case where the ‘you get what you pay for’ maxim proves true, however. Either the s9+ alone, or the combo of the vacuum and mop, represents a huge convenience, especially when used on a daily or similarly regular schedule, vs. doing the same thing manually. The s9 also frankly does a better job than I ever could with my own manual vacuum, since it’s much better at getting into corners, under couches, and cleaning along and under trim thanks to its spinning brush. And asking Alexa to have Roomba start a cleaning run feels like living in the future in the best possible way.
NASA has added five companies to the list of vendors that are cleared to bid on contracts for the agency’s Commercial Lunar Payload Services (CLPS) program. This list, which already includes nine companies from a previous selection process, now adds SpaceX, Blue Origin, Ceres Robotics, Sierra Nevada Corporation and Tyvak Nano-Satellite Systems. All of these companies can now place bids on NASA payload delivery to the lunar surface.
This basically means that these companies (which join Astrobotic Technology, Deep Space Systems, Draper Laboratory, Firefly Aerospace, Intuitive Machines, Lockheed Martin Space, Masten Space Systems, Moon Express and OrbitBeyond) can build and fly lunar landers in service of NASA missions. They’ll compete with one another for these contracts, which will involve lunar surface deliveries of resources and supplies to support NASA’s Artemis program missions, the first major goal of which is to return humans to the surface of the Moon by 2024.
These providers are specifically chosen to support delivery of heavier payloads, including “rovers, power sources, science experiments” and more, like NASA’s VIPER (Volatiles Investigating Polar Exploration Rover), which will hunt for water on the Moon. All of these will be used both to establish a permanent presence on the lunar surface for astronauts to live and work from, and to complete key research needed to make getting and staying there a viable reality.
NASA has chosen to contract out rides to the Moon instead of running its own as a way to gain cost and speed advantages, and it hopes that these providers will be able to also ferry commercial payloads on the same rides as its own equipment to further defray the overall price tag. The companies will bid on these contracts, worth up to $2.6 billion through November 2028 in total, and NASA will select a vendor for each based on cost, technical feasibility and when they can make it happen.
Blue Origin founder Jeff Bezos announced at this year’s annual International Astronautical Congress that the company would be partnering with Draper, as well as Lockheed Martin and Northrop Grumman, for an end-to-end lunar landing system. SpaceX, meanwhile, revealed that it will be targeting a lunar landing of its next spacecraft, the Starship, as early as 2022 in an effort to help set the stage for the 2024-targeted Artemis landing.
SpaceX is going to launch a payload for client Nanoracks aboard one of its new rideshare missions, currently targeting late 2020, that will demonstrate a very ambitious piece of tech from the commercial space station company. Nanoracks is sending up a payload platform that will show off how it can use a robot to cut material very similar to that of the upper stages used in orbital spacecraft – something Nanoracks wants to eventually do to help convert these spent and discarded stages (sometimes called ‘space tugs’ because they generally move payloads from one area of orbit to another) into orbital research stations, habitats and more.
The demonstration mission is part of Nanoracks’ ‘Space Outpost Program,’ which aims to address the future need for in-space orbital commercial platforms while simultaneously making use of existing vehicles and materials designed for space. Through use of the upper stages of spacecraft left behind in orbit, the company hopes to show how it one day might be able to greatly reduce the costs of setting up in-space stations and habitats, broadening the potential access to these kinds of facilities for commercial space companies.
This will be the first-ever demonstration of structural metal cutting in space, provided the demo goes as planned, and it could be a key technology not just for establishing more permanent research facilities in Earth’s orbit, but also for setting up infrastructure to help us get to, and stay at, other destinations like the Moon and Mars.
Nanoracks has a track record of delivering when it comes to space station technology: It’s the first company to own and operate its own hardware on the International Space Station, and it’s accomplished a lot since its founding in 2009. This demo mission is also funded via a contract in place with NASA.
Also going up on the same mission is a payload of eight Spire LEMUR-2 CubeSats, which Nanoracks brokered on behalf of the global satellite operator. That late 2020 date is subject to change, as are most of the long-tail SpaceX missions, but whenever it takes place it’ll be a key moment in commercial space history to watch.
On March 3 next year, TechCrunch will host the fourth annual TC Sessions: Robotics + AI at UC Berkeley’s Zellerbach Hall. This time around we’re adding a new twist to the incredible line-up of speakers, breakout sessions and Q&As: a pitch-off for early-stage companies in the robotics and AI space.
How it works: The night before the event, 10 startups, chosen through an online application process, will pitch at a private event with TechCrunch editors, main-stage speakers and industry experts. A panel of VC judges will select the top five teams to then pitch the next day on the main stage at TC Sessions: Robotics + AI.
It’s a once-in-a-lifetime opportunity for founders to get their company in front of top-tier leaders and investors in the industry, as well as to receive video coverage on TechCrunch. We expect 1,500 attendees at the show and tens of thousands more online.
Extra treat: Each of the 10 startup team finalists will receive two free tickets to attend the show the next day.
Apply here by February 1. TechCrunch will review applications and notify companies by February 15 so the founders have time to prepare. So, what are you waiting for? Get some spotlight!
Not interested in the pitch-off but want to attend this fantastic show? Grab your Early-Bird pass here before it’s too late!
The problem of how to find the potential treasure trove hidden in millions of pounds of trash is getting a high-tech answer as investors funnel $16 million into the recycling robots built by Denver-based AMP Robotics.
For recyclers, the commercialization of robots tackling industry problems couldn’t come at a better time. Their once-stable business has been turned on its head by trade wars and low unemployment.
Recycling businesses used to be able to rely on China to buy up any waste stream (no matter the quality of the material). However, about two years ago, China decided it would no longer serve as the world’s garbage dump and put strict standards in place for the kinds of raw materials it would be willing to receive from other countries. The result has been higher costs at recycling facilities, which are now required to sort their garbage more effectively.
At the same time, low unemployment rates are putting the squeeze on labor availability at facilities where humans are basically required to hand-sort garbage into recyclable materials and trash.
Given the economic reality, recyclers are turning to AMP’s technology — a combination of computer vision, machine learning and robotic automation to improve efficiencies at their facilities.
That’s what attracted Sequoia Capital to lead the company’s latest investment round — a $16 million Series A investment the company will use to expand its manufacturing capacity and boost growth as it looks to expand into international markets.
“We are excited to partner with AMP because their technology is changing the economics of the recycling industry,” said Shaun Maguire, partner at Sequoia, in a statement. “Over the last few years, the industry has had their margins squeezed by labor shortages and low commodity prices. The end result is an industry proactively searching for cost-saving alternatives and added opportunities to increase revenue by capturing more high-value recyclables, and AMP is emerging as the leading solution.”
The funding will be used to “broaden the scope of what we’re going after,” says chief executive Matanya Horowitz. Beyond reducing sorting costs and improving the quality of the materials that recycling facilities can ship to buyers, the company’s computer vision technologies can actually help identify branded packaging and be used by companies to improve their own product life cycle management.
“We can identify… whether it’s a Coke or Pepsi can or a Starbucks cup,” says Horowitz. “So that people can help design their product for circularity… we’re building out our reporting capabilities and that, to them, is something that is of high interest.”
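To illustrate the kind of reporting Horowitz describes, here is a minimal, hypothetical sketch (not AMP’s actual system): assuming a vision model has already labeled each item on the sorting line with a brand and material, the detections can be aggregated into a per-brand circularity report. The detection format and function names are invented for illustration.

```python
# Hypothetical sketch: aggregate per-item vision detections into a
# brand-level report. Assumes an upstream model has already produced
# one {"brand": ..., "material": ...} dict per sorted item.
from collections import Counter

def brand_report(detections):
    """Count detected items per (brand, material) pair."""
    return Counter((d["brand"], d["material"]) for d in detections)

detections = [
    {"brand": "Coke", "material": "aluminum"},
    {"brand": "Starbucks", "material": "paper"},
    {"brand": "Coke", "material": "aluminum"},
]
report = brand_report(detections)
print(report[("Coke", "aluminum")])  # 2
```

A real deployment would feed these counts back to packaging companies so they can see how much of their product actually survives the sorting line in recoverable form.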
That combination of robotics, computer vision and machine learning has potential applications beyond the recycling industry as well, according to Horowitz. Automotive scrap and construction waste are other areas where the company has seen interest for its combination of software and hardware.
Meanwhile, the core business of recycling is picking up. In October, the company completed the installation of 14 robots at Single Stream Recyclers in Florida. It’s the largest single deployment of robots in the recycling industry and the robots, which can sort and pick twice as fast as people with higher degrees of accuracy, are installed at sorting lines for plastics, cartons, fiber and metals, the company said.
AMP’s business has two separate revenue streams — a robotics as a service offering and a direct sales option — and the company has made other installations at sites in California, Colorado, Indiana, Minnesota, New York, Pennsylvania, Texas, Virginia and Wisconsin.
The traction the company is seeing in its core business was validating for early investors like BV, Closed Loop Partners, Congruent Ventures and Sidewalk Infrastructure Partners, the Alphabet subsidiary’s new spin-out that invests in technologies to support new infrastructure projects.
For Mike DeLucia, the Sidewalk Infrastructure Partners principal who led the company’s investment into AMP Robotics, the deal is indicative of where his firm will look to commit capital going forward.
“It’s a technology that enables physical assets to operate more efficiently,” he says. “Our goal is to find the technologies that enable really exciting infrastructure projects, back them and work with them to deliver projects in the physical world.”
Investors like DeLucia and Abe Yokell, from the investment firm Congruent Ventures, think that recycling is just the beginning. Applications abound for AMP Robotics’ machine learning and computer vision technologies in areas far beyond the recycling center.
“When you think about how technology is able to impact the built environment, one area is machine vision,” says Yokell. “[Machine learning] neural nets can apply to real-world environments, and that stuff has gotten cheaper and easier to deploy.”
Rocket launch startup Rocket Lab is all about building out rapid-response space-launch capabilities, and founder/CEO Peter Beck is showing off its latest advancement in service of that goal: A room-sized manufacturing robot named “Rosie.”
Rosie is tasked with processing the carbon composite components of Rocket Lab’s Electron launch vehicle. That translates to basically getting the rocket flight-ready, and there’s a lot involved in that — it’s a process that normally can take “hundreds of hours,” according to Beck. So how fast can Rosie manage the same task?
“We can produce one launch vehicle in this machine every 12 hours,” Beck says in the video. That includes “every bit of marking, every bit of machining, every bit of drilling,” he adds.
Meet Rosie. She processes Electron's composite stages in just 12 hours. pic.twitter.com/NcC34Ylg66
— Rocket Lab (@RocketLab) November 13, 2019
This key new automation tool essentially takes something that was highly bespoke and manual and turns it into something eminently repeatable and expedited, which is a necessary ingredient if Rocket Lab is ever to accomplish its goal of providing high-frequency launches to small satellite customers with very little turnaround time. The company’s New Zealand launch facility recently landed an FAA license that helps sketch out the extent of its ambition, as it’s technically cleared to launch rockets as often as every 72 hours.
In addition to innovations like Rosie, Rocket Lab uses 3D printing for components of its launch vehicle engines that result in single-day turnaround for production, versus weeks using more traditional methods. It’s also now working on an ambitious plan for rocket recovery, which should help further with providing high-frequency launch capabilities as it’ll mean they don’t have to build entirely new launch vehicles for every mission.
An expensive experiment in localized, robotic manufacturing has been abandoned by Adidas, which has announced that it will close its robotic “Speedfactories” in Atlanta and Ansbach, Germany, within six months. The company sugar-coated the news with a promise to repurpose the technology for use at its existing human-staffed factories in Asia.
The factories were established in 2016 (Ansbach) and 2017 (Atlanta) as part of a strategy to decentralize its manufacturing processes. The existing model, like so many other industries, is to produce the product in eastern Asia, where labor and overhead is less expensive, then ship it as needed. But this is a slow and clumsy model for an industry that moves as quickly as fashion and athletics.
“Right now, most of our products are made out of Asia and we put them on a boat or on a plane so they end up on Fifth Avenue,” said Adidas CMO Eric Liedtke in an interview last year at Disrupt SF about new manufacturing techniques. The Speedfactories were intended to change that: “Instead of having some sort of micro-distribution center in Jersey, we can have a micro-factory in Jersey.”
Ultimately this seems to have proven more difficult than expected. As other industries have found in the rush to automation, it’s easy to overshoot the mark and overcommit when the technology just isn’t ready.
Robotic factories are a powerful tool but difficult to quickly reconfigure or repurpose, since it takes specialty knowledge to set up racks of robotic arms, computer vision systems, and so on. Robotics manufacturers are making advances in this field, but for now it’s a whole lot harder than training a human workforce to use standard tools on a different pattern.
In a press release, Adidas global operations head Martin Shankland explained that “The Speedfactories have been instrumental in furthering our manufacturing innovation and capabilities,” and that for a short time they even brought products to market in a hurry. “That was our goal from the start,” he says, though presumably things played out a bit differently in the pitch decks from 2016.
“We very much regret that our collaboration in Ansbach and Atlanta has come to an end,” Shankland said. Oechsler, the high-tech manufacturing partner that Adidas worked with, feels the same. “Whilst we understand adidas’ reasons for discontinuing Speedfactory production at Oechsler, we regret this decision,” said the company’s CEO, Claudius Kozlik, in the press release. The factories will shut down by April, presumably eliminating or shifting the 160 or so jobs they provided, but the two companies will continue to work together.
The release says that Adidas will “use its Speedfactory technologies to produce athletic footwear at two of its suppliers in Asia” starting next year. It’s not really clear what that means, and I’ve asked the company for further information.
MIT’s Biomimetics Robotics department took a whole herd of its new ‘mini cheetah’ robots out for a group demonstration on campus recently – and the result is an adorable, impressive display of the current state of robotic technology in action.
The school’s students are seen coordinating the actions of nine of the dog-sized robots running through a range of activities, including coordinated movements, doing flips, springing in slow motion from under piles of fall leaves and even playing soccer.
The mini cheetah weighs just 20 pounds, and its design was revealed for the first time earlier this year by a team of robotics developers working at MIT’s Department of Mechanical Engineering. The mini cheetah is a shrunk-down version of the Cheetah 3, a much larger robot that is more expensive to produce, far less light on its feet and not nearly as customizable.
The mini cheetah was designed for Lego-like assembly from off-the-shelf parts, as well as for durability and relatively low cost. It can walk both right-side up and upside down, and its most impressive ability just might be managing a full backflip from a standstill. It can also run at speeds of up to 5 miles per hour.
Researchers working on the robot set out to build a team of them after demonstrating that first version back in May, and are now working with other teams at MIT to loan them out for additional research.
Researchers at MIT have developed a new method of navigation for robots that could be very useful for the range of companies working on autonomous last-mile delivery. In short, the team has worked out how a robot can figure out the location of a front door, without being provided a specific map in advance.
Most last-mile autonomous delivery robots today, including the ‘wheeled cooler’-style variety that was pioneered by Starship and has since been adopted by a number of other companies, including Postmates, basically meet customers at the curb. Mapping is one of the biggest barriers to having future delivery bots go all the way to the door, the way the humans who make those deliveries do today.
MIT News points out that mapping an entire neighborhood with the level of specificity required to do true front-door delivery would be incredibly difficult – particularly at national (let alone global) scale. Since that seems unlikely to happen, and especially unlikely for every company looking at building autonomous delivery networks to source separately, they set out to devise a navigation method that lets a robot process cues in its surroundings on the fly to figure out a front door’s location.
This is a variation on what you may have heard referred to as SLAM, or simultaneous localization and mapping. The MIT team’s twist on this approach is that in place of a semantic map, in which the robot identifies objects in its surroundings and labels them, it builds a ‘cost-to-go’ map: using data from training maps, the robot color-codes its surroundings into a heat map estimating which areas are likely to be close to a front door and which are not, and then immediately charts the most efficient path to the door based on that information.
It’s a much, much more simplified version of what we do when we encounter new environments we’ve never seen directly before – you know what’s likely to be the front door of a house you’ve never seen just by looking at it, and you know that essentially because you’re comparing it against your memory of past houses and how those properties have been laid out, even if you’re doing that all without even thinking about it.
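The cost-to-go idea can be sketched in a few lines. This is an illustrative toy, not the MIT team’s code: each grid cell holds an estimated cost of reaching the goal (the front door), and the robot simply steps to whichever neighboring cell has the lowest cost. The sketch assumes a well-formed map in which costs decrease monotonically toward a single zero-cost goal cell.

```python
# Toy "cost-to-go" navigation: follow decreasing cost values on a
# grid until reaching the zero-cost goal cell (the "front door").

def greedy_path(cost_map, start):
    """Greedily descend the cost-to-go map from start to the goal."""
    rows, cols = len(cost_map), len(cost_map[0])
    r, c = start
    path = [start]
    while cost_map[r][c] > 0:
        # Look at the four grid neighbors and move to the cheapest one.
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
        r, c = min(neighbors, key=lambda p: cost_map[p[0]][p[1]])
        path.append((r, c))
    return path

# A 3x3 heat map: 0 marks the front door; higher values are cells
# the training data suggests are farther from any door.
cost_map = [
    [4, 3, 2],
    [3, 2, 1],
    [2, 1, 0],
]
print(greedy_path(cost_map, (0, 0)))
```

The real system, of course, estimates those cell costs on the fly from camera input rather than reading them from a prebuilt grid, which is exactly what frees it from needing a door-level map of every neighborhood.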
Delivery is only one use case for this kind of intelligent local environment mapping, but it’s a good one that might see actual commercial use sooner rather than later.
MIT’s Computer Science and Artificial Intelligence Laboratory has come up with a clever way for its small cube-like robots, which can move on their own, to communicate and coordinate with one another for self-assembly. The behavior is described by MIT researchers as somewhat ‘hive-like,’ and in the video above you can see what they mean by that.
These cube bots can roll across the ground, navigate up and across each other, and even jump short distances. And thanks to recent improvements made by the team working on the project, they can also communicate in a basic way using unique barcode identifiers on the faces of the blocks to allow them to identify one another. These 16 blocks can now use their communication system and their ability to move themselves around to perform tasks including producing various shapes, or even following arrows or light signals.
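The face-barcode scheme can be sketched as a simple encode/decode pair. This is a hypothetical illustration, not CSAIL’s actual protocol: each face’s tag encodes which block it belongs to and which face it is, so a block that reads a neighbor’s tag learns exactly who it is touching and on which side.

```python
# Hypothetical face-tag scheme: every face of every block carries a
# unique printable tag encoding (block ID, face name).

def make_tag(block_id, face):
    """Encode a block ID and face name into a tag string."""
    return f"B{block_id:02d}-{face}"

def read_tag(tag):
    """Decode a tag string back into (block_id, face)."""
    block_part, face = tag.split("-")
    return int(block_part[1:]), face

tag = make_tag(7, "TOP")
print(tag)            # B07-TOP
print(read_tag(tag))  # (7, 'TOP')
```

With only 16 blocks and six faces each, even a very low-capacity visual code like this is enough for neighbors to establish who is adjacent to whom, which is the raw information the swarm needs to coordinate into shapes.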
Their current abilities are pretty limited, but the researchers envision a time when a larger and more advanced version of this system could be used to deploy self-assembling bots that efficiently build structures like bridges, ramps or even staircases for use in disaster response or rescue scenarios. Of course, they also theorize these things might be pretty attractive for more mundane applications, like gaming, too.
The Station is back for another week of news and analysis on all the ways people and goods move from Point A to Point B — today and in the future. As always, I’m your host Kirsten Korosec, senior reporter at TechCrunch.
Portions of the newsletter will be published as an article on the main site after it has been emailed to subscribers (that’s what you’re reading now). To get everything, you have to sign up. And it’s free. To subscribe, go to our newsletters page and click on The Station.
This week, we’re looking at factories in China, scooters in San Francisco and touchscreens in cars, among other things.
Please reach out anytime with tips and feedback. Tell us what you love and don’t love so much. Email me at firstname.lastname@example.org to share thoughts, opinions or tips or send a direct message to @kirstenkorosec.
Uber, Lime and Spin each deployed 500 electric scooters in San Francisco as part of the city’s permitting program. This means residents in SF can now choose from Uber-owned JUMP, Lime, Spin or Scoot scooters. Unfortunately for Skip, the company did not receive a permit to continue operating in the city, which means layoffs at the local level are afoot, Skip CEO Sanjay Dastoor said earlier this week.
Meanwhile, former Uber executive Dmitry Shevelenko unveiled Tortoise, autonomous repositioning software for micromobility operators. The idea is to help make it easier for these companies to more strategically deploy their respective vehicles and reposition them when needed.
Let’s close this section with the obligatory funding round. Wheels, a pedal-less electric bike-share startup, raised a $50 million round led by DBL Partners. That brought its total funding to $87 million.
Oh, but wait, TC reporter Romain Dillet reminded us that micromobbin’ happens outside of the U.S. too. Uber also announced this past week that it has integrated its app with French startup Cityscoot, which has a fleet of free-floating moped-style scooters.
This is the latest example of Uber’s plan to become a super mobility app that goes well beyond its own network of ride-hailing vehicles.
— Megan Rose Dickey
We’ve seen a lot of different approaches when it comes to engaging with connected car services: head-up displays on the windshield, small screens perched on the dashboard, interactive voice and, of course, connections and mounts for smartphones.
But how about if your whole car becomes the touchscreen? A startup called Sentons is working on technology that could make that happen. The company uses processors and AI to emit and read ultrasound waves, detecting physical interaction with a surface – such as touch, force or gestures – and letting users create “virtual controls” on the fly that work on these surfaces.
This week, it released SurfaceWave, a software and hardware stack that works on glass, metal and plastic surfaces of smartphones.
CEO Jess Lee says the next iterations are going to be the kinds of materials that are used to make car dashboards and other interior surfaces you find inside the vehicle, including leather, thicker plastic and other materials. The company is already engaging with automotive companies, Lee told TechCrunch.
I can see a lot of possibilities for this in the human-driven vehicles of today. We’ve already seen how Tesla has changed how we think about infotainment systems in cars. And then there’s electric vehicle startup Byton, which plans to bring a vehicle to market with a touchscreen that extends along the entire dashboard.
The real opportunity for Sentons will be with autonomous vehicles, a product that will afford its passengers more leisure time.
— Ingrid Lunden
Earlier this week, Tesla was given the OK to begin producing vehicles at its $2 billion factory in Shanghai. Tesla was added to the Ministry of Industry and Information Technology’s list of approved automotive manufacturers.
Now we’ll watch and wait to see if production starts this month. Expect the topic of China and this factory to come up during Tesla’s earnings call with analysts October 23.
In other China factory news, we hear that electric vehicle startup Byton plans to host a splashy opening ceremony in early November for its new plant. The event will include lots of Chinese officials, company executives and maybe a preview of a near-final production version of its M-Byte vehicle.
Byton’s factory in Nanjing covers some 800,000 square meters (8.6 million square feet) and was funded with a total investment of more than $1.5 billion. Over the summer, the walls and roof went up, equipment was installed and commissioning began in five major workshops: stamping, welding, paint, battery and assembly.
The plant will begin trial production in late 2019.
This all sounds great, but there have been challenges, and the constant requirement for capital is one of them. Byton has delayed the launch of the production version of the M-Byte by two quarters. It’s now looking like commercial production will begin by the end of the second quarter of 2020.
Here are a couple of interesting tidbits for those manufacturing geeks out there:
We hear a lot. But we’re not selfish. Let’s share. A little bird is where we pass along insider tips and what we’re hearing or finding from reliable, informed sources in the industry. This isn’t a place for unfounded gossip.
To get a “little bird” and the rest of the newsletter, please subscribe. Just go to our newsletters page and click on The Station.
I recently spoke to Randol Aikin, the head of systems engineering at self-driving trucks startup Ike Robotics, about the company’s approach, which is based on a methodology developed at MIT called Systems Theoretic Process Analysis. STPA is the foundation for Ike’s product development.
The company also released a wickedly long safety report (it’s halfway down that landing page in the link provided).
The complete interview was included in the emailed newsletter. Yet another reason to subscribe to this free newsletter. Here’s one quote from the interview with Aikin:
We asked the question, what do we have to prove to ourselves and demonstrate in order to be on a public road safely? It’s the same question that we’re going to have to answer for the product as well, which is, what do we need to prove to assure that we’re safe to operate without a human in the cab?
It’s one of the huge unproven hypotheses. Anybody in this space who doesn’t consider that to be a huge technical challenge is ignoring a really thorny and important question.
Our mobility coverage extends to Extra Crunch. Check out my latest article on who will own the future of transportation based on insights from Zoox CEO Aicha Evans and former Michigan Gov. Jennifer Granholm. The idea here is to explore some of the nuances of this loaded question.
Extra Crunch requires a paid subscription and you can sign up here.
Volvo Group has established a new dedicated business group focused on autonomous transportation, with a mandate that covers industry segments like mining, ports and moving goods between logistics hubs of all kinds. The vehicle maker has already been active in putting autonomous technology to work in these industries, with self-driving projects — including at a few quarries and mines, and in the busy port located at Gothenburg, Sweden.
The company sees demand for this kind of autonomous technology use growing, and decided to establish an entire business unit to address it. The newly formed group will be called Volvo Autonomous Solutions, and its official mission is to “accelerate the development, commercialization and sales of autonomous transport solutions,” focused on the kind of transportation “where there is a need to move large volumes of goods and material on pre-defined routes, in repetitive flows.”
Their anticipation of the growth of this sector comes in part from direct customer feedback, the automaker notes. It’s seen a “significant increase in inquiries from customers,” according to a statement from Martin Lundstedt, Volvo Group’s president and CEO.
Officially, Volvo Autonomous Solutions won’t be a formal new business area under its parent company until January 2020, but the company is looking for a new head of the unit already, and it’s clear they see a lot of potential in this burgeoning market.
Unlike autonomous driving for consumer automobiles, this kind of self-driving for fixed-route goods transportation is a nice match to the capabilities of technology as they exist today. These industrial applications eliminate a lot of the chaos and complexity of driving in, say, urban environments and with a lot of other human-driven vehicles on the road, and their routes are predictable and repeatable.