Samsung, which once led the smartphone market in India, slid to third place in the quarter that ended in December, even as the South Korean giant continues to make major bets on one of the few smartphone markets still growing.
According to research firm Counterpoint, Chinese firm Vivo surpassed Samsung to become the second biggest smartphone vendor in India in Q4 2019. Xiaomi, which commands 27% of the market, maintained its top spot in the nation for the 10th consecutive quarter. A Samsung spokesperson in India did not respond to a request for comment.
Vivo’s annual smartphone shipments grew 76% in 2019. The Chinese firm’s aggressive positioning of its budget S series smartphones in the brick-and-mortar market, along with its expansion into e-commerce sales, helped it beat Samsung, said Counterpoint analysts. Vivo’s market share jumped 132% between Q4 2018 and Q4 2019, according to the research firm.
Realme, which spun out of Chinese smartphone maker Oppo, claimed the fifth spot. Oppo assumed the fourth. Realme has taken the Indian market by storm. The two-year-old firm has replicated Xiaomi’s playbook in the country and so far focused on selling aggressively low-cost Android smartphones online.
The report, released late Friday (local time), also states that India, with 158 million smartphone shipments in 2019, overtook the U.S. in annual smartphone shipments for the first time.
India, already the world’s second largest smartphone market by total handset install base, is now also the second largest by annual smartphone shipments.
Tarun Pathak, a senior analyst at Counterpoint, told TechCrunch that about 150 million to 155 million smartphone units were shipped in the U.S. in 2019.
More to follow…
Smart speaker manufacturer Sonos has clarified its stance on old devices that are no longer supported. The company faced criticism after its original announcement. Sonos now says that you’ll be able to create two separate Sonos systems so that your newer devices stay up to date.
If you use a Zone Player, Connect, first-generation Play:5, CR200, Bridge or pre-2015 Connect:Amp, Sonos is still going to drop support for those devices. According to the company, those devices have reached their technical limits when it comes to memory and processing power.
While nothing lasts forever, it’s still a shame that speakers that work perfectly fine are going to get worse over time. For instance, if Spotify and Apple Music update their application programming interface in the future, your devices could stop working with those services altogether.
But the announcement felt even more insulting because the company originally said that your entire ecosystem of Sonos devices would stop receiving updates, so that all your devices would remain on the same firmware version. Even if you had just bought a Sonos One, it would stop receiving updates if there were an old speaker on your network.
“We are working on a way to split your system so that modern products work together and get the latest features, while legacy products work together and remain in their current state,” the company writes.
It’s not ideal, but the company is no longer holding your Sonos system back. Sonos also clarifies that old devices will still receive security updates and bug fixes — but there won’t be any new features.
I still think Sonos should add a computing card slot to its devices. This way, you wouldn’t have to replace speakers altogether. You could get a new computing card with more memory and faster processors and swap your existing card. Modularity is going to be essential if tech companies want to adopt a more environmentally friendly stance.
Lidar sensors are likely to be essential to autonomous vehicles, but if there are none of the latter, how can you make money with the former? Among the industry executives I spoke with, the outlook is optimistic as they unhitch their wagons from the sputtering star of self-driving cars. As it turns out, a few years of manic investment does wonders for those who have the wisdom to apply it properly.
The show floor at CES 2020 was packed with lidar companies exhibiting in larger spaces, seemingly in greater numbers than before. That seemed at odds with reports that 2019 had been a sort of correction year for the industry, so I met with executives and knowledgeable types at several companies to hear their take on the sector’s transformation over the last couple of years.
As context, 2017 was perhaps peak lidar, nearing the end of several years of nearly feverish investment in a variety of companies. It was less a gold rush than a speculative land rush: autonomous vehicles were purportedly right around the corner and each would need a lidar unit… or five. The race to invest in a winner was on, leading to an explosion of companies claiming ascendancy over their rivals.
Unfortunately, as many will recall, autonomous cars seem to be no closer today than they were then, as the true difficulty of the task dawned on those undertaking it.
If robots are to help out in places like hospitals and phone repair shops, they’re going to need a light touch. And what’s lighter than not touching at all? Researchers have created a gripper that uses ultrasonics to suspend an object in midair, potentially making it suitable for the most delicate tasks.
It’s done with an array of tiny speakers that emit sound at very carefully controlled frequencies and volumes. These produce a sort of standing pressure wave that can hold an object up or, if the pressure is coming from multiple directions, hold it in place or move it around.
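The physics here is the textbook standing-wave picture: small objects get trapped at the pressure nodes of the wave, which sit half a wavelength apart. As a rough illustration (the 40 kHz figure is a common ultrasonic-transducer frequency, an assumption here rather than a number from the ETH Zürich work):

```python
# Back-of-the-envelope for acoustic levitation: a standing wave traps small
# objects at its pressure nodes, which are spaced half a wavelength apart.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def node_spacing_mm(frequency_hz: float) -> float:
    """Distance between adjacent trapping points, in millimeters."""
    wavelength_m = SPEED_OF_SOUND / frequency_hz
    return wavelength_m / 2.0 * 1000.0

# At an assumed 40 kHz drive frequency, trapping points sit ~4.3 mm apart,
# which is why this technique suits watch gears and tiny electronic parts.
print(node_spacing_mm(40_000))
```

The half-wavelength spacing also explains why only small, light objects can be held this way: anything much larger than a few millimeters spans multiple nodes.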
This kind of “acoustic levitation,” as it’s called, is not exactly new — we see it used as a trick here and there, but so far there have been no obvious practical applications. Marcel Schuck and his team at ETH Zürich, however, show that such a portable device could easily find a place in processes where tiny objects must be very lightly held.
A small electric component, or a tiny oiled gear or bearing for a watch or micro-robot, for instance, would ideally be held without physical contact, since that contact could impart static or dirt to it. So even when robotic grippers are up to the task, they must be kept clean or isolated. Acoustic manipulation, however, carries significantly less risk of contamination.
Another, more sinister-looking prototype.
The problem is that it isn’t obvious exactly what combination of frequencies and amplitudes is necessary to suspend a given object in the air. So a large part of this work was developing software that can easily be configured to work with a new object, or programmed to move it in a specific way — rotating, flipping, or otherwise moving it at the user’s behest.
A working prototype is complete, but Schuck plans to poll various industries to see whether and how such a device could be useful to them. Watchmaking is of course important in Switzerland, and the parts are both small and sensitive to touch. “Toothed gearwheels, for example, are first coated with lubricant, and then the thickness of this lubricant layer is measured. Even the faintest touch could damage the thin film of lubricant,” he points out in the ETHZ news release.
How would a watchmaker use such a robotic arm? How would a designer of microscopic robots, or a biochemist? The potential is clear, but the specific applications aren’t necessarily obvious. Fortunately, Schuck has a bit of fellowship cash to spend on the question and hopes to spin the project off as a startup next year if his early inquiries bear fruit.
Farming is one of the oldest professions, but today those amber waves of grain (and soy) are a test bed for sophisticated robotic solutions to problems farmers have had for millennia. Learn about the cutting edge (sometimes literally) of agricultural robots at TC Sessions: Robotics+AI on March 3 with the founders of Traptic, Pyka and FarmWise.
You may remember Traptic and its co-founder and CEO Lewis Anderson from Disrupt SF 2019, where the company was a finalist in the Startup Battlefield. Traptic has developed a robotic berry picker that identifies ripe strawberries and plucks them off the plants with a gentle grip. It could be the beginning of a new automated era for the fruit industry, which is decades behind grains and other crops when it comes to machine-based harvesting.
FarmWise has a job that’s equally delicate yet involves rough treatment of the plants — weeding. Its towering machine trundles along rows of crops, using computer vision to locate and remove invasive plants, working 24/7, 365 days a year. CEO Sebastian Boyer will speak to the difficulty of this task and how he plans to evolve the machines to become “doctors” for crops, monitoring health and spontaneously removing pests like aphids.
Pyka’s robot is considerably less earthbound than those: an autonomous, all-electric crop-spraying aircraft — with wings! This is a much different challenge from the more stable farming and spraying drones like those of DroneSeed and SkyX, but the choice gives the craft more power and range, hugely important for today’s vast fields. Co-founder Michael Norcia can speak to that scale and his company’s methods of meeting it.
These three companies and founders are at the very frontier of what’s possible at the intersection of agriculture and technology, so expect a fruitful conversation.
$150 early-bird savings end on February 14! Book your $275 Early-Bird Ticket today and put that extra money in your pocket.
Students, grab your super-discounted $50 tickets right here. You might just meet your future employer/internship opportunity at this event.
Startups, we only have five demo tables left for the event. Book your $2,200 demo table here and get in front of some of today’s leading names in the biz. Each table comes with four tickets to attend the show.
Smart speaker manufacturer Sonos has announced that the company is going to drop support for some of its products. Sonos stopped selling these devices a few years ago. While nothing lasts forever, dropping support is going to have a lot of implications and shows once again that the connected home isn’t as future-proof as expected.
Sonos points out that 92% of the products it has ever sold are still in use today. That means some people are still happily using old Sonos devices even though they are no longer in production.
“However, we’ve now come to a point where some of the oldest products have been stretched to their technical limits in terms of memory and processing power,” the company writes.
If you use a Zone Player, Connect, first-generation Play:5, CR200, Bridge or pre-2015 Connect:Amp, Sonos is basically going to make your Sonos experience worse across the board.
The company is going to stop shipping updates to those devices. If Spotify and Apple Music update their application programming interface in the future, your devices could stop working with those services altogether.
But Sonos has decided that your entire ecosystem of Sonos devices is going to stop receiving updates so that all your devices are on the same firmware version. For instance, if you just bought a Sonos One but you’re still using an old Sonos Play:5, your Sonos One isn’t going to receive updates either.
The company says that you can get a discount if you replace your old device, but it will still cost you some money. It’s also ironic that a company promising a seamless music experience now requires you to swap out speakers altogether.
Sonos should use this opportunity to rethink its product lineup. Planned obsolescence due to end-of-life is a great business model for sure. But it’s time to think about ways to keep your speakers for 10, 20 or even 30 years.
People in the 1980s would buy beautiful speakers and keep them for decades. Sure, they’d have to add a CD player in their system at some point. But modularity is a great feature.
Sonos should add a computing card slot to its devices. As systems on a chip, Wi-Fi and Bluetooth get faster and more efficient, users should be able to swap out the computing card for a new one without replacing the speaker altogether.
That would be a more environmentally friendly process than bricking old devices with the company’s questionable Recycle Mode.
Samsung, which once led India’s smartphone market, is investing $500 million in its India operations to set up a manufacturing plant to produce displays on the outskirts of New Delhi.
The company disclosed the investment and its plan in a filing to the local regulator earlier this month. The South Korean giant said the plant would produce displays for smartphones as well as a wide range of other electronic devices.
In the filing, the company disclosed that it would carve out land for the new plant from its existing factory site in Noida.
In 2018, Samsung opened a factory in Noida that it claimed was the world’s largest mobile manufacturing plant. For that factory, the company had committed about $700 million.
The new factory should help Samsung further increase its capacity to produce smartphone components locally and access a range of tax benefits New Delhi offers.
Samsung is now the second largest smartphone player in India, which is the world’s second largest market with nearly 500 million smartphone users. The company in recent months has also lost market share to Chinese brand Realme, which is poised to overtake Samsung in the quarter that ended in December last year, according to research firms.
TechCrunch has reached out to Samsung for comment.
It seems like every company making lidar has a new and clever approach, but Baraja takes the cake. Its method is not only elegant and powerful, but fundamentally avoids many issues that nag other lidar technologies. But it’ll need more than smart tech to make headway in this complex and evolving industry.
To understand how lidar works in general, consult my handy introduction to the topic. Essentially a laser emitted by a device skims across or otherwise very quickly illuminates the scene, and the time it takes for that laser’s photons to return allows it to quite precisely determine the distance of every spot it points at.
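The underlying arithmetic is simple: the pulse makes a round trip, so the range is half the elapsed time multiplied by the speed of light. A minimal sketch of that general time-of-flight calculation (not specific to Baraja or any vendor):

```python
# Time-of-flight ranging: a lidar pulse travels out to the target and back,
# so the one-way distance is half the round-trip time times the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance_m(round_trip_seconds: float) -> float:
    """One-way distance to the reflecting surface, in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A photon returning after ~667 nanoseconds corresponds to a target ~100 m away,
# which is why lidar timing electronics have to resolve fractions of a nanosecond.
print(tof_distance_m(667e-9))
```

The numbers make the engineering challenge concrete: a 1.5 cm range error corresponds to roughly 100 picoseconds of timing error.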
But to picture how Baraja’s lidar works, you need to picture the cover of Pink Floyd’s “Dark Side of the Moon.”
GIFs kind of choke on rainbows, but you get the idea.
Imagine a flashlight shooting through a prism like that, illuminating the scene in front of it — now imagine you could focus that flashlight by selecting which color came out of the prism, sending more light to the top part of the scene (red and orange) or middle (yellow and green). That’s what Baraja’s lidar does, except naturally it’s a bit more complicated than that.
The company has been developing its tech for years with the backing of Sequoia and Australian VC outfit Blackbird, which led a $32 million round late in 2018 — Baraja only revealed its tech the next year and was exhibiting it at CES, where I met with co-founder and CEO Federico Collarte.
“We’ve stayed in stealth for a long, long time,” he told me. “The people who needed to know already knew about us.”
The idea for the tech came out of the telecommunications industry, where Collarte and co-founder Cibby Pulikkaseril thought of a novel use for a fiber optic laser that could reconfigure itself extremely quickly.
“We thought if we could set the light free, send it through prism-like optics, then we could steer a laser beam without moving parts. The idea seemed too simple — we thought, ‘if it worked, then everybody would be doing it this way,’” he told me, but they quit their jobs and worked on it for a few months with a friends and family round, anyway. “It turns out it does work, and the invention is very novel and hence we’ve been successful in patenting it.”
Rather than send a coherent laser at a single wavelength (1550 nanometers, well into the infrared, is the lidar standard), Baraja uses a set of fixed lenses to refract that beam into a spectrum spread vertically over its field of view. Yet it isn’t one single beam being split but a series of coded pulses, each at a slightly different wavelength that travels ever so slightly differently through the lenses. It returns the same way, the lenses bending it in the opposite direction to return it to its origin for detection.
It’s a bit difficult to grasp this concept, but once one does it’s hard to see it as anything but astonishingly clever. Not just because of the fascinating optics (something I’m partial to, if it isn’t obvious), but because it obviates a number of serious problems other lidars are facing or about to face.
First, there are next to no moving parts whatsoever in the entire Baraja system. Spinning lidars like the popular early devices from Velodyne are by and large being replaced by ones using metamaterials, MEMS and other methods that don’t have bearings or hinges that can wear out.
Baraja’s “head” unit, connected by fiber optic to the brain.
In Baraja’s system, there are two units, a “dumb” head and an “engine.” The head has no moving parts and no electronics; it’s all glass, just a set of lenses. The engine, which can be located nearby or a foot or two away, produces the laser and sends it to the head via a fiber-optic cable (and some kind of proprietary mechanism that rotates slowly enough that it could theoretically work for years continuously). This means it’s not only very robust physically, but its volume can be spread out wherever is convenient in the car’s body. The head itself also can be resized more or less arbitrarily without significantly altering the optical design, Collarte said.
Second, the method of diffracting the beam gives the system considerable leeway in how it covers the scene. Different wavelengths are sent out at different vertical angles; a shorter wavelength goes out toward the top of the scene and a slightly longer one goes a little lower. But the band of 1550 +/- 20 nanometers allows for millions of fractional wavelengths that the system can choose between, giving it the ability to set its own vertical resolution.
It could for instance (these numbers are imaginary) send out a beam every quarter of a nanometer in wavelength, corresponding to a beam going out every quarter of a degree vertically, and by going from the bottom to the top of its frequency range cover the top to the bottom of the scene with equally spaced beams at reasonable intervals.
But why waste a bunch of beams on the sky, say, when you know most of the action is taking place in the middle part of the scene, where the street and roads are? In that case you can send out a few high frequency beams to check up there, then skip down to the middle frequencies, where you can then send out beams with intervals of a thousandth of a nanometer, emerging correspondingly close together to create a denser picture of that central region.
If this is making your brain hurt a little, don’t worry. Just think of Dark Side of the Moon and imagine if you could skip red, orange and purple, and send out more beams in green and blue — and because you’re only using those colors, you can send out more shades of green-blue and deep blue than before.
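The adaptive-resolution idea above can be sketched in a few lines. To be clear, these band edges, angles and step sizes are made up for illustration (as the article itself notes, the real figures aren’t public); the point is only the mechanism of a linear wavelength-to-elevation mapping plus a non-uniform choice of wavelengths:

```python
# Hypothetical numbers throughout: a wavelength within the 1550 ± 20 nm band
# maps linearly to a vertical angle, so choosing which wavelengths to emit
# chooses where the beams land and how densely they are packed.
LAMBDA_MIN, LAMBDA_MAX = 1530.0, 1570.0   # nm, assumed band edges
ANGLE_TOP, ANGLE_BOTTOM = 15.0, -15.0     # degrees, assumed field of view

def angle_for(wavelength_nm: float) -> float:
    """Linear wavelength-to-elevation mapping (shorter wavelength = higher beam)."""
    frac = (wavelength_nm - LAMBDA_MIN) / (LAMBDA_MAX - LAMBDA_MIN)
    return ANGLE_TOP + frac * (ANGLE_BOTTOM - ANGLE_TOP)

def scan_plan(sparse_step_nm: float = 2.0, dense_step_nm: float = 0.1):
    """A few coarse beams for sky and ground, densely packed beams mid-scene."""
    plan = []
    lam = LAMBDA_MIN
    while lam <= LAMBDA_MAX:
        plan.append((lam, angle_for(lam)))
        # Step finely through the middle of the band (the road ahead),
        # coarsely everywhere else.
        in_middle = 1545.0 <= lam <= 1555.0
        lam += dense_step_nm if in_middle else sparse_step_nm
    return plan

beams = scan_plan()
# Most beams land in the middle of the scene, where the street and cars are.
```

Swapping the step sizes per region is all it takes to re-aim the system’s attention, with no mechanical change at all.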
Third, the method of creating the spectrum beam guards against interference from other lidar systems. It is an emerging concern that lidar systems could inadvertently send or reflect beams into one another, producing noise and hindering normal operation. Most companies are attempting to mitigate this by some means or another, but Baraja’s method avoids the possibility altogether.
“The interference problem — they’re living with it. We solved it,” said Collarte.
The spectrum system means that for a beam to interfere with the sensor it would have to be both a perfect frequency match and come in at the precise angle at which that frequency emerges from and returns to the lens. That’s already vanishingly unlikely, but to make it astronomically so, each beam from the Baraja device is not a single pulse but a coded set of pulses that can be individually identified. The company’s core technology and secret sauce is the ability to modulate and pulse the laser millions of times per second, and it puts this to good use here.
Collarte acknowledged that competition is fierce in the lidar space, but not necessarily competition for customers. “They have not solved the autonomy problem,” he points out, “so the volumes are too small. Many are running out of money. So if you don’t differentiate, you die.” And some have.
Instead companies are competing for partners and investors, and must show that their solution is not merely a good idea technically, but that it is a sound investment and reasonable to deploy at volume. Collarte praised his investors, Sequoia and Blackbird, but also said that the company will be announcing significant partnerships soon, both in automotive and beyond.
Xiaomi said today it is spinning off POCO, a sub-smartphone brand it created in 2018, as a standalone company that will now run independently of the Chinese electronics giant and make its own market strategy.
The move comes months after a top POCO executive — Jai Mani, a former Googler — and some other founding and core members left the sub-brand. The company today insisted that POCO F1, the only smartphone to be launched under the POCO brand, remains a “successful” handset. The POCO F1, a $300 smartphone, was launched in 50 markets.
Manu Kumar Jain, VP of Xiaomi, said POCO had grown into its own identity in a short span of time. “POCO F1 is an extremely popular phone across user groups, and remains a top contender in its category even in 2020. We feel the time is right to let POCO operate on its own now, which is why we’re excited to announce that POCO will spin off as an independent brand,” he said in a statement.
A Xiaomi spokesperson confirmed to TechCrunch that POCO is now an independent company, but did not share how it would be structured.
Xiaomi created the POCO brand to launch high-end, premium smartphones that would compete directly with flagship smartphones of OnePlus and Samsung. In an interview with yours truly in 2018, Alvin Tse, the head of POCO, and Mani, said that they were working on a number of smartphones and were also thinking about other gadget categories.
At the time, the company had 300 people working on POCO, and they “shared resources” with the parent firm.
“The hope is that we can open up this new consumer need …. If we can offer them something compelling enough at a price point that they have never imagined before, suddenly a lot of people will show interest in availing the top technologies,” Tse said in that interview.
It is unclear, however, why Xiaomi never launched more smartphones under the POCO brand — despite the claimed success.
In the years since, Xiaomi, which is known to produce low-end and mid-range smartphones, itself launched a number of high-end smartphones, such as the K20 Pro. Indeed, earlier this week, Xiaomi announced it was planning to launch a number of premium smartphones in India, its most important market and where it is the top handset vendor.
“These launches will be across categories which we think will help ‘Mi’ maintain consumer interest in 2020. We also intend to bring the premium smartphones from the Mi line-up, which has recorded a substantial interest since we entered the market,” said Raghu Reddy, head of Categories at Xiaomi India, in a statement.
That sounds like an explanation. As my colleague Rita pointed out last year, Chinese smartphone makers have launched sub-brands in recent years to sell handsets that deviate from their company’s brand image. Xiaomi needed POCO because its Mi and Redmi smartphone brands are known for their mid-range and low-tier smartphones. But when the company itself begins to launch premium smartphones — and gains traction — the sub-brand might not be the best marketing tool.
Tarun Pathak, a senior analyst at research firm Counterpoint, told TechCrunch that the move would allow the Mi brand to flourish in the premium smartphone tier as the company begins to seriously look at 5G adoption.
“POCO can continue to make flagship-class devices, but at lower price points and 4G connectivity. 5G as a strategy requires a premium series which has consistent message across geographies…and Mi makes that cut in a more efficient way than POCO,” he said.
Besides, Xiaomi has bigger things to worry about.
In our recent coverage of Xiaomi’s earnings, we noted that the Chinese electronics giant was struggling to expand its internet services business as it attempts to cut reliance on its gadgets empire. Xiaomi posted Q3 revenue of 53.7 billion yuan ($7.65 billion), up 3.3% from the 51.95 billion yuan ($7.39 billion) it reported in Q2 and a 5.5% rise from Q3 2018.
On top of that, revenue from the smartphone business of Xiaomi, which went public in 2018, stood at 32.3 billion yuan ($4.6 billion) in Q3 last year, down 7.8% year-over-year. The company, which shipped 32.1 million smartphone units during the period, blamed a “downturn” in China’s smartphone market for the decline.
Try as they might, even the most advanced roboticists on Earth struggle to recreate the effortless elegance and efficiency with which birds fly through the air. The “PigeonBot” from Stanford researchers takes a step towards changing that by investigating and demonstrating the unique qualities of feathered flight.
On a superficial level, PigeonBot looks a bit, shall we say, like a school project. But a lot of thought went into this rather haphazard-looking contraption. It turns out the way birds fly is really not very well understood, as the relationship between dynamic wing shape and the positions of individual feathers is super complex.
Mechanical engineering professor David Lentink challenged some of his graduate students to “dissect the biomechanics of the avian wing morphing mechanism and embody these insights in a morphing biohybrid robot that features real flight feathers,” taking as their model the common pigeon — the resilience of which Lentink admires.
As he explains in an interview with the journal Science:
The first Ph.D. student, Amanda Stowers, analyzed the skeletal motion and determined we only needed to emulate the wrist and finger motion in our robot to actuate all 20 primary and 20 secondary flight feathers. The second student, Laura Matloff, uncovered how the feathers moved via a simple linear response to skeletal movement. The robotic insight here is that a bird wing is a gigantic underactuated system in which a bird doesn’t have to constantly actuate each feather individually. Instead, all the feathers follow wrist and finger motion automatically via the elastic ligament that connects the feathers to the skeleton. It’s an ingenious system that greatly simplifies feather position control.
In addition to finding that the individual control of feathers is more automatic than manual, the team found that tiny microstructures on the feathers form a sort of one-way Velcro-type material that keeps them forming a continuous surface rather than a bunch of disconnected ones. These and other findings were published in Science, while the robot itself, devised by “the third student,” Eric Chang, is described in Science Robotics.
Using 40 actual pigeon feathers and a super-light frame, Chang and the team made a simple flying machine that doesn’t derive lift from its feathers — it has a propeller on the front — but uses them to steer and maneuver using the same type of flexion and morphing as the birds themselves do when gliding.
Studying the biology of the wing itself, then observing and adjusting the PigeonBot systems, the team found that the bird (and bot) used its “wrist” when the wing was partly retracted, and “fingers” when extended, to control flight. But it’s done in a highly elegant fashion that minimizes the thought and the mechanisms required.
It’s the kind of thing that could inform improved wing design for aircraft, which currently rely in many ways on principles established more than a century ago. Passenger jets, of course, don’t need to dive or roll on short notice, but drones and other small craft might find the ability extremely useful.
“The underactuated morphing wing principles presented here may inspire more economical and simpler morphing wing designs for aircraft and robots with more degrees of freedom than previously considered,” write the researchers in the Science Robotics paper.
Up next for the team is observation of more bird species to see if these techniques are shared with others. Lentink is working on a tail to match the wings, and separately on a new bio-inspired robot inspired by falcons, which could potentially have legs and claws as well. “I have many ideas,” he admitted.
Xnor.ai, spun off in 2017 from the nonprofit Allen Institute for AI (AI2), has been acquired by Apple for about $200 million. A source close to the company corroborated a report this morning from GeekWire to that effect.
Apple confirmed the reports with its standard statement for this sort of quiet acquisition: “Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans.” (I’ve asked for clarification just in case.)
Xnor.ai began as a process for making machine learning algorithms highly efficient — so efficient that they could run on even the lowest tier of hardware out there, things like embedded electronics in security cameras that use only a modicum of power. Yet using Xnor’s algorithms they could accomplish tasks like object recognition, which in other circumstances might require a powerful processor or connection to the cloud.
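One well-known way to get recognition running on a modicum of power is the binarization trick that the XNOR-Net research (which came out of AI2 and gave the company its name) popularized: quantize weights and activations to ±1 so a dot product collapses into an XNOR plus a bit count. Whether Xnor.ai’s production stack works exactly this way isn’t stated here, so treat this as a sketch of the general technique, not the company’s implementation:

```python
# Binary neural network core idea: with vectors of +1/-1 packed one bit per
# element (1 = +1, 0 = -1), the dot product is "agreements minus
# disagreements", computable with XNOR and a popcount instead of multiplies.

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element +/-1 vectors packed as integers."""
    matches = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # XNOR: 1 where signs agree
    agreements = bin(matches).count("1")           # popcount
    return 2 * agreements - n                      # agreements - disagreements

# (+1, -1, +1, +1) . (+1, +1, +1, -1) = 1 - 1 + 1 - 1 = 0
a = 0b1011  # read MSB-to-LSB: +1, -1, +1, +1
b = 0b1110  # read MSB-to-LSB: +1, +1, +1, -1
print(binary_dot(a, b, 4))
```

Since 64 of these ±1 multiply-accumulates fit in one XNOR and one popcount on a 64-bit word, the energy and memory savings on embedded chips are dramatic, which is the appeal for always-on devices like security cameras.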
CEO Ali Farhadi and his founding team put the company together at AI2 and spun it out just before the organization formally launched its incubator program. It raised $2.7M in early 2017 and $12M in 2018, both rounds led by Seattle’s Madrona Venture Group, and has steadily grown its local operations and areas of business.
The $200M acquisition price is only approximate, the source indicated, but even if the final number were half that, it would be a big return for Madrona and other investors.
The company will likely move to Apple’s Seattle offices; GeekWire, visiting the Xnor.ai offices (in inclement weather, no less), reported that a move was clearly underway. AI2 confirmed that Farhadi is no longer working there, but he will retain his faculty position at the University of Washington.
An acquisition by Apple makes perfect sense when one considers how the company has been directing its efforts toward edge computing. With a chip dedicated to executing machine learning workflows in a variety of situations, Apple clearly intends for its devices to operate independently of the cloud for tasks such as facial recognition, natural language processing and augmented reality. It’s as much for performance as for privacy.
Its camera software especially makes extensive use of machine learning algorithms for both capturing and processing images, a compute-heavy task that could potentially be made much lighter with the inclusion of Xnor’s economizing techniques. The future of photography is code, after all — so the more of it you can execute, and the less time and power it takes to do so, the better.
It could also indicate new forays in the smart home, toward which with HomePod Apple has made some tentative steps. But Xnor’s technology is highly adaptable and as such rather difficult to predict as far as what it enables for such a vast company as Apple.
3D printing isn’t the buzzy, hype-tastic topic it was just a few years ago — at least not with consumers. 3D printing news out of CES last week seemed considerably quieter than years prior; the physical booths for many 3D printing companies I saw took up fractions of the footprints they did just last year. Tapered, it seems, are the dreams of a 3D printer in every home.
In professional production environments, however, 3D printing remains a crucial tool. Companies big and small tap 3D printing to design and test new concepts, creating one-off prototypes in-house at a fraction of the cost and time compared to going back-and-forth with a factory. Sneaker companies are using it to create new types of shoe soles from experimental materials. Dentists are using it to create things like dentures and bridges in-office, in hours rather than days.
One of the companies that has long focused on pushing 3D printing into production is Formlabs, the Massachusetts-based team behind the aptly named Form series of pro-grade desktop 3D printers. The company launched its first product in 2012 after raising nearly $3 million on Kickstarter; by 2018, it was raising millions at a valuation of over a billion dollars.
Digital books may have a few advantages over ordinary ones when it comes to kids remembering their contents, according to a new study. Animations, especially ones keyed to verbal interactions, can significantly improve recall of story details — but they have to be done right.
The research, from psychologist Erik Thiessen at Carnegie Mellon University, evaluated the recall of 30 kids aged 3 to 5 after they were read either an ordinary storybook or one with animations for each page.
When asked afterwards about what they remembered, the kids who had seen the animated book tended to remember 15-20 percent more. The best results were seen when the book animated in response to the child saying or asking something about it (though this had to be done manually by the reading adult) rather than just automatically.
“Children learn best when they are more involved in the learning process,” explained Thiessen in a CMU news post. “Many digital interfaces are poorly suited to children’s learning capacities, but if we can make them better, children can learn better.”
This is not to say that all books for kids should be animated. Traditional books are always going to have their own advantages, and once you get past the picture-book stage these digital innovations don’t help much.
The point, rather, is to show that digital books can be useful and aren’t a pointless addition to a kid’s library. But it’s important that the digital features are created and tuned with an eye to improving learning, and research must be done to determine exactly how that is best accomplished.
Thiessen’s study was published in the journal Developmental Psychology.
Modern agriculture involves fields of mind-boggling size, and spraying them efficiently is a serious operational challenge. Pyka is taking on the largely human-powered spray business with an autonomous winged craft and, crucially, regulatory approval.
Just as we’ve seen with DroneSeed, this type of flying is risky for pilots, who must fly very close to the ground and other obstacles, yet it is also highly susceptible to automation: it involves lots of repetitive flight patterns that must be executed perfectly, over and over.
Pyka’s approach is unlike that of many in the drone industry, which has tended to use multirotor craft for their maneuverability and easy take-off and landing. But those drones can’t carry the weight and volume of pesticides and other chemicals that (unfortunately) need to be deployed at large scales.
The craft Pyka has built is more traditional, resembling a one-seater crop-dusting plane but lacking the cockpit. It’s driven by a trio of propellers, and most of the interior is given over to payload (it can carry about 450 pounds) and batteries. Of course, there is also a sensing suite and an onboard computer to handle the immediate demands of automated flight.
Pyka’s craft can take off and land on a 150-foot stretch of flat land, so you don’t have to worry about setting up a runway and wasting energy getting to the target area. Of course, it’ll eventually need to swap out batteries, which is part of the ground crew’s responsibilities. They’ll also be designing the overall course for the craft, though the actual flight path and moment-to-moment decisions are handled by the flight computer.
Example of a flight path accounting for obstacles without human input
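In very simplified form, the kind of pre-planned coverage route the ground crew designs looks like a serpentine sweep over the field. The function and parameters below are illustrative assumptions on my part, not Pyka’s actual planner:

```python
def serpentine(field_width_m, field_length_m, swath_m):
    """Waypoints for a back-and-forth spray sweep over a rectangular field.

    Passes run along the field's length, spaced one swath width apart,
    alternating direction so the craft never doubles back over sprayed ground.
    """
    waypoints = []
    x = swath_m / 2  # center the first pass half a swath in from the edge
    heading_up = True
    while x < field_width_m:
        y_start, y_end = (0, field_length_m) if heading_up else (field_length_m, 0)
        waypoints += [(x, y_start), (x, y_end)]
        x += swath_m
        heading_up = not heading_up
    return waypoints

# A 20 m x 100 m field with a 10 m swath needs just two passes:
print(serpentine(20, 100, 10))  # [(5.0, 0), (5.0, 100), (15.0, 100), (15.0, 0)]
```

A real planner also has to route around the obstacles shown above and respect turning radii, which is exactly the kind of repetitive-but-exacting work that favors a flight computer over a human pilot.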
All this means the plane, apparently called the Egret, can spray about a hundred acres per hour, about the same as a helicopter. But the autonomous craft provides improved precision (it flies lower) and safety (no human pulling difficult maneuvers every minute or two).
Perhaps more importantly, the feds don’t mind it. Pyka claims to be the only company in the world with a commercially approved large autonomous electric aircraft. Small ones like drones have been approved left and right, but the Egret is approaching the size of a traditional “small aircraft,” like a Piper Cub.
Of course, that’s just the craft — other regulatory hurdles hinder wide deployment: communicating with air traffic management and other craft, certifying the craft in other ways, building a more robust long-range sense-and-avoid system and so on. But Pyka’s Egret has already flown thousands of miles at test farms that pay for the privilege. (Pyka declined to comment on its business model, customers or revenues.)
The company’s founding team — Michael Norcia, Chuma Ogunwole, Kyle Moore and Nathan White — comes from a variety of well-known companies working in adjacent spaces: Cora, Kittyhawk, Joby Aviation, Google X, Waymo and Morgan Stanley (that’s the COO).
The $11 million seed round was led by Prime Movers Lab, with participation from Y Combinator, Greycroft, Data Collective and Bold Capital Partners.
3D printing has proven itself useful in so many industries that it’s no longer necessary to show off, but some people just can’t help themselves. Case in point: this millimeter-tall rendition of Michelangelo’s famous “David” printed with copper using a newly developed technique.
The aptly named “Tiny David” was created by Exaddon, a spin-off of Cytosurge, which itself spun off from the Swiss research university ETH Zurich. It’s only a fraction of a millimeter wide and weighs two micrograms.
It was created using Exaddon’s “CERES” 3D printer, which lays down a stream of ionized liquid copper at rates as low as a few femtoliters per second, forming a rigid structure with features as small as a micrometer across. The Tiny David took about 12 hours to print, though something a little simpler in structure could probably be done much more quickly.
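Those figures hang together: at the density of bulk copper, two micrograms deposited over a 12-hour print works out to a few femtoliters per second. A quick back-of-the-envelope check (assuming solid, non-porous copper):

```python
# Consistency check of the quoted deposition rate, assuming bulk copper
# density (8.96 g/cm^3) and the stated figures: 2 µg of mass, 12-hour print.
mass_g = 2e-6
density_g_per_cm3 = 8.96
volume_cm3 = mass_g / density_g_per_cm3      # ≈ 2.2e-7 cm^3
volume_fL = volume_cm3 * 1e12                # 1 cm^3 = 1e12 femtoliters
rate_fL_per_s = volume_fL / (12 * 60 * 60)
print(round(rate_fL_per_s, 1))               # 5.2 — a few femtoliters per second
```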
As it is, the level of detail is pretty amazing. While obviously it can’t recreate every nuance of Michelangelo’s masterpiece, even small textures like the hair and muscle tone are reproduced quite well. No finishing buff or support struts required.
Of course, we can create much smaller structures at the nanometer level with advanced lithography techniques, but that’s a complex, sensitive process that must be engineered carefully by experts. This printer can take an arbitrary 3D model and spit it out in a few hours, and at room temperature.
The CERES printer.
But the researchers do point out that there is some work involved.
“It is more than just a copy and downsized model of Michelangelo’s David,” said Exaddon’s Giorgio Ercolano in a company blog post. “Our deep understanding of the printing process has led to a new way of processing the 3D computer model of the statue and then converting it into machine code. This object has been sliced from an open-source CAD file and afterwards was sent directly to the printer. This slicing method enables an entirely new way to print designs with the CERES additive micromanufacturing system.”
Much smaller than that doesn’t work, though — Micro-David starts looking like he’s made of Play-Doh snakes. That’s fine, they’ll get there eventually.
The team published the details of their newly refined technique (it was pioneered a few years ago but is much better now) in the journal Micromachines.
DJI is easily the leading brand when it comes to camera drones, but few companies have even attempted a ground-based mobile camera platform. The company may be moving in that direction, though, if this patent for a small off-road vehicle with a stabilized camera is any indication.
The Chinese patent, first noted by DroneDJ, shows a rather serious-looking vehicle platform with chunky tires and a stabilized camera gimbal. As you can see in the image above, the camera mount is protected against shock by springs and pneumatics, which would no doubt react actively to sudden movements.
The image is no simple sketch like those you sometimes see of notional products and “just in case” patents — this looks like a fleshed-out mechanical drawing of a real device. Of course, that doesn’t mean it’s coming to market at all, let alone any time soon. But it does suggest that DJI’s engineers have dedicated real time and effort to making this thing a reality.
Why have a “drone” on the ground when there are perfectly good ones for the air? Battery life, for one. Drones can only be airborne for a short time, even less when they’re carrying decent cameras and lenses. A ground-based drone could operate for far longer — though naturally from a rather lower vantage.
Perhaps more importantly, however, a wheeled drone makes sense in places where an aerial one doesn’t. Do you really want to fly a drone through narrow hallways in security sweeps, or in your own home? And what about areas where you might encounter people? It would be better not to have to land and take off constantly for safety’s sake.
It’s likely that DJI has done its homework and knows there are plenty of niches it could extend into if it diversified its offerings a bit. And as in so many situations where drones have become commonplace, we’ll all think of these robot-powered industries as obvious in retrospect. Consider the winner of our Startup Battlefield at Disrupt Berlin, Scaled Robotics, which does painstaking automated inspections of construction sites.
In fact, DJI already makes a ground-based robotic platform, the RoboMaster S1. It’s more of an educational toy, but it may have served as a test bed for technologies the company hopes to apply elsewhere.
Whether this little vehicle ever sees the light of day or not, it does make one think seriously about the possibility of a wheeled camera platform doing serious work around the home or office.
A year ago, we asked some of the most prominent smart home device makers whether they had given customer data to governments. The results were mixed.
The big three smart home device makers — Amazon, Facebook and Google (which includes Nest) — all disclosed in their transparency reports if and when governments demand customer data. Apple said it didn’t need a report, as the data it collects is anonymized.
As for the rest, none had published their government data-demand figures.
In the year since, the smart home market has grown rapidly, but the remaining device makers have made little to no progress on disclosing their figures. And in some cases, things got worse.
Smart home and other internet-connected devices may be convenient and accessible, but they collect vast amounts of information on you and your home. Smart locks know when someone enters your house, and smart doorbells can capture their face. Smart TVs know which programs you watch and some smart speakers know what you’re interested in. Many smart devices collect data when they’re not in use — and some collect data points you may not even think about, like your wireless network information — and send them back to the manufacturers, ostensibly to make the gadgets — and your home — smarter.
Because the data is stored in the cloud by the device manufacturers, law enforcement and government agencies can demand that those companies turn over that data to solve crimes.
But as the amount of data collection increases, companies are not being transparent about the data demands they receive. All we have are anecdotal reports — and there are plenty: Police obtained Amazon Echo data to help solve a murder; Fitbit turned over data that was used to charge a man with murder; Samsung helped catch a sex predator who watched child abuse imagery; Nest gave up surveillance footage to help jail gang members; and recent reporting on Amazon-owned Ring shows close links between the smart home device maker and law enforcement.
Here’s what we found.
Smart lock and doorbell maker August gave the exact same statement as last year, that it “does not currently have a transparency report and we have never received any National Security Letters or orders for user content or non-content information under the Foreign Intelligence Surveillance Act (FISA).” But August spokesperson Stephanie Ng would not comment on the number of non-national security requests — subpoenas, warrants and court orders — that the company has received, only that it complies with “all laws” when it receives a legal demand.
Roomba maker iRobot said, as it did last year, that it has “not received” any government demands for data. “iRobot does not plan to issue a transparency report at this time,” but it may consider publishing a report “should iRobot receive a government request for customer data.”
Arlo, a former Netgear smart home division that spun out in 2018, did not respond to a request for comment. Netgear, which still has some smart home technology, said it does “not publicly disclose a transparency report.”
Amazon-owned Ring, whose cooperation with law enforcement has drawn ire from lawmakers and faced questions over its ability to protect users’ privacy, said last year it planned to release a transparency report in the future, but did not say when. This time around, Ring spokesperson Yassi Shahmiri would not comment and stopped responding to repeated follow-up emails.
Honeywell spokesperson Megan McGovern would not comment and referred questions to Resideo, the smart home division Honeywell spun out a year ago. Resideo’s Bruce Anderson did not comment.
And just as last year, Samsung, a maker of smart devices and internet-connected televisions and other appliances, also did not respond to a request for comment.
On the whole, the companies’ responses were largely the same as last year.
But smart switch and sensor maker Ecobee, which last year promised to publish a transparency report “at the end of 2018,” did not follow through. When we asked why, Ecobee spokesperson Kristen Johnson did not respond to repeated requests for comment.
Based on the best available data, August, iRobot, Ring and the rest of the smart home device makers have hundreds of millions of users and customers around the world, with the potential to give governments vast troves of data — and users and customers are none the wiser.
Transparency reports may not be perfect, and some are less transparent than others. But if big companies — even after bruising headlines and claims of co-operation with surveillance states — disclose their figures, there’s little excuse for the smaller companies.
This time around, some companies fared better than their rivals. But anyone mindful of their privacy can — and should — expect better.
If you thought the saga of the $7,000 Apple Pro Display XDR couldn’t get any more ridiculous, prepare yourself for the proverbial cherry on top: The company insists that you only use the single special cleaning cloth that comes with the monitor. If you lose it, you’re advised to order another.
Apple, already under fire from longtime users for the ever-increasing price of its products, attracted considerable ire and ridicule when it announced the high-end monitor in June. Of course there are many expensive displays out there — it was more the fact that Apple was selling the display for $5,000, the stand separately for $999, and an optional “nano-texture” coating for an additional grand.
Just wait till you see how much the Mac Pro that goes with it costs.
Technically it’s not actually a “coating” but an extremely small-scale etching of the surface that supposedly produces improved image quality without some of the drawbacks of a full-matte coating. “Typical matte displays have a coating added to their surface that scatters light. However, these coatings lower contrast while producing unwanted haze and sparkle,” the product description reads. Not so with nano-texture.
Unfortunately, the unique nature of the glass necessitates special care when cleaning.
“Use only the dry polishing cloth that comes with your display,” reads the support page How to clean your Apple Pro Display XDR. “Never use any other cloths to clean the nano-texture glass. If you lose the included polishing cloth, you can contact Apple to order a replacement polishing cloth.” (No price is listed, so I’ve asked Apple for more information.)
Obviously if you’re cleaning an expensive screen you don’t want to do it with Windex and wadded-up newspaper. But it’s not clear what differentiates Apple’s cloth from an ordinary microfiber wipe.
Do the nano-scale ridges shred ordinary mortal cloth and get fibers caught in their interstices? Can the nano-texture be damaged by anything of insufficient softness?
Apple seems to be presuming a certain amount of courage on the part of consumers, who must pay a great deal for something that not only provides an uncertain benefit (even Apple admits that the display without the coating is “engineered for extremely low reflectivity”) but seems susceptible to damage from even the lightest mishandling.
No doubt the Pro Display XDR is a beautiful display, and naturally only those who feel it is worth the price will buy one. But no one likes to have to baby their gadgets, and Apple’s devices have also gotten more fragile and less readily repairable. The company’s special cloth may be a small, even silly thing, but it’s part of a large and worrying trend.
Buildings under construction are a maze of half-completed structures, gantries, stacked materials, and busy workers — tracking what’s going on can be a nightmare. Scaled Robotics has designed a robot that can navigate this chaos and produce 3D progress maps in minutes, precise enough to detect that a beam is just a centimeter or two off.
Bottlenecks in construction aren’t limited to manpower and materials. Understanding exactly what’s been done and what needs doing is a critical part of completing a project in good time, but it’s the kind of painstaking work that requires special training and equipment. Or, as Scaled Robotics showed today at TC Disrupt Berlin 2019, specially trained equipment.
The team has created a robot that trundles autonomously around construction sites, using a 360-degree camera and custom lidar system to systematically document its surroundings. An object recognition system allows it to tell the difference between a constructed wall and a piece of sheetrock leaned against it, between a staircase and temporary stairs for electrical work, and so on.
By comparing this to a source CAD model of the building, it can paint a very precise picture of the progress being made. They’ve built a special computer vision model that’s suited to the task of sorting obstructions from the constructions and identifying everything in between.
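Conceptually, the tolerance check is a nearest-neighbor comparison between the scanned point cloud and the CAD model. A toy sketch of the idea (the function name and the 2 cm threshold are illustrative assumptions, not Scaled Robotics’ actual pipeline):

```python
import math

def flag_deviations(scanned_points, model_points, tolerance_m=0.02):
    """Flag scanned points that sit farther from the nearest model point
    than the allowed tolerance (default: 2 cm)."""
    flagged = []
    for point in scanned_points:
        nearest = min(model_points, key=lambda m: math.dist(point, m))
        error = math.dist(point, nearest)
        if error > tolerance_m:
            flagged.append((point, nearest, round(error, 3)))
    return flagged
```

A production system would compare millions of lidar points against mesh surfaces using spatial indexing rather than brute force, but the flag-anything-out-of-tolerance logic is the same idea.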
All this information goes into a software backend where the supervisors can check things like which pieces are in place on which floor, whether they have been placed within the required tolerances, or if there are safety issues like too much detritus on the ground in work areas. But it’s not all about making the suits happy.
“It’s not just about getting management to buy in, you need the guy who’s going to use it every day to buy in. So we’ve made a conscious effort to fit seamlessly into what they do, and they love that aspect of it,” explained co-founder Bharath Sankaran. “You don’t need a computer scientist in the room. Issues get flagged in the morning, and that’s a coffee conversation – here’s the problem, bam, let’s go take a look at it.”
The robot can make its rounds faster than a couple of humans with measuring tapes and clipboards, certainly, but also faster than someone equipped with a stationary laser-ranging device that they carry from room to room. An advantage of simultaneous localization and mapping (SLAM) tech is that it measures from multiple points of view over time, building a highly accurate and rich model of the environment.
The data is assembled automatically but the robot can be either autonomous or manually controlled — in developing it, they’ve brought the weight down from about 70 kilograms to 20, meaning it can be carried easily from floor to floor if necessary (or take the elevator); and simple joystick controls mean anyone can drive it.
A trio of pilot projects concluded this year and resulted in paid pilots lined up for next year, which is of course a promising development.
Interestingly, the team found that construction companies were often making decisions based on outdated information, more or less assuming that everything done in the meantime had been done correctly.
“Right now decisions are being made on data that’s maybe a month old,” said co-founder Stuart Maggs. “We can probably cover 2000 square meters in 40 minutes. One of the first times we took data on a site, they were completely convinced everything they’d done was perfect. We put the data in front of them and they found out there was a structural wall just missing, and it had been missing for 4 weeks.”
The company uses a service-based business model, providing the robot and software on a monthly basis, with prices rising with square footage. That saves the construction company the trouble of actually buying, certifying, and maintaining an unfamiliar new robotic system.
But the founders emphasized that tracking progress is only the first hint of what can be done with this kind of accurate, timely data.
“The big picture version of where this is going is that this is the visual wiki for everything related to your construction site. You just click and you see everything that’s relevant,” said Sankaran. “Then you can provide other ancillary products, like health and safety stuff, where is storage space on site, predicting whether the project is on schedule.”
“At the moment, what you’re seeing is about looking at one moment in time and diagnosing it as quickly as possible,” said Maggs. “But it will also be about tracking that over time: We can find patterns within that construction process. That data feeds that back into their processes, so it goes from a reactive workflow to a proactive one.”
“As the product evolves you start unwrapping, like an onion, the different layers of functionality,” said Sankaran.
The company has come this far on $1 million of seed funding, but is hot on the track of more. Perhaps more importantly, its partnerships with construction giant PERI and Autodesk, which has helped push digital construction tools, may make it a familiar presence at building sites around the world soon.
Today’s devices have been secured against innumerable software attacks, but a new exploit called Plundervolt uses distinctly physical means to compromise a chip’s security. By fiddling with the actual amount of electricity being fed to the chip, an attacker can trick it into giving up its innermost secrets.
It should be noted at the outset that while this is not a flaw on the scale of Meltdown or Spectre, it is a powerful and unique one and may lead to changes in how chips are designed.
There are two important things to know in order to understand how Plundervolt works.
The first is simply that chips these days have very precise and complex rules as to how much power they draw at any given time. They don’t just run at full power 24/7; that would drain your battery and produce a lot of heat. So part of designing an efficient chip is making sure that for a given task, the processor is given exactly the amount of power it needs — no more, no less.
The second is that Intel’s chips, like many others now, have what’s called a secure enclave, a special quarantined area of the chip where important things like cryptographic processes take place. The enclave (here called SGX) is inaccessible to normal processes, so even if the computer is thoroughly hacked, the attacker can’t access the data inside.
The creators of Plundervolt were intrigued by recent work by curious security researchers who had, through reverse engineering, discovered the hidden channels by which Intel chips manage their own power.
Hidden, but not inaccessible, it turns out. If you have control over the operating system, which many attacks exist to provide, you can get at these “Model-Specific Registers,” which control chip voltage, and can tweak them to your heart’s content.
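Community undervolting tools have since documented that interface (MSR 0x150 on recent Intel chips): the voltage offset is encoded as an 11-bit signed value in steps of 1/1024 V. A sketch of the encoding based on that community documentation — illustrative only, since actually writing the register requires root access and can crash or damage a machine:

```python
def undervolt_msr_value(plane, mv):
    """Encode a voltage-offset write for MSR 0x150, per the
    community-reverse-engineered format (not official Intel documentation).

    plane: voltage plane index (e.g. 0 = CPU core, 2 = cache).
    mv: offset in millivolts; negative values undervolt.
    """
    # Offset is an 11-bit two's-complement value in 1/1024 V units, bits 21-31.
    offset = 0xFFE00000 & ((round(mv * 1.024) & 0xFFF) << 21)
    # Bit 63 plus the 0x11 "write voltage offset" command bits, plus the plane.
    return 0x8000001100000000 | (plane << 40) | offset

# e.g. a -50 mV undervolt on the core plane:
print(hex(undervolt_msr_value(0, -50)))  # 0x80000011f9a00000
```

Plundervolt’s insight was that briefly applying an offset like this at exactly the wrong (or right) moment corrupts computations inside SGX in a controllable way.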
Modern processors are so carefully tuned, however, that such a tweak will generally just cause the chip to malfunction. The trick is to tweak it just enough to cause the exact kind of malfunction you expect. And because the entire process takes place within the chip itself, protections against outside influence are ineffective.
The Plundervolt attack does just this, using the hidden registers to very slightly change the voltage going to the chip at the exact moment that the secure enclave is executing an important task. By doing so they can induce predictable faults inside SGX, and by means of these carefully controlled failures cause it and related processes to expose privileged information. It can even be performed remotely, though of course full access to the OS is a prerequisite.
In a way it’s a very primitive attack, essentially giving the chip a whack at the right time to make it spit out something good, like it’s a gumball machine. But of course it’s actually quite sophisticated, since the whack is an electrical manipulation on the scale of millivolts, which needs to be applied at exactly the right microsecond.
The researchers explain that this can be mitigated by Intel, but only through updates at the BIOS and microcode level — the kind of thing that many users will never bother to go through with. Fortunately for important systems there will be a way to verify that the exploit has been patched when establishing a trusted connection with another device.
Intel, for its part, downplayed the seriousness of the attack. “We are aware of publications by various academic researchers that have come up with some interesting names for this class of issues, including ‘VoltJockey’ and ‘Plundervolt,’” it wrote in a blog post acknowledging the existence of the exploit. “We are not aware of any of these issues being used in the wild, but as always, we recommend installing security updates as soon as possible.”
Plundervolt is one of a variety of attacks that have emerged recently taking advantage of the ways that computing hardware has evolved over the last few years. Increased efficiency usually means increased complexity, which means increased surface area for non-traditional attacks like this.
The researchers who discovered and documented Plundervolt hail from the UK’s University of Birmingham, Graz University of Technology in Austria, and KU Leuven in Belgium. They are presenting their paper at IEEE S&P 2020.