I did a bit of a double take when I first saw this announcement. IDrive, an online cloud storage and backup service, is launching a face recognition API today that goes up against the likes of AWS Rekognition and others. That seems like a bit of an odd move for a backup company, but it turns out that IDrive has actually been in the face recognition game for a while. Last year, the company launched IDrive People to help its users find faces in photos they’ve backed up on its service. With its API, IDrive is targeting a very different market, though, and entering into the API business for the first time.
IDrive Face, as the service is called, includes the standard tools for detecting and analyzing multiple faces within a still image that are at the core of every face detection API. For this, the API provides the usual bounding boxes and metadata for all detected faces. There are also face comparison and verification features to identify people by their face, as well as gender, age and emotion detection options. All requests to the API are encrypted, and using it looks to be pretty straightforward.
IDrive promises that its tool’s accuracy and performance are comparable to AWS Rekognition, but at a lower price. The company offers a developer plan for $49.50/month plus $0.0001 per transaction, at up to 75 transactions per minute, with unlimited storage included. There’s also a business plan for $124.50/month, $0.0001 per transaction and up to 500 transactions per minute, as well as custom enterprise plans and free trials for those who want to give the service a try.
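Those published numbers make for a simple back-of-the-envelope bill estimate. A minimal sketch (the function name is ours, and a real invoice may include other charges):

```python
def idrive_face_monthly_cost(base_fee, transactions, per_txn=0.0001):
    """Estimate one month's bill: flat plan fee plus a per-transaction charge."""
    return base_fee + transactions * per_txn

# 1 million transactions on the developer plan ($49.50/month base):
dev = idrive_face_monthly_cost(49.50, 1_000_000)   # 49.50 + 100.00 = 149.50
# The same volume on the business plan ($124.50/month base):
biz = idrive_face_monthly_cost(124.50, 1_000_000)  # 124.50 + 100.00 = 224.50
```

At that volume, the plans differ only by the flat fee; the business tier mainly buys the higher 500 transactions-per-minute ceiling.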
AWS’ pricing is, as usual, a bit more complicated. While there’s no monthly fee, most serious users will end up paying more for Rekognition than for IDrive Face. That said, Rekognition offers a number of features, including text, object, scene and celebrity recognition, that aren’t available in the competing product, which focuses solely on faces.
The day of reckoning for the ‘flexible office space as a startup’ is coming, and it’s coming up fast. WeWork’s IPO filing has fired the starting gun on the race to become the game-changer in both the future of property and real estate and the future of how we live and work. As Churchill once said, ‘we shape our buildings and afterwards our buildings shape us’.
Until recently, WeWork was the ruler by which other flexible space startups were measured, but questions are now being asked about whether it deserves its valuation. The profitable IWG plc, formerly Regus, has been providing serviced offices, virtual offices, meeting rooms and the rest for years, and yet WeWork is valued at roughly ten times more.
That’s not to mention how it exposes landlords to $40 billion in rent commitments, something which a few of them are starting to feel rather nervous about.
Some analysts even say WeWork’s IPO is a ‘masterpiece of obfuscation’.
Netflix is still the No. 1 subscription streaming service in the U.S., according to a new report from eMarketer, but rivals including Amazon Prime Video and Hulu are starting to cut into its market share. The analyst firm forecasts that 182.5 million U.S. consumers will subscribe to over-the-top streaming services this year, or 53.3% of the population. Netflix is still the top choice here, with 158.8 million viewers in 2019, and it is continuing to grow. However, its share of the U.S. over-the-top subscription market will decline even as its total subscriber numbers climb, the report said.
Though Netflix announced in Q2 the first drop in U.S. users in nearly a decade, eMarketer says Netflix will see strong growth throughout the rest of the year — up 7.6% over 2018. This will be driven by the new seasons of popular series like Orange is the New Black and Stranger Things, as well as Academy Award-winning director Martin Scorsese’s new movie, The Irishman.
But Netflix is no longer the only option for streaming video these days. Back in 2014, it had 90% of the market. In 2019, its share will have shrunk to 87%.
This decline in market share is attributed to the rise of rival services, like Hulu and Prime Video.
Hulu, for example, is estimated to reach 75.8 million U.S. viewers this year, or 41.5% of subscription service users. Its viewership will grow 17.5% in 2019, though that’s a slowdown from 2018’s big growth spurt of 49.6%.
Prime Video, meanwhile, will remain the second-largest subscription over-the-top video provider in the U.S. in 2019, the report says, with 96.5 million viewers. That’s up 8.8% over last year.
The firm estimates Prime Video will reach a third of the U.S. population by 2021.
Netflix’s market share dominance is about to face new threats as well, most notably from the Disney-Hulu-ESPN bundle, which is priced the same as a standard U.S. Netflix subscription.
“Netflix has faced years of strong competition for viewers, coming from streaming video platforms, pay-TV services, and even video games,” said eMarketer forecasting analyst Eric Haggstrom. “While there is no true ‘Netflix killer’ on the market, Disney’s upcoming bundle with Disney+, Hulu and ESPN+ probably comes closest. Netflix’s answer has been to stick to what has made it the market leader—outspending the competition on both licensed and original content, offering customers a competitive price,” he added.
Disney isn’t the only one with a new streaming service in the works, though.
Apple TV+ is poised to launch later this year, and is said to be spending $6 billion on content — far more than the $1 billion that had been reported. It’s also said to be considering a competitive $9.99 per month price point.
NBCUniversal and AT&T’s WarnerMedia are also poised to enter the market, the latter with HBO Max. And following the CBS-Viacom merger, the combined company is looking to beef up its own platforms, CBS All Access and the ad-supported Pluto TV, with the newly acquired content.
“The market for streaming video has been driven by an explosion in high-end original content and low subscription costs relative to traditional pay TV,” Haggstrom noted. “A strong consumer appetite for new shows and movies has driven viewer growth for services like Netflix, Hulu and Amazon Prime Video, as well as the broader market.”
Amazon, which has invested over $6 billion in India’s growing internet market, just invested a little more as it moves to expand its presence in the brick-and-mortar space that drives much of the country’s retail sales. The U.S. e-commerce giant is acquiring a 49% stake in Future Coupons, a group entity owned by India’s second largest retail chain, Future Retail, the latter said in a regulatory filing Thursday evening (local time).
An Amazon spokesperson told TechCrunch the investment would “enhance Amazon’s existing portfolio of investments in the payments landscape in India.” The spokesperson added, “Amazon has agreed to invest in Future Coupons Limited, which is engaged in developing innovative value-added payment products and solutions such as corporate gift cards, loyalty cards, and reward cards primarily for corporate and institutional customers.”
Financial terms of the deal were not disclosed.
“Pursuant to these agreements, Amazon has agreed to make an equity investment in Future Coupons Limited for acquiring a 49% stake comprising both, voting and non-voting shares. As part of the agreement, Amazon has been granted a call option,” Future Retail said in a filing (PDF) to the local stock exchange.
As part of the agreement, Amazon has the option to “acquire all or part of the promoters’ shareholding in Future Retail Limited” between the third and tenth year in “certain circumstances, subject to applicable law.” Future Coupons owned about 7.3% stake in Future Retail as of early this year, according to past regulatory filings.
“The Promoters have also agreed to certain share transfer restrictions on their shares in the Company for same tenure, including restrictions to not transfer shares to specified persons, a right of first offer in favor of Amazon, all of which are subject to mutually agreed exceptions (such as liquidity allowances and affiliate transfers). The transaction contemplated above is subject to obtaining applicable regulatory approvals and customary closing conditions,” Future Retail added.
Amazon has reportedly been looking to acquire as much as a 10% stake in Future Retail, which operates over 2,000 stores, including “Big Bazaar” retail stores, across 400 cities in India. Bloomberg reported earlier this month that Future Retail was seeking a valuation of about $281 million for selling stakes in the firm.
Future Retail runs a wide swath of retail brands in India, ranging from groceries and perishables to electronics and fashion apparel. On Thursday, Amazon India announced it was launching Amazon Fresh in parts of Bangalore. Amazon Fresh is currently offering 5,000 kinds of items including fresh fruits, vegetables and meat, as well as some items from home and personal product categories.
According to earlier media reports, Amazon is also in talks to acquire more than a 25% stake in Reliance Retail, the largest retail chain in the country. Brick-and-mortar stores continue to drive much of the sales in the country. Amazon also owns a stake in Indian supermarket chain More, and in department store chain Shopper’s Stop.
“One thing to keep in mind is that e-commerce is a very, very small portion of total retail consumption in India, probably less than 3%,” said Amit Agarwal, manager of Amazon India, in an interview this week.
Earlier this week, Amazon opened an office in Hyderabad to house over 15,000 employees, thereby making it the company’s biggest campus globally.
India has become the latest battleground for American giants Amazon and Walmart. Amazon India competes with Flipkart, which currently leads the e-commerce market in the nation. Last year, Walmart acquired a majority stake in Flipkart for $16 billion. Like Amazon, Flipkart has also made it no secret that it wants to expand into grocery and other categories.
Both Amazon India and Flipkart took a hit earlier this year after the government in New Delhi enforced regulatory changes to the way e-commerce companies conduct business in the country. The changes were largely structured to help local companies.
Amazon India’s Agarwal urged the government to relax the regulatory pushes. “There is so much opportunity to just let e-commerce thrive versus trying to define every single guard rail under which it should operate. I feel e-commerce can actually accelerate India’s economy in a big way, if it’s just allowed to thrive,” he told Reuters.
AmazonFresh, one of two main grocery delivery services Amazon today operates, is expanding to new markets, the retailer announced this morning. The service will now be available to Prime members in Houston, Minneapolis, and Phoenix. Notably, this list includes a test market for Walmart’s new grocery subscription service, Delivery Unlimited; Target’s corporate headquarters; and an early test market for Walmart’s online grocery business, respectively.
Members in these cities will have access to tens of thousands of grocery items, including fresh fruits and produce, meat, seafood, and other everyday essentials, all of which can be delivered for free in two hours. Free delivery requires a $35 minimum order, or a $9.99 delivery fee will apply if the order totals less than $35.
Meanwhile, a faster, 1-hour delivery option is also available for an additional $7.99 fee.
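The published fee structure is easy to express as a quick sketch (the function and parameter names are ours; Amazon’s actual checkout logic may differ):

```python
def fresh_delivery_fee(order_total, one_hour=False):
    """Delivery fee per the published AmazonFresh terms:
    free two-hour delivery on orders of $35 or more, $9.99 otherwise,
    plus a $7.99 surcharge for the faster one-hour option."""
    fee = 0.0 if order_total >= 35 else 9.99
    if one_hour:
        fee += 7.99
    return fee

fresh_delivery_fee(40.0)                 # 0.0 (free two-hour delivery)
fresh_delivery_fee(20.0)                 # 9.99 (order under the $35 minimum)
fresh_delivery_fee(40.0, one_hour=True)  # 7.99 (one-hour surcharge only)
```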
With the launch, AmazonFresh is available in Las Vegas, Atlanta, Baltimore, Boston, Chicago, Dallas, Denver, Los Angeles, Miami, New York, Philadelphia, San Diego, San Francisco, Seattle, and Washington, D.C.
“We’re thrilled to introduce AmazonFresh to Prime members in Houston, Minneapolis and Phoenix,” said Stephenie Landry, Vice President of AmazonFresh and Prime Now, in a statement. “Prime members tell us they want their stuff even faster. We’re happy to deliver on that ask and can’t wait for customers in Houston, Minneapolis and Phoenix to take advantage of one- and two-hour delivery from AmazonFresh,” she added.
Amazon’s strategy with online grocery is a bit mixed. Today, Prime members can opt for deliveries through Prime Now, which delivers from Whole Foods markets as well as Amazon fulfillment centers and, in some areas, from local grocers. Prime Now is covered in the cost of an Amazon Prime subscription, while AmazonFresh requires an additional $14.99 per month fee.
It’s not clear why someone would choose AmazonFresh over Prime Now — if both were available — given the cost. The only reason may be that AmazonFresh offers a better selection in some markets. But consumers aren’t only choosing between these two options. They can also shop from Walmart’s online grocery, Instacart, Shipt, and others.
Amazon recently pushed back against an industry report that claimed AmazonFresh was struggling. The retailer argued that it’s still investing in the service, expanding it to new markets, and pointed out that it never exited entire markets — it only pulled back in some zip codes. That said, AmazonFresh has grown far slower than Prime Now, with availability in 18 markets as of this news, versus Prime Now’s nearly 100.
In addition to the convenience of shopping online or in the app, AmazonFresh also works with Alexa. Customers can say things like “Alexa, order milk from Fresh,” and Alexa will add a choice for milk to their shopping cart.
Once considered the most boring of topics, enterprise software is now getting infused with such energy that it is arguably the hottest space in tech.
It’s been a long time coming. And it is the developers, software engineers and veteran technologists with deep experience building at-scale technologies who are energizing enterprise software. They have learned to build resilient and secure applications with open-source components through continuous delivery practices that align technical requirements with customer needs. And now they are developing application architectures and tools for at-scale development and management for enterprises to make the same transformation.
“Enterprise had become a dirty word, but there’s a resurgence going on and Enterprise doesn’t just mean big and slow anymore,” said JD Trask, co-founder of Raygun enterprise monitoring software. “I view the modern enterprise as one that expects their software to be as good as consumer software. Fast. Easy to use. Delivers value.”
The shift to scale out computing and the rise of the container ecosystem, driven largely by startups, is disrupting the entire stack, notes Andrew Randall, vice president of business development at Kinvolk.
In advance of TechCrunch’s first enterprise-focused event, TC Sessions: Enterprise, The New Stack examined the commonalities between the numerous enterprise-focused companies who sponsor us. Their experiences help illustrate the forces at play behind the creation of the modern enterprise tech stack. In every case, the founders and CTOs recognize the need for speed and agility, with the ultimate goal of producing software that’s uniquely in line with customer needs.
We’ll explore these topics in more depth at The New Stack pancake breakfast and podcast recording at TC Sessions: Enterprise. Starting at 7:45 a.m. on Sept. 5, we’ll be serving breakfast and hosting a panel discussion on “The People and Technology You Need to Build a Modern Enterprise,” with Sid Sijbrandij, founder and CEO, GitLab, and Frederic Lardinois, enterprise writer and editor, TechCrunch, among others. Questions from the audience are encouraged and rewarded, with a raffle prize awarded at the end.
Traditional virtual machine infrastructure was originally designed to help manage server sprawl for systems-of-record software — not to scale out across a fabric of distributed nodes. The disruptors transforming the historical technology stack view the application, not the hardware, as the main focus of attention. Companies in The New Stack’s sponsor network provide examples of the shift toward software that they aim to inspire in their enterprise customers. Portworx provides persistent state for containers; NS1 offers a DNS platform that orchestrates the delivery of internet and enterprise applications; Lightbend combines the scalability and resilience of microservices architecture with the real-time value of streaming data.
“Application development and delivery have changed. Organizations across all industry verticals are looking to leverage new technologies, vendors and topologies in search of better performance, reliability and time to market,” said Kris Beevers, CEO of NS1. “For many, this means embracing the benefits of agile development in multicloud environments or building edge networks to drive maximum velocity.”
Enterprise software startups are delivering that value, while they embody the practices that help them deliver it.
Speed matters, but only if the end result aligns with customer needs. Faster time to market is often cited as the main driver behind digital transformation in the enterprise. But speed must also be matched by agility and the ability to adapt to customer needs. That means embracing continuous delivery, which Martin Fowler describes as the process that allows for the ability to put software into production at any time, with the workflows and the pipeline to support it.
Continuous delivery (CD) makes it possible to develop software that can adapt quickly, meet customer demands and provide a level of satisfaction with benefits that enhance the value of the business and the overall brand. CD has become a major category in cloud-native technologies, with companies such as CircleCI, CloudBees, Harness and Semaphore all finding their own ways to approach the problems enterprises face as they often struggle with the shift.
“The best-equipped enterprises are those [that] realize that the speed and quality of their software output are integral to their bottom line,” Rob Zuber, CTO of CircleCI, said.
Speed is also in large part why monitoring and observability have held their value and continue to be part of the larger dimension of at-scale application development, delivery and management. Better data collection and analysis, assisted by machine learning and artificial intelligence, allow companies to quickly troubleshoot and respond to customer needs with reduced downtime and tight DevOps feedback loops. Companies in our sponsor network that fit in this space include Raygun for error detection; Humio, which provides observability capabilities; InfluxData with its time-series data platform for monitoring; Epsagon, the monitoring platform for serverless architectures; and Tricentis for software testing.
“Customer focus has always been a priority, but the ability to deliver an exceptional experience will now make or break a ‘modern enterprise,’” said Wolfgang Platz, founder of Tricentis, which makes automated software testing tools. “It’s absolutely essential that you’re highly responsive to the user base, constantly engaging with them to add greater value. This close and constant collaboration has always been central to longevity, but now it’s a matter of survival.”
DevOps is a bit overplayed, but it still is the mainstay workflow for cloud-native technologies and critical to achieving engineering speed and agility in a decoupled, cloud-native architecture. However, DevOps is also undergoing its own transformation, buoyed by the increasing automation and transparency allowed through the rise of declarative infrastructure, microservices and serverless technologies. This is cloud-native DevOps. Not a tool or a new methodology, but an evolution of the longstanding practices that further align developers and operations teams — but now also expanding to include security teams (DevSecOps), business teams (BizDevOps) and networking (NetDevOps).
“We are in this constant feedback loop with our customers where, while helping them in their digital transformation journey, we learn a lot and we apply these learnings for our own digital transformation journey,” Francois Dechery, chief strategy officer and co-founder of CloudBees, said. “It includes finding the right balance between developer freedom and risk management. It requires the creation of what we call a continuous everything culture.”
Leveraging open-source components is also core in achieving speed for engineering. Open-source use allows engineering teams to focus on building code that creates or supports the core business value. Startups in this space include Tidelift and open-source security companies such as Capsule8. Organizations in our sponsor portfolio that play roles in the development of at-scale technologies include The Linux Foundation, the Cloud Native Computing Foundation and the Cloud Foundry Foundation.
“Modern enterprises … think critically about what they should be building themselves and what they should be sourcing from somewhere else,” said Chip Childers, CTO of Cloud Foundry Foundation. “Talented engineers are one of the most valuable assets a company can apply to being competitive, and ensuring they have the freedom to focus on differentiation is super important.”
Enterprises need great engineering talent, and they need to give those engineers the ability to build secure and reliable systems at scale, along with the trust and direct access to hardware that can serve as a differentiator.
The bleeding edge can bleed too much for the liking of enterprise customers, said James Ford, an analyst and consultant.
“It’s tempting to live by mantras like ‘wow the customer,’ ‘never do what customers want (instead build innovative solutions that solve their need),’ ‘reduce to the max,’ … and many more,” said Bernd Greifeneder, CTO and co-founder of Dynatrace. “But at the end of the day, the point is that technology is here to help with smart answers … so it’s important to marry technical expertise with enterprise customer need, and vice versa.”
How the enterprise adopts new ways of working will affect how startups ultimately fare. The container hype has cooled a bit and technologists have more solid viewpoints about how to build out architecture.
One notable trend to watch: the role of cloud services through projects such as Firecracker. AWS Lambda is built on Firecracker, the open-source virtualization technology originally developed at Amazon Web Services. Firecracker serves as a way to get the speed and density that come with containers, along with the hardware isolation and security capabilities that virtualization offers. Startups such as Weaveworks have developed a platform on Firecracker, and OpenStack’s Kata Containers also use it.
“Firecracker makes it easier for the enterprise to have secure code,” Ford said, as it reduces the attack surface. “With its minimal footprint, the user has control. It means less features that are misconfigured, which is a major security vulnerability.”
Enterprise startups are hot. How they succeed will determine how well they may provide a uniqueness in the face of the ever-consuming cloud services and at-scale startups that inevitably launch their own services. The answer may be in the middle with purpose-built architectures that use open-source components such as Firecracker to provide the capabilities of containers and the hardware isolation that comes with virtualization.
Hope to see you at TC Sessions: Enterprise. Get there early. We’ll be serving pancakes to start the day. As we like to say, “Come have a short stack with The New Stack!”
Bose’s portable speaker offerings have tended toward the cheaper end of the spectrum — bringing colorful competition for companies like JBL. With the dryly named Portable Home Speaker, however, the company looks to split the difference between portable and premium. And it’s certainly priced for the latter.
The $349 speaker looks to be something of a high-end take on the dearly departed Amazon Tap. It’s pretty small for the price, with a large handle up top so it can be moved from room to room.
Bose continues to take the diplomatic approach, using built-in mics for both Google Assistant and Amazon Alexa. There’s also AirPlay 2 and Spotify Connect functionality built in, covering pretty much all of its bases outside of Bixby — that means, sadly, that it might not be able to talk to your fridge.
There are a handful of physical buttons up top, as well, including the ever-important mic-off. The device has an IPX4 water rating, which means it will handle some splashing or light rain, but don’t dunk the thing. It’s also pretty clear from the press materials that the speaker’s not designed to live outdoors, though the occasional picnic table should be fine.
The Portable Home Speaker arrives in stores on September 19. It’s already got plenty of competition, of course, and Sonos is set to add to the list with a Bluetooth speaker of its own rumored to be in the works.
Hey. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.
Last week, I talked about how Netflix might have some rough times ahead as Disney barrels towards it.
There is plenty to be said about the potential of smart glasses. I write about them at length for TechCrunch and I’ve talked to a lot of founders doing cool stuff. That being said, I don’t have any idea what Snap is doing with the introduction of a third generation of its Spectacles video sunglasses.
While the first-gen Spectacles were a marketing smash hit, their sales proved to be a major failure for the company, which bet big and seemingly walked away with a landfill’s worth of the glasses.
Snap’s latest version of Spectacles was announced in Vogue this week. The glasses are much more expensive at $380, and their main feature is a pair of cameras that capture depth, enabling cute little 3D boomerangs. On one hand, it’s nice to see the company showing perseverance in a tough market; on the other, it’s kind of funny to see it push the same rock up the hill again.
Snap is having an awesome 2019 after a laughably bad 2018; the stock has recovered from record lows and is trading in its IPO price wheelhouse. The company seems ripe for something new and exciting, not beautiful yet iterative.
The $150 Spectacles 2 are still for sale, though they look quite a bit dated at this point. Spectacles 3 seem to be geared entirely toward women, and I’m sure Snap made that call after seeing the active users of previous generations. But given the write-down the company took on the first generation, something tells me that its continued experimentation here is borne out of some stubbornness from Spiegel and the higher-ups, who want the Snap brand to live in a high-fashion world and want to be at the forefront of an AR industry that seems to have already moved on to different things.
On to the rest of the week’s news.
Here are a few big news items from big companies, with green links to all the sweet, sweet added context:
How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:
Adam Neumann (WeWork) at TechCrunch Disrupt NY 2017
Our premium subscription service had another week of interesting deep dives. My colleague Danny Crichton wrote about the “tech” conundrum that is WeWork and the questions that are still unanswered after the company filed documents this week to go public.
…How is margin changing at its older locations? How is margin changing as it opens up in places like India, with very different costs and revenues? How do those margins change over time as a property matures? WeWork spills serious amounts of ink saying that these numbers do get better … without seemingly being willing to actually offer up the numbers themselves…
Here are some of our other top reads this week for premium subscribers. This week, we published a major deep dive into the world’s next music unicorn and we dug deep into marketplace startups.
Sign up for more newsletters in your inbox (including this one) here.
Amazon customers say they are receiving emails containing invoices and order updates meant for other customers, TechCrunch has learned.
Jake Williams, founder of cybersecurity firm Rendition Infosec, raised the alarm after he received an email from Amazon addressed to another customer, containing their name, postal address and order details.
Williams said he ordered something months ago which recently became available for shipping. He checked the email headers to make sure it was a genuine message.
“I think they legitimately intended to email me a notification that my item was shipping early,” he said. “I just think they screwed something up in the system and sent the updates to the wrong people.”
He said the apparent security lapse was worrying because order emails sent to the wrong place are a “serious breach of trust” that can reveal private information about a customer’s life, such as sexual orientation, proclivities or other personal details.
Several other Amazon customers also said they received emails seemingly meant for other people.
“I made an order yesterday afternoon and received her email last night,” another customer who tweeted about the mishap told TechCrunch. “Luckily I’m not a malicious person but that’s a huge security issue,” she said.
Another customer tweeted out about receiving an email meant for someone else. He said he spoke to Amazon customer service who said they will investigate additional security issues.
“Hope you didn’t send my sensitive account info to someone else,” he added.
And one other customer posted a tweet thread saying she spoke to a supervisor who gave a “nonchalant” response. She said the supervisor told her the issue happens frequently.
A spokesperson for Amazon did not respond to a request for comment when we asked how many customers were affected and if the company plans on informing customers of the breach. If we hear back, we’ll update.
It’s the second security lapse in a year. In November the company emailed customers saying a “technical error” had exposed an unknown number of their email addresses. When asked about specifics, the notoriously secretive company declined to comment further.
Postmates has officially received the green light from the city of San Francisco to begin testing its Serve wheeled delivery robot on city streets, as first reported by the SF Chronicle and confirmed with Postmates by TechCrunch. The on-demand delivery company told us last week that it expected the permit to be issued shortly after a conditional approval, and that’s exactly what happened on Wednesday this week.
The permit doesn’t cover the entire city – just a designated area of a number of blocks in and around Potrero Hill and the Inner Mission – but it will allow Postmates to begin testing up to three autonomous delivery robots at once, at speeds of up to 3 mph. Deliveries can only take place between 8 AM and 6:30 PM on weekdays, and a human has to be on hand within 30 feet of the vehicles at all times while they’re operating. Still, it’s a start, and a green light from a city regulatory environment that got off to a somewhat rocky beginning with some less collaborative early pilots from other companies.
Autonomous delivery bot company Marble also has a permit application pending with the city’s Public Works department, and will look to test its own four-wheeled, sensor-equipped rolling delivery bots within the city soon should it be granted similar testing approval.
Postmates first revealed Serve last December, taking a more anthropomorphic approach to the vehicle’s overall design. Like many short-distance delivery robots of its ilk, it includes a lockable cargo container and screen-based user interface for eventual autonomous deliveries to customers. The competitive field for autonomous rolling delivery bots is growing continuously, with companies like Starship Technologies, Amazon and many more throwing their hats in the ring.
Every time we binge on Netflix or install a new internet-connected doorbell to our home, we’re adding to a tidal wave of data. In just 10 years, bandwidth consumption has increased 100 fold, and it will only grow as we layer on the demands of artificial intelligence, virtual reality, robotics and self-driving cars. According to Intel, a single robo car will generate 4 terabytes of data in 90 minutes of driving. That’s thousands of times the amount of data a single person uses chatting, watching videos and engaging in other internet pastimes over a similar period.
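Intel’s figure is easy to sanity-check with a little arithmetic — 4 terabytes generated over 90 minutes of driving works out to roughly 740 MB of data every second:

```python
# Back-of-envelope check on Intel's robo-car figure:
# 4 TB generated over 90 minutes of driving, expressed in bytes per second.
TB = 10**12
seconds = 90 * 60
data_rate = 4 * TB / seconds  # ~7.4e8 bytes per second
print(f"{data_rate / 10**6:.0f} MB/s")  # → 741 MB/s
```

That sustained rate is itself a hint at why purpose-built silicon, rather than general-purpose CPUs, keeps coming up in this discussion.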
Tech companies have responded by building massive data centers full of servers. But growth in data consumption is outpacing even the most ambitious infrastructure build outs. The bottom line: We’re not going to meet the increasing demand for data processing by relying on the same technology that got us here.
The key to data processing is, of course, semiconductors, the transistor-filled chips that power today’s computing industry. For the last several decades, engineers have been able to squeeze more and more transistors onto smaller and smaller silicon wafers — an Intel chip today squeezes more than 1 billion transistors onto a fingernail-sized piece of silicon.
This trend is commonly known as Moore’s Law, for the Intel co-founder Gordon Moore and his famous 1965 observation that the number of transistors on a chip doubles every year (later revised to every two years), thereby doubling the speed and capability of computers.
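Moore’s observation reduces to a simple doubling formula. As a back-of-envelope illustration (taking Intel’s 4004 from 1971, with roughly 2,300 transistors, as the starting point):

```python
# Moore's Law as a simple formula: transistor counts double every two years.
# Starting point: Intel's 4004 (1971), with roughly 2,300 transistors.
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Projected transistor count under a strict doubling schedule."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# A strict two-year doubling crosses the billion-transistor mark in the
# late 2000s, roughly when shipping chips actually did.
print(f"{transistors(2009):.2e}")  # → 1.21e+09
```

The numbers here are illustrative of the doubling schedule, not a model of any particular product line.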
This exponential growth of power on ever-smaller chips has reliably driven our technology for the past 50 years or so. But Moore’s Law is coming to an end, due to an even more immutable law: material physics. It simply isn’t possible to squeeze more transistors onto the tiny silicon wafers that make up today’s processors.
Compounding matters, the general-purpose chip architecture in wide use today, known as x86, which has brought us to this point, isn’t optimized for computing applications that are now becoming popular.
That means we need a new computing architecture. Or, more likely, multiple new computer architectures. In fact, I predict that over the next few years we will see a flowering of new silicon architectures and designs that are built and optimized for specialized functions, including data intensity, the performance needs of artificial intelligence and machine learning and the low-power needs of so-called edge computing devices.
We’re already seeing the roots of these newly specialized architectures on several fronts. These include Graphic Processing Units from Nvidia, Field Programmable Gate Arrays from Xilinx and Altera (acquired by Intel), smart network interface cards from Mellanox (acquired by Nvidia) and a new category of programmable processor called a Data Processing Unit (DPU) from Fungible, a startup Mayfield invested in. DPUs are purpose-built to run all data-intensive workloads (networking, security, storage), and Fungible combines them with a full-stack platform for cloud data centers that works alongside the old workhorse CPU.
These and other purpose-designed silicon will become the engines for one or more workload-specific applications — everything from security to smart doorbells to driverless cars to data centers. And there will be new players in the market to drive these innovations and adoptions. In fact, over the next five years, I believe we’ll see entirely new semiconductor leaders emerge as these services grow and their performance becomes more critical.
Let’s start with the computing powerhouses of our increasingly connected age: data centers.
More and more, storage and computing are being done at the edge, meaning closer to where our devices need them. Think of the facial recognition software in our doorbells, or the in-cloud gaming that’s rendered on our VR goggles. Edge computing allows these and other processes to happen within 10 milliseconds or less, which makes them work for end users.
With the current arithmetic computations of the x86 CPU architecture, deploying data services at scale can be a challenge. Driverless cars need massive, data-center-level agility and speed. You don’t want a car buffering when a pedestrian is in the crosswalk. As our workload infrastructure — and the needs of things like driverless cars — becomes ever more data-centric (storing, retrieving and moving large data sets across machines), it requires a new kind of microprocessor.
Another area that requires new processing architectures is artificial intelligence, both in training AI and running inference (the process AI uses to infer things about data, like a smart doorbell recognizing the difference between an in-law and an intruder). Graphic Processing Units (GPUs), which were originally developed to handle gaming, have proven faster and more efficient at AI training and inference than traditional CPUs.
But in order to process AI workloads (both training and inference), for image classification, object detection, facial recognition and driverless cars, we will need specialized AI processors. The math needed to run these algorithms requires vector processing and floating-point computations at dramatically higher performance than general purpose CPUs provide.
Several startups are working on AI-specific chips, including SambaNova, Graphcore and Habana Labs. These companies have built new AI-specific chips for machine intelligence. They lower the cost of accelerating AI applications and dramatically increase performance. Conveniently, they also provide a software platform for use with their hardware. Of course, the big AI players like Google (with its custom Tensor Processing Unit chips) and Amazon (which has created an AI chip for its Echo smart speaker) are also creating their own architectures.
Finally, we have our proliferation of connected gadgets, also known as the Internet of Things (IoT). Many of our personal and home tools (such as thermostats, smoke detectors, toothbrushes and toasters) operate on ultra-low power.
The ARM processor, which is a family of CPUs, will be tasked with these roles. That’s because gadgets do not require computing complexity or a lot of power. The ARM architecture is perfectly designed for them. It’s made to handle a smaller number of computing instructions, can operate at higher speeds (churning through many millions of instructions per second) and do so at a fraction of the power required for performing complex instructions. I even predict that ARM-based server microprocessors will finally become a reality in cloud data centers.
So with all the new work being done in silicon, we seem to be finally getting back to our original roots. I commend the entrepreneurs who are putting the silicon back into Silicon Valley. And I predict they will create new semiconductor giants.
US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.
Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.
The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.
Exactly who the witnesses in front of the grand committee will be has yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.
Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.
“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”
“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”
The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.
Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.
A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.
Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand ins.
Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before it the next time they set foot in the country.
Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.
While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.
In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world. As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.
“The Internet has also become concentrated, less open, and increasingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”
Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.
Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook-owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.
Amazon will make it easier and more affordable for its third-party sellers to donate their unwanted excess inventory and returns, rather than having the items sent back or destroyed. The company on Wednesday announced the launch of a new program, Fulfillment by Amazon (FBA) Donations, which will distribute excess and returned products to charitable organizations.
The program’s launch follows a series of news reports earlier this year, which found that Amazon warehouses routinely trashed millions of unsold items. One smaller facility in France was even found to have sent 293,000 items to a local dump during a nine-month period. A French TV documentary also claimed that Amazon destroyed more than 3 million products last year.
The documentary had secretly filmed Amazon workers loading brand-new toys, kitchen equipment and flat-screen TVs for transport to the dump, a report said.
While it’s an unfortunately common retail practice to destroy excess or unwanted inventory or returned items — particularly in luxury apparel — at Amazon’s scale, the issue is compounded. In addition, the items being destroyed could make a real impact on people’s lives.
Amazon says it will begin donating products from sellers starting in September in the U.S. and U.K. with the help of charity partners. In the U.S., it’s working with Good360, an organization that partners with retailers and consumer goods companies to source and distribute highly needed products through a network of diverse nonprofits. In the U.K., Amazon is working with Newlife, Salvation Army and Barnardo’s.
Sellers told CNBC the new program makes it cheaper to donate items than to have them disposed of or returned, services for which Amazon charges 15 cents and 50 cents per item, respectively. The program will also be the new default for sellers, though they can choose to opt out, if desired.
“We know getting products into the hands of those who need them transforms lives and strengthens local communities,” said Alice Shobe, director, Amazon in the Community, in a statement about the program’s launch. “We are delighted to extend this program to sellers who use our fulfillment services.”
The company also told CNBC that it’s working to bring the number of destroyed items to zero, and said the “vast majority” of returns were resold to other customers or liquidators, returned to suppliers, or donated to charities, depending on their condition.
A section in the policy on how the company uses personal data now reads (emphasis ours):
Our processing of personal data for these purposes includes both automated and manual (human) methods of processing. Our automated methods often are related to and supported by our manual methods. For example, our automated methods include artificial intelligence (AI), which we think of as a set of technologies that enable computers to perceive, learn, reason, and assist in decision-making to solve problems in ways that are similar to what people do. To build, train, and improve the accuracy of our automated methods of processing (including AI), we manually review some of the predictions and inferences produced by the automated methods against the underlying data from which the predictions and inferences were made. For example, we manually review short snippets of a small sampling of voice data we have taken steps to de-identify to improve our speech services, such as recognition and translation.
Multiple tech giants’ use of human workers to review users’ audio across a number of products involving AI has grabbed headlines in recent weeks after journalists exposed a practice that had not been clearly conveyed to users in terms and conditions — despite European privacy law requiring clarity about how people’s data is used.
Such workers are typically employed to improve the performance of AI systems by verifying translations and speech in different accents. But, again, this human review component within AI systems has generally been buried rather than transparently disclosed.
Earlier this month a German privacy watchdog told Google it intended to use EU privacy law to order it to halt human reviews of audio captured by its Google Assistant AI in Europe — after press had obtained leaked audio snippets and was able to re-identify some of the people in the recordings.
On learning of the regulator’s planned intervention Google suspended reviews.
Apple also announced it was suspending human reviews of Siri snippets globally, again after a newspaper reported that its contractors could access audio and routinely heard sensitive stuff.
Facebook also said it was pausing human reviews of a speech-to-text AI feature offered in its Messenger app — again after concerns had been raised by journalists.
So far Apple, Google and Facebook have suspended or partially suspended human reviews in response to media disclosures and/or regulatory attention.
Meanwhile, the lead privacy regulator for all three, Ireland’s DPC, has started asking questions.
Microsoft told Motherboard it is not suspending human reviews at this stage.
Users of Microsoft’s voice assistant can delete recordings — but such deletions require action from the user and would need to be repeated on a rolling basis for as long as the product remains in use. So it’s not the same as having a full, blanket opt-out.
We’ve asked Microsoft whether it intends to offer Skype or Cortana users an opt out of their recordings being reviewed by humans.
The company told Motherboard it will “continue to examine further steps we might be able to take”.
U.S. stock markets plummeted today as recession fears continue to grow.
Yesterday’s good news about a reprieve on tariffs for U.S. consumer imports was undone by increasing concerns over economic indicators pointing to a potential global recession coming within the next year.
The Dow Jones Industrial Average dropped more than 800 points on Wednesday — its largest decline of the year — while the S&P 500 fell by 85 points and the tech-heavy Nasdaq dropped 240 points.
The downturn in the markets came a day after the Dow closed up 373 points after the U.S. Trade Representative announced a delay in many of the import taxes the Trump administration planned to impose on Chinese goods.
In the U.S., the concerns centered on news that the yield on 10-year U.S. Treasury notes had dipped below the yield on two-year notes. That inversion is an indicator that investors think a country’s short-term economic prospects are worse than its long-term outlook, which is why yields on short-term investments are higher.
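The signal itself is just a comparison of two yields. Reduced to its essence (the figures below are illustrative, not the actual closing yields from that day):

```python
# The inversion signal described above: the curve is "inverted" when the
# short-dated yield exceeds the long-dated one.
def is_inverted(two_year_yield: float, ten_year_yield: float) -> bool:
    return two_year_yield > ten_year_yield

# Illustrative yields, in percent (not the actual closing figures):
print(is_inverted(two_year_yield=1.60, ten_year_yield=1.58))  # → True
```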
China’s industrial and retail sectors both slowed significantly in July. Industrial production, including manufacturing, mining and utilities, grew by 4.8% in July (a steep decline from 6.3% growth in June). Meanwhile, retail sales in the country slowed to 7.6%, down from 9.8% in June.
Germany also posted declines over the summer months, indicating that its economy had contracted by 0.1% in the three months leading to June.
Globally, the protracted trade war between the U.S. and China is weighing on economies — as are concerns about what a hard Brexit would mean for the economies of the European Union.
The stocks of Alphabet, Amazon, Apple, Facebook, Microsoft, Netflix and Salesforce were all off by somewhere between 2.5% and 4.5% in today’s trading.
How safe are your secrets? If you used Amazon’s Elastic Block Storage, you might want to check your settings.
New research just presented at the Def Con security conference reveals how companies, startups, and governments are inadvertently leaking their own files from the cloud.
You may have heard of exposed S3 buckets — those Amazon-hosted storage servers packed with customer data that are often misconfigured and inadvertently set to “public” for anyone to access. But you may not have heard about exposed EBS volumes, which pose as much of a risk, if not a greater one.
These elastic block storage (EBS) volumes are the “keys to the kingdom,” said Ben Morris, a senior security analyst at cybersecurity firm Bishop Fox, in a call with TechCrunch ahead of his Def Con talk. EBS volumes store all the data for cloud applications. “They have the secret keys to your applications and they have database access to your customers’ information,” he said.
“When you get rid of the hard disk for your computer, you know, you usually shred it or wipe it completely,” he said. “But these public EBS volumes are just left for anyone to take and start poking at.”
He said that all too often cloud admins don’t choose the correct configuration settings, leaving EBS volumes inadvertently public and unencrypted. “That means anyone on the internet can download your hard disk and boot it up, attach it to a machine they control, and then start rifling through the disk to look for any kind of secrets,” he said.
One of Morris’ Def Con slides noting the types of compromised data found using his research, often known as the “Wall of Sheep.” (Image: Ben Morris/Bishop Fox; supplied)
Morris built a tool using Amazon’s own internal volume search feature to query and scrape publicly exposed EBS volumes, then attach it, make a copy and list the contents of the volume on his system.
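Morris’ own code isn’t public yet, but the idea can be sketched. Conceptually it’s a two-step process: ask AWS for snapshots that are restorable by anyone, then sift the returned metadata for exposed disks. The helper name and the filtering criterion below are illustrative, not Morris’ actual tooling:

```python
# Rough sketch (not Morris' actual tool) of enumerating publicly
# restorable EBS snapshots. With boto3, the live query would be roughly:
#
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   snaps = ec2.describe_snapshots(RestorableByUserIds=["all"])["Snapshots"]
#
# The filter below operates on the metadata shape that call returns,
# flagging unencrypted snapshots as the most immediately exposed.

def find_exposed(snapshots):
    """Return the IDs of unencrypted, publicly restorable snapshots."""
    return [
        s["SnapshotId"]
        for s in snapshots
        if not s.get("Encrypted", False)
    ]

# Illustrative metadata, mimicking describe_snapshots output:
sample = [
    {"SnapshotId": "snap-0a1b2c3d", "Encrypted": False},
    {"SnapshotId": "snap-4e5f6a7b", "Encrypted": True},
]
print(find_exposed(sample))  # → ['snap-0a1b2c3d']
```

From there, an attacker (or researcher) would create a volume from a flagged snapshot, attach it to an instance they control, and browse the filesystem — exactly the copy-and-rifle workflow Morris describes.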
“If you expose the disk for even just a couple of minutes, our system will pick it up and make a copy of it,” he said.
It took him two months, and just a few hundred dollars in Amazon cloud resources, to build up a database of exposed volumes. Once he validates each volume, he deletes the data.
Morris found dozens of volumes exposed publicly in one region alone, he said, containing application keys, critical user or administrative credentials, source code and more. The exposures implicated several major companies, including healthcare providers and tech companies.
He also found VPN configurations, which he said could allow him to tunnel into a corporate network. Morris said he did not use any credentials or sensitive data, as doing so would be unlawful.
Among the most damaging exposures, Morris said, was a volume belonging to a government contractor, which he did not name, that provided data storage services to federal agencies. “On their website, they brag about holding this data,” he said, referring to everything from collected intelligence drawn from messages sent to and from the so-called Islamic State terror group to data on border crossings.
“Those are the kind of things I would definitely not want to see exposed to the public internet,” he said.
He estimates the figure could be as many as 1,250 exposures across all Amazon cloud regions.
Morris plans to release his proof-of-concept code in the coming weeks.
“I’m giving companies a couple of weeks to go through their own disks and make sure that they don’t have any accidental exposures,” he said.
Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about its use of manual human reviews of Alexa AI voice assistant recordings.
A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.
We’ve reached out to Amazon for comment.
Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.
However, trigger-word-activated voice AIs have been shown to be prone to accidental activation, and a device may be in use in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries.
In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.
These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes, such as trying to improve the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.
Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.
The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.
Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media which was able to identify some of the people in the clips.
A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.
Google responded by suspending human reviews across Europe. Meanwhile, its lead data watchdog in Europe, the Irish DPC, told us it’s “examining” the issue.
Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.
The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.
In the case of Amazon, scrutiny from European watchdogs looks to be fast dialling up.
At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.
In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:
We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.
FedEx is ending a partnership with Amazon to supply the e-commerce company with ground delivery shipping after its current contract ends this month, the company confirmed to Bloomberg. This is the second contract with Amazon that FedEx has allowed to end without renewal, following a similar decision in June that covered only Express air shipments.
The new contract termination is more significant than the earlier one, in that it means FedEx will not be providing any last-mile delivery service for Amazon, the largest online retailer, in addition to its less sizeable Express air freight. FedEx previously said that Amazon actually makes up less than 1.3% of the shipper’s total revenue, as measured over the year that ended on December 31, 2018.
Amazon is expanding its own shipping capabilities considerably, adding more aircraft to its fleet, and deploying ground-based wheeled delivery robots for last-mile package transportation. The e-commerce giant also recently began its own Delivery Service Partner program to fund and support delivery startup businesses that can help address its need for logistics. It has increasingly relied on its own contracted last-mile delivery services in recent years, and also allocates more of this business to both UPS and USPS than to FedEx, even outside its other offerings.
FedEx did explicitly point out that its Express contract ending had no impact on other aspects of its relationship with Amazon at the time, noting that its international and “other” business units (including ground) weren’t affected. The company also says it’s looking to capitalize on the demand for e-commerce outside of Amazon, and building its network intentionally to “serve thousands of retailers in the e-commerce space.”
Waterloo, Canada-based hardware startup North is a rare bird in the tech sector: it began life in 2012 as Thalmic Labs, an entirely different kind of hardware startup, and launched a major pivot and re-brand in 2018.
The shift included a new name and an entirely new product focus. It launched its Focals smart glasses last year, and earlier in 2019 sold the tech behind its original product, a gesture control armband called Myo, to CTRL-labs.
This kind of system-shocking directional change can cause whiplash at even far less ambitious software startups, but when I spoke to co-founder and CEO Stephen Lake about the change and the company’s new focus, he spoke of the about-face more as a natural evolution long in the making than a late-stage shift.
“It goes way back when we started Thalmic in 2012,” Lake said. “Actually, we were working on our Myo product, which was an input for heads-up displays, VR headsets, etc. We realized back then, when we were pairing it up with the early versions of [Google] Glass and a whole variety of other displays and smart glasses, that the glasses were so far from being the consumer product that we actually wanted to wear and use. And we said, ‘We think directionally this is going to exist, we think there’s this future where we can bring technology with us into the world end up being less distracted, more present, but still get those benefits we get from computing today.’ Instead of the future of staring at screens, or being cut off in like Ready Player One world in the future, actually bringing technology and make it a seamless part of our world.”
Basically, Lake positions the problem as a kind of classic ‘cart before the horse’ dilemma: How could its interface device for a future class of devices achieve meaningful purchase if that class of devices was off to a slower start than anticipated? A less ambitious startup might’ve refocused on innovating accessories for an established device market, but Lake says his company instead took aim at pioneering an entirely new class of consumer device.
Amazon’s Scout six-wheeled, sidewalk driving delivery robots have begun doing deliveries in Southern California, to customers in the Irvine area. Amazon announced this first California deployment of Scout bots in a blog post, noting that in its experience to date, the company has had plenty of opportunity to experience a range of weather conditions in its first deployments in the Pacific Northwest in Seattle – so weather-wise at least, the little blue bot should have a smoother time in sunny CA.
There are only a “small number” of the robots currently deployed, so even if you’re an Irvine resident, don’t necessarily expect to get a glimpse of one just yet. But they will be making their way to customer homes “during daylight hours,” Monday to Friday, per Amazon. They’ll be sent out at random for orders placed by customers through Amazon as usual, regardless of what delivery option you select.
While the robots can drive themselves around, which is the whole point of the project to begin with, for the time being they’ll be accompanied by an ‘Amazon Scout Ambassador.’ These Amazon staff are part diplomat, part research associate for the project, answering questions from people in the neighborhood and also taking note of their reactions. Robots aren’t yet actually interacting with people much on a daily basis, especially out in the world, so a key part of rolling them out commercially is studying how people interact with them, and thinking about how those interactions might be altered or improved.
A lot of thought went into the initial Scout design, both in terms of making sure it’s able to survive the many miles it traverses during a day, and in coming up with a design that looks and feels at once approachable but also somewhat bland, so as to quickly evolve from novelty to standard neighborhood background scenery.