Earlier today, Google announced that it would be redesigning the redesign of its search results, in response to withering criticism from politicians, consumers, and the press over the way search results were made to look like ads.
Google makes money when users of its search service click on ads. It doesn’t make money when people click on an unpaid search result. Making ads look like search results makes Google more money.
It’s also a pretty evil (or at least unethical) business decision by a company whose mantra was “Don’t be evil” (although it gave that up in 2018).
Users began noticing the changes to search results last week and at least one user flagged the changes earlier this week.
There's something strange about the recent design change to google search results, favicons and extra header text: they all look like ads, which is perhaps the point? pic.twitter.com/TlIvegRct1
— Craig Mod (@craigmod) January 21, 2020
Google responded with a bit of doublespeak from its corporate account about how the redesign was intended to achieve the opposite effect of what it was actually doing.
“Last year, our search results on mobile gained a new look. That’s now rolling out to desktop results this week, presenting site domain names and brand icons prominently, along with a bolded ‘Ad’ label for ads,” the company wrote.
Virginia’s Senator Mark Warner took a break from impeachment hearings to talk to the Washington Post about just how bad the new search redesign was.
“We’ve seen multiple instances over the last few years where Google has made paid advertisements ever more indistinguishable from organic search results,” Warner told the Post. “This is yet another example of a platform exploiting its bottleneck power for commercial gain, to the detriment of both consumers and also small businesses.”
Google’s changes to its search results happened despite the fact that the company is already being investigated by every state in the country for antitrust violations.
For Google, the rationale is simple. The company’s advertising revenues aren’t growing the way they used to, and the company is looking at a slowdown in its core business. To try and juice the numbers, dark patterns present an attractive way forward.
Indeed, Google’s using the same tricks that it once battled to become the premier search service in the U.S. When the company first launched its search service, ads were clearly demarcated and separated from actual search results returned by Google’s algorithm. Over time, the separation between what was an ad and what wasn’t became increasingly blurred.
“Search results were near-instant and they were just a page of links and summaries – perfection with nothing to add or take away,” user experience expert Harry Brignull (and founder of the watchdog website darkpatterns.org) said of the original Google search results in an interview with TechCrunch.
“The back-propagation algorithm they introduced had never been used to index the web before, and it instantly left the competition in the dust. It was proof that engineers could disrupt the rules of the web without needing any suit-wearing executives. Strip out all the crap. Do one thing and do it well.”
“As Google’s ambitions changed, the tinted box started to fade. It’s completely gone now,” Brignull added.
The company acknowledged that its latest experiment might have gone too far in its latest statement and noted that it will “experiment further” on how it displays results.
Here’s our full statement on why we’re going to experiment further. Our early tests of the design for desktop were positive. But we appreciate the feedback, the trust people place in Google, and we’re dedicated to improving the experience. pic.twitter.com/gy9PwcLqHj
— Google SearchLiaison (@searchliaison) January 24, 2020
Huawei may have just found itself an ally in the most unexpected of places. According to a new report out of The Wall Street Journal, both the Defense and Treasury Departments are pushing back on a Commerce Department-led ban on sales from the embattled Chinese hardware giant.
That move, in turn, has reportedly led Commerce Department officials to withdraw a proposal set to make it even more difficult for U.S.-based companies to work with Huawei.
Defense Secretary Mark Esper struck a fittingly pragmatic tone while speaking with the paper, noting, “We have to be conscious of sustaining those [technology] companies’ supply chains and those innovators. That’s the balance we have to strike.”
Huawei, already under fire for allegations of flouting sanctions with other countries, has become a centerpiece of a simmering trade war between the Trump White House and China. The smartphone maker has been barred from selling 5G networking equipment due to concerns over its close ties to the Chinese government.
Last year, meanwhile, the government barred Huawei from utilizing software and components from U.S.-based companies, including Google. Huawei is also expected to be a key talking point in upcoming White House discussions, as officials weigh actions against the repercussions they’ll ultimately have for U.S. partners.
The Commerce Department has yet to offer any official announcement related to the report.
Fintech companies are fundamentally changing how the financial services ecosystem operates, giving consumers powerful tools to help with savings, budgeting, investing, insurance, electronic payments and many other offerings. This industry is growing rapidly, filling gaps where traditional banks and financial institutions have failed to meet customer needs.
Yet progress has been uneven. Notably, consumer fintech adoption in the United States lags well behind much of Europe, where forward-thinking regulation has sparked an outpouring of innovation in digital banking services — as well as in the backend infrastructure on which those products are built and operated.
That might seem counterintuitive, as regulation is often blamed for stifling innovation. Instead, European regulators have focused on reducing barriers to fintech growth rather than protecting the status quo. For example, the U.K.’s Open Banking regulation requires the country’s nine big high-street banks to share customer data with authorized fintech providers.
The EU’s PSD2 (Payment Services Directive 2) obliges banks to create application programming interfaces (APIs) and related tools that let customers share data with third parties. This creates standards that level the playing field and nurture fintech innovation. And the U.K.’s Financial Conduct Authority supports new fintech entrants by running a “sandbox” for software testing that helps speed new products into service.
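To make the API obligation concrete, here is a minimal sketch of what a third-party account-information request looks like under the U.K. Open Banking standard. The endpoint path and header names follow the published Read/Write API conventions, but the base URL and token are hypothetical placeholders; real calls also require a customer consent flow and mutual TLS, which are omitted here.

```python
# Sketch: assembling an Open Banking account-information request.
# Base URL and token are placeholders; consent flow and mTLS omitted.

def build_accounts_request(base_url: str, access_token: str) -> dict:
    """Assemble the URL and headers for a GET /accounts call."""
    return {
        "method": "GET",
        "url": f"{base_url}/open-banking/v3.1/aisp/accounts",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            # Unique ID the bank echoes back, used for request tracing.
            "x-fapi-interaction-id": "c770aef3-6784-41f7-8e0e-ff5f97bf5cae",
        },
    }

req = build_accounts_request("https://api.examplebank.co.uk", "TOKEN")
print(req["url"])
```

The point of the standard is that every participating bank exposes the same endpoint shape, so a fintech can integrate once and reach all nine mandated banks.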
Regulation, implemented effectively as it has been in Europe, can be a net positive for consumers. Regulation is also inevitable — so if fintech entrepreneurs engage early and often with regulators, they can help ensure the rules that emerge support innovation and ultimately benefit consumers.
Google today announced that Dataset Search, a service that lets you search for close to 25 million different publicly available datasets, is now out of beta. Dataset Search first launched in September 2018.
Researchers can use these datasets, which range from pretty small ones that tell you how many cats there were in the Netherlands from 2010 to 2018 to large annotated audio and image sets, to check their hypotheses or train and test their machine learning models. The tool currently indexes about 6 million tables.
With this release, Dataset Search is getting a mobile version, and Google is adding a few new features. The first is a filter that lets you choose which type of dataset you want to see (tables, images, text, etc.), making it easier to find the data you’re looking for. In addition, the company has added more information about the datasets and the organizations that publish them.
A lot of the data in the search index comes from government agencies. In total, Google says, there are about 2 million U.S. government datasets in the index right now. But you’ll also regularly see datasets from Google’s own Kaggle, as well as from a number of other public and private organizations that make data publicly available.
As Google notes, anybody who owns an interesting dataset can make it available to be indexed by using a standard schema.org markup to describe the data in more detail.
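As a sketch of what that markup involves: a page describing a dataset embeds a JSON-LD block using schema.org’s `Dataset` type. The field names below follow that vocabulary, but the dataset described (a nod to the cat-census example above) and its URLs are purely illustrative.

```python
import json

# Hypothetical schema.org "Dataset" markup of the kind Dataset Search
# indexes. Field names follow the schema.org Dataset type; the dataset
# and URLs here are illustrative placeholders.
dataset_markup = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Cats in the Netherlands, 2010-2018",
    "description": "Annual estimates of the domestic cat population.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/cats.csv",  # placeholder URL
    },
}

# On a real page this JSON would sit inside a
# <script type="application/ld+json"> tag for crawlers to pick up.
jsonld = json.dumps(dataset_markup, indent=2)
print(jsonld)
```

Once the markup is crawled, the dataset becomes discoverable through Dataset Search without the owner hosting it anywhere in particular.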
Waymo said Thursday it will begin mapping and eventually testing its autonomous long-haul trucks in Texas and parts of New Mexico, the latest sign that the Alphabet company is expanding beyond its core focus of launching a robotaxi business.
Waymo said in a tweet posted early Thursday it had picked these areas because they are “interesting and promising commercial routes.” Waymo also said it would “explore how the Waymo Driver” — the company’s branded self-driving system — could be used to “create new transportation solutions.”
Waymo plans to mostly focus on interstates because Texas has a particularly high freight volume, the company said. The program will begin with mapping conducted by Waymo’s Chrysler Pacifica minivans.
The mapping and eventual testing will occur on highways around Dallas, Houston and El Paso. In New Mexico, Waymo will focus on the southernmost part of the state.
Interstate 10 will be a critical stretch of highway in both states — and one that is already a testbed for TuSimple, a self-driving trucking startup that has operations in Tucson and San Diego. TuSimple tests and carries freight along the Tucson to Phoenix corridor on I-10. The company also tests on I-10 in New Mexico and Texas.
This week, we’ll start driving our Chrysler Pacificas and long-haul trucks in Texas and New Mexico. These are interesting and promising commercial routes, and we’ll be using our vehicles to explore how the Waymo Driver might be able to create new transportation solutions. pic.twitter.com/uDqKDrGR9b
— Waymo (@Waymo) January 23, 2020
Waymo, which is best known for its pursuit of a robotaxi service, integrated its self-driving system into Class 8 trucks and began testing them in Arizona in August 2017. The company stopped testing its trucks on Arizona roads sometime later that year. The company brought back its truck testing to Arizona in May 2019.
Those early Arizona tests were aimed at gathering initial information about driving trucks in the region, while the new round of truck testing in Arizona marks a more advanced stage in the program’s development, Waymo said at the time.
Waymo has been testing its self-driving trucks in a handful of locations in the U.S., including Arizona, the San Francisco area and Atlanta. In 2018, the company announced plans to use its self-driving trucks to deliver freight bound for Google’s data centers in Atlanta.
Time is supposed to make technology better. The idea is simple: With more time, humans make newer, better technology and our lives improve. Except for when the opposite happens.
Google is a good example of this. I’ve been harping on the matter for a while now. Google mobile search, in case you haven’t used it lately, is bad. It often returns bloated garbage that looks like a cross between new Yahoo and original Bing.
Here’s how it butchered a search query for “Metallica” this morning:
Remember when that interface was simpler, and easier to use, and didn’t try to do literally every possible thing for every possible user at once?
It’s not just Google’s mobile search interface that makes me want to claw my eyes out and learn how to talk to trees. Everyone now knows that Mountain View has effectively given up on trying to distinguish ads from organic results (Does the company view them as interchangeable? Probably?). TechCrunch’s Natasha Lomas covered the company’s recent search result design changes today, calling them “user-hostile,” going on to summarize the choices as its “latest dark pattern.”
Google, once fanatical about super-clean, fast results, is now trying to help you way too much on mobile and fool you on Chrome.
I’d also throw TweetDeck into the mix. It’s garbage slow and lags and sucks RAM. Twitter has effectively decided that its power users are idiots who don’t deserve good code. Oh, and Twitter is deprecating some cool analytics features it used to give out to users about their followers.
Chrome and TweetDeck are joined by apps like Slack that are also slowing down over time. It appears that as every developer writes code on a computer with 64,000 gigs of RAM, they presume that they can waste everyone else’s. God forbid if you have the piddling 16 gigs of RAM that my work machine has. Your computer is going to lag and often crash. Great work, everyone!
Also, fuck mobile apps. I have two phones now because that’s how 2020 works and I have more apps than I know what to do with, not to mention two different password managers, Okta and more. I’m so kitted out I can’t breathe. I have so many tools available to me I mostly just want to put them all down. Leave me alone! Or only show me the thing I need — not everything at once!
Anyhoo video games are still pretty good as long as you avoid most Battle Royale titles, micropayments, and EA. Kinda.
NASA has finalized the payloads for its first cargo deliveries scheduled to be carried by commercial lunar landers, vehicles created by companies the agency selected to take part in its Commercial Lunar Payload Services (CLPS) program. In total, there are 16 payloads — a mix of science and technology experiments — that will be carried by landers built by Astrobotic and Intuitive Machines. Both of these landers are scheduled to launch next year, carrying their cargo to the Moon’s surface and helping prepare the way for NASA’s mission to return humans to the Moon by 2024.
Astrobotic’s Peregrine is set to launch aboard a rocket provided by the United Launch Alliance (ULA), while Intuitive Machines’ Nova-C lander will make its own lunar trip aboard a SpaceX Falcon 9 rocket. Both landers will carry two of the payloads on the list, including a Laser Retro-Reflector Array (LRA) that is basically a mirror-based precision location device for situating the lander itself; and a Navigation Doppler Lidar for Precise Velocity and Range Sensing (NDL) – a laser-based sensor that can provide precision navigation during descent and touchdown. Both of these payloads are being developed by NASA to ensure safe, controlled and specifically targeted landing of spacecraft on the Moon’s surface, and their use here will be crucial to building robust lunar landing systems to support Artemis through the return of human astronauts to the Moon and beyond.
Besides those two payloads, everything else on either lander is unique to one vehicle or the other. Astrobotic is carrying more, but its Peregrine lander can hold more cargo – its payload capacity tops out at around 585 lbs, whereas the Nova-C can carry a maximum of 220 lbs. The full list of what each lander will have on board is available below, as detailed by NASA.
Overall, NASA has 14 total contractors that could potentially provide lunar payload delivery services through its CLPS program. That basically amounts to a list of approved vendors, who then bid on whatever contracts the agency has available for this specific need. Other companies on the CLPS list include Blue Origin, Lockheed Martin, SpaceX and more. Starting with these two landers next year, NASA hopes to fly around two missions per year each year through the CLPS program.
Astrobotic Payloads

- Surface Exosphere Alterations by Landers (SEAL): SEAL will investigate the chemical response of lunar regolith to the thermal, physical and chemical disturbances generated during a landing, and evaluate contaminants injected into the regolith by the landing itself. It will give scientists insight into how a spacecraft landing might affect the composition of samples collected nearby. It is being developed at NASA Goddard.
- Photovoltaic Investigation on Lunar Surface (PILS): PILS is a technology demonstration that is based on an International Space Station test platform for validating solar cells that convert light to electricity. It will demonstrate advanced photovoltaic high-voltage use for lunar surface solar arrays useful for longer mission durations. It is being developed at Glenn Research Center in Cleveland.
- Linear Energy Transfer Spectrometer (LETS): The LETS radiation sensor will collect information about the lunar radiation environment and relies on flight-proven hardware that flew in space on the Orion spacecraft’s inaugural uncrewed flight in 2014. It is being developed at NASA Johnson.
- Near-Infrared Volatile Spectrometer System (NIRVSS): NIRVSS will measure surface and subsurface hydration, carbon dioxide and methane – all resources that could potentially be mined from the Moon — while also mapping surface temperature and changes at the landing site. It is being developed at Ames Research Center in Silicon Valley, California.
- Mass Spectrometer Observing Lunar Operations (MSolo): MSolo will identify low-molecular weight volatiles. It can be installed to either measure the lunar exosphere or the spacecraft outgassing and contamination. Data gathered from MSolo will help determine the composition and concentration of potentially accessible resources. It is being developed at Kennedy Space Center in Florida.
- PROSPECT Ion-Trap Mass Spectrometer (PITMS) for Lunar Surface Volatiles: PITMS will characterize the lunar exosphere after descent and landing and throughout the lunar day to understand the release and movement of volatiles. It was previously developed for ESA’s (European Space Agency) Rosetta mission and is being modified for this mission by NASA Goddard and ESA.
- Neutron Spectrometer System (NSS): NSS will search for indications of water-ice near the lunar surface by measuring how much hydrogen-bearing material is at the landing site, as well as determine the overall bulk composition of the regolith there. NSS is being developed at NASA Ames.
- Neutron Measurements at the Lunar Surface (NMLS): NMLS will use a neutron spectrometer to determine the amount of neutron radiation at the Moon’s surface, and also observe and detect the presence of water or other rare elements. The data will help inform scientists’ understanding of the radiation environment on the Moon. It’s based on an instrument that currently operates on the space station and is being developed at Marshall Space Flight Center in Huntsville, Alabama.
- Fluxgate Magnetometer (MAG): MAG will characterize certain magnetic fields to improve understanding of energy and particle pathways at the lunar surface. NASA Goddard is the lead development center for the MAG payload.
Intuitive Machines Payloads
- Lunar Node 1 Navigation Demonstrator (LN-1): LN-1 is a CubeSat-sized experiment that will demonstrate autonomous navigation to support future surface and orbital operations. It has flown on the space station and is being developed at NASA Marshall.
- Stereo Cameras for Lunar Plume-Surface Studies (SCALPSS): SCALPSS will capture video and still image data of the lander’s plume as the plume starts to impact the lunar surface until after engine shut off, which is critical for future lunar and Mars vehicle designs. It is being developed at NASA Langley, and also leverages camera technology used on the Mars 2020 rover.
- Low-frequency Radio Observations for the Near Side Lunar Surface (ROLSES): ROLSES will use a low-frequency radio receiver system to determine photoelectron sheath density and scale height. These measurements will aid future exploration missions by demonstrating whether there will be an effect on the antenna response of larger lunar radio observatories with antennas on the lunar surface. In addition, the ROLSES measurements will confirm how well a lunar surface-based radio observatory could observe and image solar radio bursts. It is being developed at NASA Goddard.
Did you notice a recent change to how Google search results are displayed on the desktop?
I noticed something last week — thinking there must be some kind of weird bug messing up the browser’s page rendering because suddenly everything looked similar: A homogenous sea of blue text links and favicons that, on such a large expanse of screen, come across as one block of background noise.
I found myself clicking on an ad link — rather than the organic search result I was looking for.
Here, for example, are the top two results for a Google search for flight search engine ‘Kayak’ — with just a tiny ‘Ad’ label to distinguish the click that will make Google money from the click that won’t…
Turns out this is Google’s latest dark pattern: The adtech giant has made organic results even more closely resemble the ads it serves against keyword searches, as writer Craig Mod was quick to highlight in a tweet this week.
There's something strange about the recent design change to google search results, favicons and extra header text: they all look like ads, which is perhaps the point? pic.twitter.com/TlIvegRct1
— Craig Mod (@craigmod) January 21, 2020
Last week, in its own breezy tweet, Google sought to spin the shift as quite the opposite — saying the “new look” presents “site domain names and brand icons prominently, along with a bolded ‘Ad’ label for ads”:
Last year, our search results on mobile gained a new look. That’s now rolling out to desktop results this week, presenting site domain names and brand icons prominently, along with a bolded “Ad” label for ads. Here’s a mockup: pic.twitter.com/aM9UAbSKtv
— Google SearchLiaison (@searchliaison) January 13, 2020
But Google’s explainer is almost a dark pattern in itself.
If you read the text quickly you’d likely come away with the impression that it has made organic search results easier to spot since it’s claiming components of these results now appear more “prominently” in results.
Yet, read it again, and Google is essentially admitting that a parallel emphasis is being placed — one which, when you actually look at the thing, has the effect of flattening the visual distinction between organic search results (which consumers are looking for) and ads (which Google monetizes).
Another eagle-eyed Twitter user, going by the name Luca Masters, chipped into the discussion generated by Mod’s tweet — to point out that the tech giant is “finally coming at this from the other direction”.
They're finally coming at this from the other direction:https://t.co/XYkHjVrE8X
— Luca K. B. Masters (@lkbm) January 21, 2020
‘This’ being deceptive changes to ad labelling; and ‘other direction’ being a reference to how it’s now organic search results that are being visually tweaked to shrink their difference from ads.
Google previously laid the groundwork for this latest visual trickery by spending earlier years amending the look of ads to bring them closer in line with the steadfast, cleaner appearance of genuine search results.
Except now it’s fiddling with those too. Hence ‘other direction’.
Masters helpfully quote-tweeted this vintage tweet (from 2016), by journalist Ginny Marvin — which presents a visual history of Google ad labelling in search results that’s aptly titled “color fade”; a reference to the gradual demise of the color-shaded box Google used to apply to clearly distinguish ads in search results.
Those days are long gone now, though.
— Ginny Marvin (@GinnyMarvin) July 25, 2016
Now a user of Google’s search engine has — essentially — only a favicon between them and an unintended ad click. Squint or you’ll click it.
This visual trickery may be fractionally less confusing in a small screen mobile environment — where Google debuted the change last year. But on a desktop screen these favicons are truly minuscule. And where to click to get actual information starts to feel like a total lottery.
A lottery that’s being stacked in Google’s favor because confused users are likely to end up clicking more ad links than they otherwise would, meaning it cashes in at the expense of web users’ time and energy.
Back in May, when Google pushed this change on mobile users, it touted the tweaks as a way for sites to showcase their own branding, instead of looking like every other blue link on a search result page. But it did so while simultaneously erasing a box-out that it had previously displayed around the label ‘Ad’ to make it stand out.
That made it “harder to differentiate ads and search results,” as we wrote then — predicting it will “likely lead to outcry”.
There were certainly complaints then. And there will likely be more now — given the visual flattening of the gap between ad clicks and organic links looks even more confusing for users of Google search on desktop.
We reached out to Google to ask for a response to the latest criticism that the new design for search results makes it almost impossible to distinguish between organic results and ads. But the company ignored repeat requests for comment.
Of course it’s true that plenty of UX design changes face backlash, especially early on. Change in the digital realm is rarely instantly popular. It’s usually more ‘slow burn’ acceptance.
But there’s no consumer-friendly logic to this one. (And the slow burn going on here involves the user being cast in the role of the metaphorical frog.)
Instead, Google is just making it harder for web users to click on the page they’re actually looking for — because, from a revenue-generating perspective, it prefers them to click an ad.
It’s the visual equivalent of a supermarket putting a similarly packaged own-brand product right next to some fancy branded shampoo on the shelf — in the hopes a rushed shopper will pluck the wrong one. (Real-life dark patterns are indeed a thing.)
It’s also a handy illustration of quite how far away from the user Google’s priorities have shifted, and continue to drift.
“When Google introduced ads, they were clearly marked with a label and a brightly tinted box,” says UX specialist Harry Brignull. “This was in stark contrast to all the other search engines at the time, who were trying to blend paid listings in amongst the organic ones, in an effort to drive clicks and revenue. In those days, Google came across as the most honest search engine on the planet.”
Brignull is well qualified to comment on dark patterns — having been calling out deceptive design since 2010 when he founded darkpatterns.org.
“I first learned about Google in the late 1990s. In those days you learned about the web by reading print magazines, which is charmingly quaint to look back on. I picked up a copy of Wired Magazine and there it was – a sidebar talking about a new search engine called ‘Google’,” he recalled. “Google was amazing. In an era of portals, flash banners and link directories, it went in the opposite direction. It didn’t care about the daft games the other search engines were playing. It didn’t even seem to acknowledge they existed. It didn’t even seem to want to be a business. It was a feat of engineering, and it felt like a public utility.
“The original Google homepage was recognised as a guiding light of purism in digital design. Search was provided by an unstyled text field and button. There was nothing else on the homepage. Just the logo. Search results were near-instant and they were just a page of links and summaries – perfection with nothing to add or take away. The back-propagation algorithm they introduced had never been used to index the web before, and it instantly left the competition in the dust. It was proof that engineers could disrupt the rules of the web without needing any suit-wearing executives. Strip out all the crap. Do one thing and do it well.”
“As Google’s ambitions changed, the tinted box started to fade. It’s completely gone now,” Brignull added.
The one thing Google very clearly wants to do well now is serve more ads. It’s chosen to do that deceptively, by steadily — and consistently — degrading the user experience. So a far cry from “public utility”.
And that user-friendly Google of old? Yep, also completely gone.
Internet services company Opera has come under a short-sell assault based on allegations of predatory lending practices by its fintech products in Africa.
Hindenburg Research issued a report claiming (among other things) that Opera’s finance products in Nigeria and Kenya have run afoul of prudent consumer practices and Google Play Store rules for lending apps.
Hindenburg — which is based in NYC and managed by financial analyst Nate Anderson — went on to suggest Opera’s U.S.-listed stock was grossly overvalued.
That’s a primer on the key info, though there are several additional shades of the who, why, and where of this story to break down, before getting to what Opera and Hindenburg had to say.
A good start is Opera’s ownership and scope. Founded in Norway, the company is an internet services provider, largely centered around its Opera browser.
In 2018, Opera went public in an IPO on Nasdaq, where its shares currently trade.
Though Opera’s web platform isn’t widely used in the U.S. — where it has less than 1% of the browser market — its browser has long been popular in Africa, where it was once number one and is now a distant second to Chrome, according to StatCounter.
On the back of its browser popularity, Opera went on an African venture-spree in 2019, introducing a suite of products and startup verticals in Nigeria and Kenya, with intent to scale more broadly across the continent.
In Nigeria these include motorcycle ride-hail service ORide and delivery app OFood.
Central to these services are Opera’s fintech apps: OPay in Nigeria and OKash and Opesa in Kenya — which offer payment and lending options.
Fintech-focused VCs and startups have been at the center of a decade-long tech boom in several core economies in Africa, namely Kenya and Nigeria.
In 2019 Opera led a wave of Chinese VC in African fintech, including $170 million in two rounds to its OPay payments service in Nigeria.
Opera’s fintech products in Africa (as well as Opera’s Cashbean in India) are at the core of Hindenburg Research’s brief and short-sell position.
The crux of the Hindenburg report is that, with its browser business losing market share, Opera has pivoted to products that generate revenue from predatory short-term loans in Africa and India — at interest rates Hindenburg puts at 365% to 876%.
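For a sense of how short-term loan fees annualize into rates that large, here is the arithmetic with hypothetical numbers — the 15% fee and 30-day term below are illustrative only, not figures from the report.

```python
# How a modest-sounding fee on a short-term loan annualizes into a
# triple-digit rate. The 15% fee and 30-day term are hypothetical,
# chosen only to illustrate the arithmetic.

def simple_apr(fee_rate: float, term_days: int) -> float:
    """Annualize a per-term fee as a simple (non-compounding) rate."""
    return fee_rate * (365 / term_days)

def compound_apr(fee_rate: float, term_days: int) -> float:
    """Annualize with compounding, i.e. rolling the loan over all year."""
    return (1 + fee_rate) ** (365 / term_days) - 1

# A 15% fee on a 30-day loan is roughly 183% as a simple annual rate,
# and roughly 448% if the borrower keeps rolling it over.
print(f"{simple_apr(0.15, 30):.1%}")
print(f"{compound_apr(0.15, 30):.1%}")
```

Which is why a loan marketed around its per-month fee can sit at the annualized rates Hindenburg describes.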
The firm’s reporting goes on to claim that Opera’s fintech products in Nigeria and Kenya run afoul of Google’s rules.
“Opera’s short-term loan business appears to be…in violation of the Google Play Store’s policies on short-term and misleading lending apps…we think this entire line of business is at risk of…being severely curtailed when Google notices and ultimately takes corrective action,” the report says.
Based on this, Hindenburg suggested Opera’s stock should trade at around $2.50, around a 70% discount to Opera’s $9 share-price before the report was released on January 16.
Hindenburg also disclosed the firm would short Opera.
Founder Nate Anderson confirmed to TechCrunch Hindenburg continues to hold short positions in Opera’s stock — which means the firm could benefit financially from declines in Opera’s share value. The company’s stock dropped some 18% the day the report was published.
On motivations for the brief, “Technology has catalyzed numerous positive changes in Africa, but we do not think this is one of them,” he said.
“This report identified issues relating to one company, but what we think will soon become apparent is that in the absence of effective local regulation, predatory lending is becoming pervasive across Africa and Asia…proliferated via mobile apps,” Anderson added.
While the bulk of Hindenburg’s critique was centered on Opera, Anderson also took aim at Google.
“Google has become the primary facilitator of these predatory lending apps by virtue of Android’s dominance in these markets. Ultimately, our hope is that Google steps up and addresses the bigger issue here,” he said.
TechCrunch has an open inquiry into Google on the matter. In the meantime, Opera’s apps in Nigeria and Kenya are still available on Google Play, according to Opera and a cursory browse of the site.
For its part, Opera issued a rebuttal to Hindenburg and offered some input to TechCrunch through a spokesperson.
In a company statement, Opera said, “We have carefully reviewed the report published by the short seller and the accusations it put forward, and our conclusion is very clear: the report contains unsubstantiated statements, numerous errors, and misleading conclusions regarding our business and events related to Opera.”
Opera added it had proper banking licenses in Kenya and Nigeria. “We believe we are in compliance with all local regulations,” said a spokesperson.
TechCrunch asked Hindenburg’s Nate Anderson if the firm had contacted local regulators related to its allegations. “We reached out to the Kenyan DCI three times before publication and have not heard back,” he said.
As it pertains to Africa’s startup scene, there’ll be several things to follow surrounding the Opera-Hindenburg affair.
The first is how it may impact Opera’s business moves in Africa. The company is engaged in competition with other startups across payments, ride-hail, and several other verticals in Nigeria and Kenya. Being accused of predatory lending, depending on where things go (or don’t) with the Hindenburg allegations, could put a dent in brand-equity.
There’s also the open question of if and how Google and regulators in Kenya and Nigeria could respond. Contrary to some perceptions, fintech regulation isn’t non-existent in either country, nor are regulators totally ineffective.
Kenya passed a new data-privacy law in November and Nigeria recently established guidelines for mobile-money banking licenses in the country, after a lengthy Central Bank review of best digital finance practices.
Nigerian regulators demonstrated they are no pushovers with foreign entities when they slapped a $3.9 billion fine on MTN over a regulatory breach in 2015 and threatened to eject the South African mobile operator from the country.
As for short-sellers in African tech, they are a relatively new thing, largely because there are so few startups that have gone on to IPO.
In 2019, Citron Research head and activist short-seller Andrew Left — notable for shorting Lyft and Tesla — took short positions in African e-commerce company Jumia, after dropping a report accusing the company of securities fraud. Jumia’s share-price plummeted over 50% and has only recently begun to recover.
As of Wednesday, there were signs Opera may be shaking off Hindenburg’s report — at least in the market — as the company’s shares had rebounded to $7.35.
Google is giving an A.I. upgrade to its Collections feature — basically Google’s own take on Pinterest, but built into Google Search. Originally a way of organizing images, the Collections feature that launched in 2018 lets you save any type of search result — images, bookmarks, or map locations — into groups called “Collections” for later perusal. Starting today, Google will make suggestions about items you can add to Collections based on your Search history across specific activities like cooking, shopping or hobbies.
The idea here is that people often use Google for research but don’t remember to save web pages for easy retrieval. That leads users to dig through their Google Search history in an effort to find the lost page. Google believes that A.I. smarts can improve the process by starting those reference collections for them.
Here’s how it works. After you’ve visited pages on Google Search in the Google app or on the mobile web, Google will group together similar pages related to things like cooking, shopping, and hobbies then prompt you to save them to suggested Collections.
For example, after an evening of scouring the web for recipes, Google may share a suggested Collection with you titled “Dinner Party” which is auto-populated with relevant pages from your Search history. You can uncheck any recipes that don’t belong and rename the collection from “Dinner Party” to something else of your choosing, if you want. You then tap the “Create” button to turn this selection from your Search history into a Collection.
These Collections can be found later in the Collections tab in the Google app or through the Google.com side menu on the mobile web. There is an option to turn off this feature in Settings, but it’s enabled by default.
The Pinterest-like feature aims to keep Google users from venturing off Google sites to other places where they can save and organize things they’re interested in — whether that’s a list of recipes they want to add to a pinboard on Pinterest or a list of clothing they want to add to a wish list on Amazon. In particular, keeping e-commerce shoppers from leaving Google for Amazon is something the company is heavily focused on these days. The company recently rolled out a big revamp of its Google Shopping vertical and just this month launched a way to shop directly from search results.
The issue with sites like Pinterest is that they’re capturing shoppers at an earlier stage in the buying process — during the information-gathering and inspiration-seeking research stage, that is. By saving links to Pinterest’s pinboards, shoppers ready to make a purchase are bypassing Google (and its advertisers) to check out directly with retailers.
Meanwhile, Google is simultaneously losing traffic to Amazon, which now surpasses Google for product searches. Even Instagram, of all places, has become a rival, as it’s now a place to shop. The app’s Shopping feature is funneling users right from its visual ads to a checkout page in the app. PayPal, catching wind of this trend, recently spent $4 billion to buy Honey in order to capture shoppers earlier in their journey.
For users, Google Collections is just about encouraging you to put your searches into groups for later access. But for Google, it’s also about getting people to shop on Google and stay on Google, no matter what they’re researching. Suggested Collections may lure you in as an easy way to organize recipes, but ultimately this feature will be about getting users to develop a habit of saving their searches to Google — and particularly their product searches.
Once you have a Collection set up, Google can point you to other related items, including websites, images, and more. Most importantly, this will serve as a new way to get users to perform more product searches, too, as it can send users to other product pages without the user having to type in an explicit search query.
The update also comes with an often-requested collaboration feature, which means you can now share a collection with others for either viewing or editing.
Sharing and related content suggestions are live worldwide.
The A.I.-powered suggested collections are live in the U.S. for English users starting today and will reach more markets in time.
Google Cloud today announced Secret Manager, a new tool that helps its users securely store their API keys, passwords, certificates and other data. With this, Google Cloud is giving its users a single tool to manage this kind of data and a centralized source of truth, something that even sophisticated enterprise organizations often lack.
“Many applications require credentials to connect to a database, API keys to invoke a service, or certificates for authentication,” Google developer advocate Seth Vargo and product manager Matt Driscoll wrote in today’s announcement. “Managing and securing access to these secrets is often complicated by secret sprawl, poor visibility, or lack of integrations.”
With Berglas, Google already offered an open-source command-line tool for managing secrets. Secret Manager and Berglas will play well together and users will be able to move their secrets from the open-source tool into Secret Manager and use Berglas to create and access secrets from the cloud-based tool as well.
With KMS, Google also offers a fully managed key management system (as do Google Cloud’s competitors). The two tools are very much complementary. As Google notes, KMS does not actually store the secrets — it encrypts the secrets you store elsewhere. Secret Manager provides a way to easily store (and manage) these secrets in Google Cloud.
Secret Manager includes the necessary tools for managing secret versions and audit logging, for example. Secrets in Secret Manager are also project-based global resources, the company stresses, while competing tools often manage secrets on a regional basis.
The new tool is now in beta and available to all Google Cloud customers.
Google’s strategy for bringing new customers to its cloud is to focus on the enterprise and specific verticals like healthcare, energy, financial services and retail, among others. Its healthcare efforts recently experienced a bit of a setback, with Epic now telling its customers that it is not moving forward with its plans to support Google Cloud. But Google now gets to announce two new customers in the travel business: Lufthansa Group, the world’s largest airline group by revenue, and Sabre, a company that provides backend services to airlines, hotels and travel aggregators.
For Sabre, Google Cloud is now the preferred cloud provider. Like a lot of companies in the travel (and especially the airline) industry, Sabre runs plenty of legacy systems and is currently in the process of modernizing its infrastructure. To do so, it has now entered a 10-year strategic partnership with Google “to improve operational agility while developing new services and creating a new marketplace for its airline, hospitality and travel agency customers.” The promise, here, too, is that these new technologies will allow the company to offer new travel tools for its customers.
When you hear about airline systems going down, it’s often Sabre’s fault, so just being able to avoid that would already bring a lot of value to its customers.
“At Google we build tools to help others, so a big part of our mission is helping other companies realize theirs. We’re so glad that Sabre has chosen to work with us to further their mission of building the future of travel,” said Google CEO Sundar Pichai. “Travelers seek convenience, choice and value. Our capabilities in AI and cloud computing will help Sabre deliver more of what consumers want.”
The same holds true for Google’s deal with Lufthansa Group, which includes German flag carrier Lufthansa itself, but also subsidiaries like Austrian, Swiss, Eurowings and Brussels Airlines, as well as a number of technical and logistics companies that provide services to various airlines.
“By combining Google Cloud’s technology with Lufthansa Group’s operational expertise, we are driving the digitization of our operation even further,” said Dr. Detlef Kayser, Member of the Executive Board of the Lufthansa Group. “This will enable us to identify possible flight irregularities even earlier and implement countermeasures at an early stage.”
Lufthansa Group has selected Google as a strategic partner to “optimize its operations performance.” A team from Google will work directly with Lufthansa to bring this project to life. The idea here is to use Google Cloud to build tools that help the company run its operations as smoothly as possible and to provide recommendations when things go awry due to bad weather, airspace congestion or a strike (which seems to happen rather regularly at Lufthansa these days).
Delta recently launched a similar platform to help its employees.
On Anbox Cloud, Android becomes the guest operating system that runs containerized applications. This opens up a range of use cases, ranging from bespoke enterprise apps to cloud gaming solutions.
The result is similar to what Google does with Android apps on Chrome OS, though the implementation is quite different and is based on the LXD container manager, as well as a number of Canonical projects like Juju and MAAS for provisioning the containers and automating the deployment. “LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines – depending on streaming quality and/or workload complexity,” the company points out in its announcements.
Anbox itself, it’s worth noting, is an open-source project that came out of Canonical and the wider Ubuntu ecosystem. Launched by Canonical engineer Simon Fels in 2017, Anbox runs the full Android system in a container, which in turn allows you to run Android applications on any Linux-based platform.
What’s the point of all of this? Canonical argues that it allows enterprises to offload mobile workloads to the cloud and then stream those applications to their employees’ mobile devices. But Canonical is also betting on 5G to enable more use cases, not so much because of the available bandwidth as because of the low latencies it enables.
“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, director of Product at Canonical, in today’s announcement. “Enterprises are now empowered to deliver high performance, high density computing to any device remotely, with reduced power consumption and in an economical manner.”
Outside of the enterprise, one of the use cases that Canonical seems to be focusing on is gaming and game streaming. A server in the cloud is generally more powerful than a smartphone, after all, though that gap is closing.
Canonical also cites app testing as another use case, given that the platform would allow developers to test apps on thousands of Android devices in parallel. Most developers, though, prefer to test their apps on real — not emulated — devices, given the fragmentation of the Android ecosystem.
Anbox Cloud can run in the public cloud, though Canonical is specifically partnering with edge computing specialist Packet to host it on the edge or on-premises. Silicon partners for the project are Ampere and Intel.
One of the big questions I got around the time the Apple Card launched was whether you’d be able to download a file of your transactions to either work with manually or import into a piece of expenses management software. The answer, at the time, was no.
Now Apple is announcing that Apple Card users will be able to export monthly transactions to a downloadable spreadsheet that they can use with their personal budgeting apps or sheets.
When I shot out a request for recommendations for a Mint replacement for my finances and budgeting, a lot of the responses showed just how spreadsheet-oriented many of the tools on the market are. Mint accepts imports, as do others like Clarity Money, YNAB and Lunch Money — as do, of course, home-rolled solutions in Google Sheets or other spreadsheet programs.
The one rec I got the most and which I’m trying out right now, Copilot, does not currently support importing spreadsheets, but founder Andres Ugarte told me it’s on their list to add. Ugarte told me that they’re happy to see the download feature appear because it lets users monitor their finances on their own terms. “Apple Card support has been a top request from our users, so we are very excited to provide a way for them to import their data into Copilot.”
The export lives alongside your monthly statements, so if you don’t yet have a monthly statement, you won’t see the feature until you do. Exporting brings up a standard share sheet letting you email or send the file however you normally would. The current format is CSV, but in the near future you’ll get an OFX option as well.
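Once you have the CSV, pulling it into a script of your own is straightforward. A quick sketch of slicing an exported statement by spending category — note that the column names here are assumed for illustration, since the exact export schema isn’t documented in this piece:

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample of an exported statement; the real export's
# column names may differ, so this schema is an assumption.
SAMPLE_CSV = """\
Transaction Date,Merchant,Category,Amount (USD)
01/02/2020,Grocery Store,Shopping,-54.10
01/03/2020,Coffee Shop,Restaurants,-4.25
01/05/2020,Coffee Shop,Restaurants,-3.75
"""

def totals_by_category(csv_text: str) -> dict:
    """Sum the amount column per spending category."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["Category"]] += float(row["Amount (USD)"])
    return dict(totals)

print(totals_by_category(SAMPLE_CSV))
```

This is essentially what budgeting tools like the ones above do when they accept a spreadsheet import, minus the categorization cleanup they layer on top.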
So if you’re using one of the tools (or spreadsheet setups) that would benefit from being able to download a monthly statement of your Apple Card transactions, then you’re getting your wish from the Apple Card team today. If you use a tool that requires something more along the lines of API-level access, like something using Plaid or another account linking-centric tool, then you’re going to have to wait longer.
No info from Apple on when that will arrive, if at all, but I know that the team is continuing to launch new features, so my guess is that this is coming at some point.
Finding the right product/market fit is challenging for any company, but it’s just a little harder for hardware startups.
I recently visited the San Francisco offices of Nebia to chat with co-founder and CEO Philip Winter, whose eco-friendly hardware startup has received funding from Apple CEO Tim Cook, former Google CEO Eric Schmidt and Fitbit CEO James Park. After checking out the company’s latest shower head, we eased into a discussion about the opportunities and challenges facing hardware startups in Silicon Valley today.
TechCrunch: What’s so hard about hardware in 2020?
Philip Winter: The hardware landscape was, at one point, super-hot, at least in Silicon Valley. I would say like three or four years ago. A lot of companies came out with breakout products and a lot of them disappeared over the years since then. A lot of them are our peers — it’s a fairly small community.
The analysts at Gartner have published their annual global device forecast, and while 2020 looks like it may be partly sunny, get ready for more showers and poor weather ahead. The analysts predict that a bump from new 5G technology will lead to total shipments of 2.16 billion units — devices that include PCs, mobile handsets, watches, and all sizes of computing devices in between — working out to a rise of 0.9% compared to 2019.
That’s a modest reversal after what was a rough year for hardware makers who battled with multiple headwinds that included — for mobile handsets — a general slowdown in renewal cycles and high saturation of device ownership in key markets; and — in PCs — the wider trend of people simply buying fewer of these bigger machines as their smartphones get smarter (and bigger).
As a point of comparison, last year Gartner revised its 2019 numbers at least three times, starting from “flat shipments” and ending at a nearly four percent decline. In the end, 2019 saw shipments of 2.15 billion units — the lowest number since 2010. It’s all part of a bigger story of decline: in 2005, there were between 2.4 billion and 2.5 billion devices shipped globally.
“2020 will witness a slight market recovery,” writes Ranjit Atwal, research senior director at Gartner. “Increased availability of 5G handsets will boost mobile phone replacements, which will lead global device shipments to return to growth in 2020.”
(Shipments, we should note, do not directly equal sales, but they are used as a marker of how many devices are ordered in the channel for future sales. Shipments precede sales figures: overestimating results in oversupply and overall slowdown.)
The idea that 5G will drive more device sales, however, is still up for debate. Some have argued that while carriers are going hell for leather in their promotion of 5G, there are few special 5G apps and services — beyond using it to connect machines in an IoT play — that would spur adoption of those devices. For the mass consumer market and for (human) business users especially, 5G remains more of an abstract concept than one leading the charge when it comes to apps and services.
In 6 years of hearing pitches in Silicon Valley, I heard '5G' maybe once. That's not from ignorance – the utility network layer is not very important to innovation at the top of the stack.
— Benedict Evans (@benedictevans) January 20, 2020
Still, it may be that hardware marches on regardless. Gartner predicts that 5G devices will account for 12% of all mobile phone shipments in 2020 as handset makers make their devices “5G ready,” with the proportion increasing to 43% by 2022. “From 2020, Gartner expects an increase in 5G phone adoption as prices decrease, 5G service coverage increases and users have better experiences with 5G phones,” writes Atwal. “The market will experience a further increase in 2023, when 5G handsets will account for over 50% of the mobile phones shipped.”
Drilling down into the numbers, Gartner believes that worldwide, phones will see a bump of 1.7% this year, up to 1.78 billion before declining again in 2021 to 1.77 billion and then further in 2022 to 1.76 billion. Asia and in particular China and emerging markets will lead the charge.
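Combining Gartner’s percentages with its shipment forecasts gives a rough sense of the unit volumes involved. A back-of-envelope sketch (all inputs are the rounded figures quoted above, so treat the outputs as approximations):

```python
# Back-of-envelope unit counts implied by Gartner's rounded figures:
# 12% of 2020 phone shipments and 43% of 2022 phone shipments are 5G.
shipments_2020 = 1.78e9  # forecast phone shipments for 2020
shipments_2022 = 1.76e9  # forecast phone shipments for 2022

five_g_2020 = shipments_2020 * 0.12
five_g_2022 = shipments_2022 * 0.43

print(f"~{five_g_2020 / 1e6:.0f}M 5G handsets shipped in 2020")
print(f"~{five_g_2022 / 1e6:.0f}M 5G handsets shipped in 2022")
```

That works out to roughly 214 million 5G handsets in 2020, rising to around 757 million by 2022 — more than a tripling in unit terms even as the overall phone market stays flat.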
Another analyst firm, Counterpoint, has been tracking market share for individual handset makers and notes that Samsung remained the world’s biggest handset maker going into Q4 2019 (final numbers on that quarter should be out in the coming weeks), with 21% of all shipments and slight increases over the year. But the BBK group (which owns OPPO, Vivo, Realme, and OnePlus) is growing much faster and is likely to pass Samsung, Huawei and Apple to become the world’s largest. Numbers overall were dragged down by declines for Apple, the world’s number-three handset maker, which saw a slump in its handset sales last year.
Although the market was generally lower across all devices, PC shipments actually saw some growth in 2019. That is set to turn down again this year, to 251 million units, declining further to 247 million in 2021 and 242 million in 2022.
Part of that is due to slower migration trends — Windows 10 adoption was the primary driver for people switching up and buying new devices last year, but now that’s more or less finished. That will mean slower purchasing among enterprise end users, although later adopters in the SME segment will finally make the change when support for Windows 7 ends this month (it’s been on the cards for years at this point). In any case, the upgrade cycle is changing because of how Windows is evolving.
“The PC market’s future is unpredictable because there will not be a Windows 11. Instead, Windows 10 will be upgraded systematically through regular updates,” writes Atwal. “As a result, peaks in PC hardware upgrade cycles driven by an entire Windows OS upgrade will end.”
Two trends that might impact shipments — or at least highlight other currents in the hardware market — should also be noted. The first is the role that Chromebooks might play in the PC market. These were one of the faster-growing categories last year, and this year we will see even more models rolled out, with what hardware makers hope will be even more of a boost in functionality to drive adoption. (Google and Intel’s collaboration is one example of how that will work: the two are working on a set of standards that will fit with chips made by Intel to produce what the companies believe are more efficient and compelling notebooks, with tablet-like touchscreens, better battery life, smaller and lighter form factors, and more.)
The second is whether or not smartwatches will make a significant dent in the overall device market. Q3 of last year saw growth of 42% to 14 million shipments globally. And while there have been a number of smartwatch hopefuls, one of the biggest successes has been the Apple Watch, whose growth outstripped that of the wider watch market, at 51%. Indeed, looking at the results of the last several quarters, Apple’s product category that includes Watch sales (wearables, home and accessories) even appears to be on track to outstrip another hardware category, Macs. Whether that will continue, and potentially see others joining in, will be an interesting area to “watch.”
Too often the world of robotics seems to be a solution in search of a problem. Assistive robotics, on the other hand, is one of the primary real-world problems existing technology can seemingly address almost immediately.
The concept for the technology has been around for some time now and has caught on particularly well in places like Japan, where human help simply can’t keep up with the needs of an aging population. At TC Sessions: Robotics+AI at U.C. Berkeley on March 3, we’ll be speaking with a pair of founders developing offerings for precisely these needs.
Vivian Chu is the cofounder and CEO of Diligent Robotics. The company has developed the Moxi robot to help assist with chores and other non-patient tasks, in order to allow caregivers more time to interact with patients. Prior to Diligent, Chu worked at both Google[X] and Honda Research Institute.
Mike Dooley is the cofounder and CEO of Labrador Systems. The Los Angeles-based company recently closed a $2 million seed round to develop assistive robots for the home. Dooley has worked at a number of robotics companies, including, most recently, a stint as the VP of Product and Business Development at iRobot.
Alphabet and Google CEO Sundar Pichai is the latest tech giant kingpin to make a public call for AI to be regulated, while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.
In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.
Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.
It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.
“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).
“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”
For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)
Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.
Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.
The only thing that’s better than zero regulation are laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.
Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)
The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.
It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)
Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.
The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.
While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.
In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.
For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.
“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.
The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.
Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.
You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.
But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”.
And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.
What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot apply.
Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal binds there.)
At the same time, data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user interfaces with confusing dark patterns that push people to click or swipe their rights away.
But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.
Some far-sighted regulators have called for laws that contain at least a moratorium on certain “dangerous” applications of AI, such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.
And a ban would be far harder for platform giants to simply bend to their will.
So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.
— Jonathan Senchyne (@jsench) January 16, 2020
Google has inked a deal with India’s third-largest telecom operator as the American giant looks to grow its cloud customer base in the key overseas market that is increasingly emerging as a new cloud battleground for AWS and Microsoft.
Google Cloud announced on Monday that the new partnership, effective starting today, enables Airtel to offer G Suite to small and medium-sized businesses as part of the telco’s ICT portfolio.
Airtel, which has amassed over 325 million subscribers in India, said it currently serves 2,500 large businesses and over 500,000 small and medium-sized businesses and startups in the country. The companies did not share details of their financial arrangement.
In a statement, Thomas Kurian, chief executive of Google Cloud, said, “the combination of G Suite’s collaboration and productivity tools with Airtel’s digital business offerings will help accelerate digital innovations for thousands of Indian businesses.”
The move follows Reliance Jio, India’s largest telecom operator, striking a similar deal with Microsoft to sell cloud services to small businesses. The two announced a 10-year partnership to “serve millions of customers.”
AWS, which leads the cloud market, interestingly does not currently maintain a similar deal with a telecom operator, though it did in the past. Such carrier deals, which were very common a decade ago as tech giants looked to acquire new users in India, illustrate the current phase of cloud adoption in the nation.
Nearly half a billion people in India came online last decade. And slowly, small businesses and merchants are also beginning to use digital tools and storage services, and to accept online payments. According to a report by lobby group Nasscom, India’s cloud market is estimated to be worth more than $7 billion in three years.
Like in many other markets, Amazon, Microsoft, and Google are locked in an intense battle to win cloud customers in India. All of them offer near-identical features, and they are often willing to pay out a potential client’s remaining credit with a rival to convince them to switch, industry executives have told TechCrunch.
Welcome back to This Week in Apps, the Extra Crunch series that recaps the latest OS news, the applications they support and the money that flows through it all.
The app industry is as hot as ever, with a record 204 billion downloads and $120 billion in consumer spending in 2019, according to App Annie’s recently released “State of Mobile” annual report. People are now spending 3 hours and 40 minutes per day using apps, rivaling TV. Apps aren’t just a way to pass idle hours — they’re a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus.
In this Extra Crunch series, we help you to keep up with the latest news from the world of apps, delivered on a weekly basis.
This week, we dig into App Annie’s new “State of Mobile 2019” report and other app trends. We’re also seeing big gains for TikTok in 2019 and Disney+ in Q4. Both Apple and Google announced acquisitions this week that have implications for the mobile industry, as well.