FreshRSS


This Company Tapped AI for Its Website—and Landed in Court

By Tom Simonite
Under pressure to make their sites accessible to visually impaired users, firms turn to software. But advocates say the tech isn't always up to the task.

These Robots Follow You to Learn Where to Go

By Khari Johnson
Burro makes carts that help growers of trees and vineyards with harvests. Meanwhile, the maker of Vespa scooters wants to carry your groceries.

Satellites Can Spy a Menace in West Africa: Invasive Flowers

By Ramin Skibba
Spacecraft help researchers monitor environmental problems on Earth, like the overgrowth of nonnative species and deforestation.

A Devastating Twitch Hack Sends Streamers Reeling

By Cecilia D'Anastasio
The data breach apparently includes source code, gamer payouts, and more.

A True Story About Bogus Photos of People Making Fake News

By Tom Simonite
A photographer set out to capture the misinformation producers in a small town in Macedonia. He wound up revealing uncomfortable truths about his own profession.

Humans Can't Be the Sole Keepers of Scientific Knowledge

By Iulia Georgescu
Communicating scientific results in outdated formats is holding progress back. One alternative: Translate science for machines.

These Deepfake Voices Can Help Trans Gamers

By Tom Simonite
Players of online games can be harassed when their voices don't match their gender identity. New AI-fueled software may help.

The cocktail party problem: Why voice tech isn’t truly useful yet

By Ram Iyer
Ken Sutton Contributor
Ken Sutton is CEO and co-founder of Yobe, a software company that uses edge-based AI to unlock the potential of voice technologies for modern brands.

On average, men and women speak roughly 15,000 words per day. We call our friends and family, log into Zoom for meetings with our colleagues, discuss our days with our loved ones, or if you’re like me, you argue with the ref about a bad call they made in the playoffs.

Hospitality, travel, IoT and the auto industry are all on the cusp of leveling up voice assistant adoption and the monetization of voice. The global voice and speech recognition market is expected to grow at a CAGR of 17.2% from 2019 to reach $26.8 billion by 2025, according to Meticulous Research. Companies like Amazon and Apple will accelerate this growth as they leverage ambient computing capabilities, which will continue to push voice forward as a primary interface.
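Because that Meticulous Research figure is expressed as a compound annual growth rate, a rough back-of-the-envelope sketch shows what the projection implies about the market’s starting size. The 17.2% rate and $26.8 billion target are the quoted figures; the annual-compounding assumption and the arithmetic below are mine, an illustration rather than anything from the report:

```python
# Rough check of the Meticulous Research projection quoted above.
# Assumes the 17.2% CAGR compounds annually over the six years from 2019 to 2025.

CAGR = 0.172           # compound annual growth rate
TARGET_2025 = 26.8e9   # projected market size in USD by 2025
YEARS = 2025 - 2019    # six compounding periods

# Implied 2019 base: future value / (1 + rate) ** years
implied_2019 = TARGET_2025 / (1 + CAGR) ** YEARS
print(f"Implied 2019 market size: ${implied_2019 / 1e9:.1f}B")  # roughly $10.3B
```

In other words, the projection amounts to the market more than doubling over six years.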

As voice technologies become ubiquitous, companies are turning their focus to the value of the data latent in these new channels. Microsoft’s recent acquisition of Nuance is not just about achieving better NLP or voice assistant technology; it’s also about the trove of healthcare data that the conversational AI has collected.

Our voice technologies have not been engineered to confront the messiness of the real world or the cacophony of our actual lives.

Google has monetized every click of your mouse, and the same thing is now happening with voice. Advertisers have found that speak-through conversion rates are higher than click-through conversion rates. Brands need to begin developing voice strategies to reach customers — or risk being left behind.

Voice tech adoption was already on the rise, but with most of the world under lockdown protocol during the COVID-19 pandemic, adoption is set to skyrocket. Nearly 40% of internet users in the U.S. used smart speakers at least monthly in 2020, according to Insider Intelligence.

Yet, there are several fundamental technology barriers keeping us from reaching the full potential of the technology.

The steep climb to commercializing voice

By the end of 2020, worldwide shipments of wearable devices rose 27.2% to 153.5 million from a year earlier, but despite all the progress made in voice technologies and their integration in a plethora of end-user devices, they are still largely limited to simple tasks. That is finally starting to change as consumers demand more from these interactions, and voice becomes a more essential interface.

In 2018, in-car shoppers spent $230 billion to order food, coffee, groceries or items to pick up at a store. The auto industry is one of the earliest adopters of voice AI, but to capture voice technology’s true potential, it needs to become a more seamless, truly hands-free experience. Ambient car noise still muddies the signal enough that it keeps users tethered to their phones.

Private equity giveth, and private equity taketh away

By Natasha Mascarenhas

Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast, where we unpack the numbers behind the headlines.

Natasha and Alex and Grace and Chris gathered to dig through the week’s biggest happenings, including some news of our own. As a note, Equity’s Monday episode will be landing next Tuesday, thanks to a national holiday here in the United States. And we have something special planned for Wednesday, so stay tuned.

Ok! Here’s the rundown from the show:

That’s a wrap from us for the week! Keep your head atop your shoulders and have a great weekend!

Equity drops every Monday, Wednesday and Friday morning at 7:00 a.m. PDT, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

WhatsApp faces $267M fine for breaching Europe’s GDPR

By Natasha Lomas

It’s been a long time coming but Facebook is finally feeling some heat from Europe’s much-trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) fine for WhatsApp.

The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), once it began being applied in May 2018.

Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator selected the parameters of the investigation itself, choosing to fix on an audit of WhatsApp’s ‘transparency’ obligations.

A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.

The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.

Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated GDPR being applied).

In sum, the DPC found a range of transparency infringements by WhatsApp — spanning articles 5(1)(a), 12, 13 and 14 of the GDPR.

In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline for making all the ordered changes.

In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:

“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.” 

It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.

The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.

So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.

…system to add years until this fine will actually be paid – but at least it's a start… 10k cases per year to go! 😜

— Max Schrems 🇪🇺 (@maxschrems) September 2, 2021

 

Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it knuckle-tapped the social network over a historical security breach with a fine of $550k.

WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.

Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.

And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.

Is GDPR working?  

The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, who are also of course Internet companies.

Under the EU’s flagship data protection regulation, decisions on cross-border cases require agreement from all affected regulators — across the 27 Member States. So while the GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU), objections can be raised to that lead supervisory authority’s conclusions (and any proposed sanctions) — as has happened in this WhatsApp case.

Ireland originally proposed a far lower penalty of up to €50M for WhatsApp. However, other EU regulators objected to the draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.

Through that (admittedly rather painful) joint working, the DPC was required to increase the size of the fine issued to WhatsApp — mirroring what happened with its draft Twitter decision, where the DPC had also suggested an even smaller penalty in the first instance.

There is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it has taken well over half a year to hash out all the disputes about WhatsApp’s lossy hashing and so forth. But the fact that ‘corrections’ are being made to its decisions — and conclusions can land, if not jointly agreed then at least via a consensus pushed through by the EDPB — is a sign that the process, while slow and creaky, is working.

Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations — with some accusing the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (by the issues it doesn’t open an enquiry into, or the complaints it simply drops or ignores). Its loudest critics argue it is therefore still a major bottleneck on effective enforcement of data protection rights across the EU. The associated conclusion of that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.

But while it’s true that a $267M penalty is still the equivalent of a parking ticket for Facebook, orders to change how such adtech giants are able to process people’s information have the potential to be a far more significant correction on problematic business models. Again, though, time will be needed to tell.

In a statement on the WhatsApp decision today, noyb — the privacy advocacy group founded by long-time European privacy campaigner Max Schrems — said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”

Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.

In further remarks, Schrems and noyb said: “WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”

UK now expects compliance with children’s privacy design code

By Natasha Lomas

In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.

The age appropriate design code came into force on September 2 last year; however, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance to give organizations time to adapt their services.

But from today it expects the standards of the code to be met.

Services where the code applies can include connected toys, games and edtech, but also online retail and for-profit online services such as social media and video-sharing platforms that have a strong pull for minors.

Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy-hostile defaults).

The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.

Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.

The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.

The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.

The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.

“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”

It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronics Communications Regulation].”

In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”

“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”

“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.

The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.
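For a sense of how that “whichever is higher” cap works in practice, here is a minimal sketch. The £17.5M/4% rule is as described above; the turnover figures and the function name are hypothetical examples of mine, not anything published by the ICO:

```python
# Minimal sketch of the maximum-penalty rule described above: the cap is the
# higher of a flat £17.5M and 4% of annual worldwide turnover. The turnover
# figures (and the function name) are hypothetical examples, not ICO guidance.

def max_uk_gdpr_fine(annual_worldwide_turnover_gbp: float) -> float:
    """Return the statutory ceiling for the most serious infringements."""
    return max(17_500_000, 0.04 * annual_worldwide_turnover_gbp)

for turnover in (50e6, 1e9, 80e9):  # small, mid-sized and very large firms
    cap = max_uk_gdpr_fine(turnover)
    print(f"Turnover £{turnover / 1e9:.2f}B -> maximum fine £{cap / 1e6:.1f}M")
# A £50M-turnover firm hits the flat £17.5M floor; an £80B-turnover giant faces a cap of £3,200M.
```

The point of the "whichever is higher" construction is that the flat figure acts as a floor for small firms, while the 4% term scales the exposure for the largest platforms.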

The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.

In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.

In July, Instagram said it would default teens to private accounts — doing so for under-18s in certain countries, which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video sharing platform, YouTube.

A few days later TikTok also said it would add more privacy protections for teens. Though it had also made earlier changes limiting privacy defaults for under 18s.

Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud; and an opt-in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.

The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.

And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.

The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush safety-first stance toward users, also with a big focus on kids (and there it’s also being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’ COPPA, for example).

In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”

“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.

And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls with the clear caveat that such tools must “respect the child’s privacy and best interests”).

The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.

Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.

Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn — so it will be providing further steerage to organizations in scope of the code on how to tackle that tricky piece, although it’s still not clear how hard a requirement the ICO will push for, with Bonner suggesting it could mean actually “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.

Children’s safety online has been a huge focus for UK policymakers in recent years, although the wider (and long in train) Online Safety (née Harms) Bill remains at the draft law stage.

An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.

But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the ORG has warned). 

The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.” 

At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.

For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.

That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.

So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less — in the name of keeping them ‘safe’. That is quite a contradiction versus the data minimization push of the design code.

The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content or pro-suicide postings, or cyberbullying and CSAM.

The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.

Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.

Complying with the ICO’s design standards may therefore actually be the easy bit.

 

SEC fines brokerage firms over email hacks that exposed client data

By Carly Page

The U.S. Securities and Exchange Commission has fined several brokerage firms a total of $750,000 for exposing the sensitive personally identifiable information of thousands of customers and clients after hackers took over employee email accounts.

A total of eight entities belonging to three companies have been sanctioned by the SEC, including Cetera (Advisor Networks, Investment Services, Financial Specialists, Advisors and Investment Advisers), Cambridge Investment Research (Investment Research and Investment Research Advisors) and KMS Financial Services.

In a press release, the SEC announced that it had sanctioned the firms for failures in their cybersecurity policies and procedures that allowed hackers to gain unauthorized access to cloud-based email accounts, exposing the personal information of thousands of customers and clients at each firm.

In the case of Cetera, the SEC said that cloud-based email accounts of more than 60 employees were infiltrated by unauthorized third parties for more than three years, exposing at least 4,388 clients’ personal information.

The order states that none of the accounts featured the protections required by Cetera’s policies, and the SEC also charged two of the Cetera entities with sending breach notifications to clients containing “misleading language suggesting that the notifications were issued much sooner than they actually were after discovery of the incidents.”

The SEC’s order against Cambridge concludes that the personal information exposure of at least 2,177 Cambridge customers and clients was the result of lax cybersecurity practices at the firm. 

“Although Cambridge discovered the first email account takeover in January 2018, it failed to adopt and implement firm-wide enhanced security measures for cloud-based email accounts of its representatives until 2021, resulting in the exposure and potential exposure of additional customer and client records and information,” the SEC said. 

The order against KMS is similar; the SEC’s order states that the data of almost 5,000 customers and clients were exposed as a result of the company’s failure to adopt written policies and procedures requiring additional firm-wide security measures until May 2020. 

“Investment advisers and broker-dealers must fulfill their obligations concerning the protection of customer information,” said Kristina Littman, chief of the SEC Enforcement Division’s Cyber Unit. “It is not enough to write a policy requiring enhanced security measures if those requirements are not implemented or are only partially implemented, especially in the face of known attacks.”

All of the parties agreed to resolve the charges and to not commit future violations of the charged provisions, without admitting or denying the SEC’s findings. As part of the settlements, Cetera will pay a penalty of $300,000, while Cambridge and KMS will pay fines of $250,000 and $200,000 respectively.  

Cambridge told TechCrunch that it does not comment on regulatory matters, but said it has and does maintain a comprehensive information security group and procedures to ensure clients’ accounts are fully protected. Cetera and KMS have yet to respond.

This latest action by the SEC comes just weeks after the Commission ordered London-based publishing and education giant Pearson to pay a $1 million fine for misleading investors about a 2018 data breach at the company.

Humane, a stealthy hardware and software startup co-founded by an ex-Apple designer and engineer, raises $100M

By Ingrid Lunden

A stealthy startup co-founded by a former senior designer from Apple and one of its ex-senior software engineers has picked up a significant round of funding to build out its business. Humane, which has ambitions to build a new class of consumer devices and technologies that stem from “a genuine collaboration of design and engineering” and will represent “the next shift between humans and computing”, has raised $100 million.

This is a Series B, and it’s coming from some very high profile backers. Tiger Global Management is leading the round, with SoftBank Group, BOND, Forerunner Ventures and Qualcomm Ventures also participating. Other investors in this Series B include Sam Altman, Lachy Groom, Kindred Ventures, Marc Benioff’s TIME Ventures, Valia Ventures, NEXT VENTŪRES, Plexo Capital and the legal firm Wilson Sonsini Goodrich & Rosati.

Humane has actually been around since 2017, but it closed/filed its Series A only last year: $30 million in September 2020 at a $150 million valuation, according to PitchBook. Prior to that, it had raised just under $12 million, with many of the investors in this current round backing Humane in those earlier fundraises, too.

The valuation for this Series B is not being disclosed, the company confirmed to me.

Given that Humane has not yet released any products, nor has said much at all about what it has up its sleeve; and given that hardware in general presents a lot of unique challenges and therefore is often seen as a risky bet (that old “hardware is hard” chestnut), you might be wondering how Humane, still in stealth, has attracted these backers.

Some of that attention possibly stems from the fact that the two co-founders, husband-and-wife team Imran Chaudhri and Bethany Bongiorno, are something of icons in their own right. Bongiorno, who is Humane’s CEO, had been the software engineering director at Apple. Chaudhri, who is Humane’s chairman and president, is Apple’s former director of design, where he worked for 20 years on some of its most seminal products — the iPhone, the iPad and the Mac. Both have dozens of patents credited to them from their time there, and they have picked up a few since then, too.

Those latest patents — plus the very extensive list of job openings listed on Humane’s otherwise quite sparse site — might be the closest clues we have for what the pair and their startup might be building.

One patent is for a “Wearable multimedia device and cloud computing platform with laser projection system”; another is for a “System and apparatus for fertility and hormonal cycle awareness.”

Meanwhile, the company currently has nearly 50 job openings listed, including engineers with camera and computer vision experience, hardware engineers, designers, and security experts, among many others. (One sign of where all that funding will be going.) There is already an impressive team of about 60 people at the company, which is another detail that attracted investors.

“The caliber of individuals working at Humane is incredibly impressive,” said Chase Coleman, Partner, Tiger Global, in a statement. “These are people who have built and shipped transformative products to billions of people around the world. What they are building is groundbreaking with the potential to become a standard for computing going forward.”

I’ve asked for more details on the product roadmap and the ethos behind the company, and who its customers might potentially be: other firms for whom it designs products, or end users directly?

For now, Bongiorno and Chaudhri seem to hint that part of what has motivated them to start this business was to reimagine what role technology might play in the next wave of innovation. It’s a question that many ask, but not many try to actually invest in finding the answer. For that alone, it’s worth watching Humane (if Humane lets us, that is: it’s still very much in stealth) to see what it does next.

“Humane is a place where people can truly innovate through a genuine collaboration of design and engineering,” the co-founders said in a joint statement. “We are an experience company that creates products for the benefit of people, crafting technology that puts people first — a more personal technology that goes beyond what we know today. We’re all waiting for something new, something that goes beyond the information age that we have all been living with. At Humane, we’re building the devices and the platform for what we call the intelligence age. We are committed to building a different type of company, founded on our values of trust, truth and joy. With the support of our partners, we will continue to scale the team with individuals who not only share our passion for revolutionizing the way we interact with computing, but also for how we build.”

Update: After publishing, I got a little more from Humane about its plans. Its aim is to build “technology that improves the human experience and is born of good intentions; products that put us back in touch with ourselves, each other, and the world around us; and experiences that are built on trust, with interactions that feel magical and bring joy.” It’s not a whole lot to go on, but more generally it’s an approach that seems to want to step away from the cycle we’re on today, and be more mindful and thoughtful. If they can execute on this, while still building rather than wholesale rejecting technology, they might be on to something.

Data scientists: don’t be afraid to explore new avenues

By Ram Iyer
Ilyes Kacher Contributor
Ilyes Kacher is a data scientist at autoRetouch, an AI-powered platform for bulk-editing product images online.

I’m a native French data scientist who cut his teeth as a research engineer in computer vision in Japan and later in my home country. Yet I’m writing from an unlikely computer vision hub: Stuttgart, Germany.

But I’m not working on German car technology, as one would expect. Instead, I found an incredible opportunity mid-pandemic in one of the most unexpected places: An ecommerce-focused, AI-driven, image-editing startup in Stuttgart focused on automating the digital imaging process across all retail products.

My experience in Japan taught me the difficulty of moving to a foreign country for work. In Japan, having a point of entry with a professional network can often be necessary. However, Europe has an advantage here thanks to its many accessible cities. Cities like Paris, London, and Berlin often offer diverse job opportunities while being known as hubs for some specialties.

While there has been an uptick in fully remote jobs thanks to the pandemic, extending the scope of your job search will provide more opportunities that match your interest.

Search for value in unlikely places, like retail

I’m working at the technology spin-off of a luxury retailer, applying my expertise to product images. Approaching it from a data scientist’s point of view, I immediately recognized the value of a novel application for a very large and established industry like retail.

Europe has some of the most storied retail brands in the world — especially for apparel and footwear. That rich experience provides an opportunity to work with billions of products and trillions of dollars in revenue that imaging technology can be applied to. The advantage of retail companies is a constant flow of images to process that provides a playing ground to generate revenue and possibly make an AI company profitable.

Another potential avenue to explore is independent divisions, typically within an R&D department. I found a significant number of AI startups working on a segment that isn’t profitable, simply due to the cost of research and the limited revenue from very niche clients.

Companies with data are companies with revenue potential

I was particularly attracted to this startup because of the potential access to data. Data by itself is quite expensive and a number of companies end up working with a finite set. Look for companies that directly engage at the B2B or B2C level, especially retail or digital platforms that affect front-end user interface.

Leveraging such customer engagement data benefits everyone. You can apply it towards further research and development on other solutions within the category, and your company can then work with other verticals on solving their pain points.

It also means there’s massive potential for revenue gains the more cross-segments of an audience the brand affects. My advice is to look for companies with data already stored in a manageable system for easy access. Such a system will be beneficial for research and development.

The challenge is that many companies haven’t yet introduced such a system, or they don’t have someone with the skills to properly utilize it. If you find a company isn’t willing to share deep insights during the courtship process, or it hasn’t implemented such a system, look at the opportunity to introduce such data-focused offerings yourself.

In Europe, the best bets involve creating automation processes

I have a sweet spot for early-stage companies that give you the opportunity to create processes and core systems. The company I work for was still in its early days when I started, and it was working towards creating scalable technology for a specific industry. The questions that the team was tasked with solving were already being solved, but there were numerous processes that still had to be put into place to solve a myriad of other issues.

Our year-long efforts to automate bulk image editing taught me that as long as the AI you’re building learns to run independently across multiple variables simultaneously (multiple images and workflows), you’re developing a technology that does what established brands haven’t been able to do. In Europe, there are very few companies doing this and they are hungry for talent who can.

So don’t be afraid of a little culture shock and take the leap.
