On average, men and women speak roughly 15,000 words per day. We call our friends and family, log into Zoom for meetings with our colleagues, discuss our days with our loved ones, or if you’re like me, you argue with the ref about a bad call they made in the playoffs.
Hospitality, travel, IoT and the auto industry are all on the cusp of leveling up voice assistant adoption and the monetization of voice. The global voice and speech recognition market is expected to grow at a CAGR of 17.2% from 2019 to reach $26.8 billion by 2025, according to Meticulous Research. Companies like Amazon and Apple will accelerate this growth as they leverage ambient computing capabilities, which will continue to push voice forward as a primary interface.
As voice technologies become ubiquitous, companies are turning their focus to the value of the data latent in these new channels. Microsoft’s recent acquisition of Nuance is not just about achieving better NLP or voice assistant technology, it’s also about the trove of healthcare data that the conversational AI has collected.
Our voice technologies have not been engineered to confront the messiness of the real world or the cacophony of our actual lives.
Google has monetized every click of your mouse, and the same thing is now happening with voice. Advertisers have found that speak-through conversion rates are higher than click-through conversion rates. Brands need to begin developing voice strategies to reach customers — or risk being left behind.
Voice tech adoption was already on the rise, but with most of the world under lockdown protocol during the COVID-19 pandemic, adoption is set to skyrocket. Nearly 40% of internet users in the U.S. use smart speakers at least monthly in 2020, according to Insider Intelligence.
Yet, there are several fundamental technology barriers keeping us from reaching the full potential of the technology.
By the end of 2020, worldwide shipments of wearable devices rose 27.2% to 153.5 million from a year earlier, but despite all the progress made in voice technologies and their integration in a plethora of end-user devices, they are still largely limited to simple tasks. That is finally starting to change as consumers demand more from these interactions, and voice becomes a more essential interface.
In 2018, in-car shoppers spent $230 billion to order food, coffee, groceries or items to pick up at a store. The auto industry is one of the earliest adopters of voice AI, but in order to really capture voice technology’s true potential, it needs to become a more seamless, truly hands-free experience. Ambient car noise still muddies the signal enough that it keeps users tethered to using their phones.
Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast, where we unpack the numbers behind the headlines.
Natasha and Alex and Grace and Chris gathered to dig through the week’s biggest happenings, including some news of our own. As a note, Equity’s Monday episode will be landing next Tuesday, thanks to a national holiday here in the United States. And we have something special planned for Wednesday, so stay tuned.
Ok! Here’s the rundown from the show:
That’s a wrap from us for the week! Keep your head atop your shoulders and have a great weekend!
It’s been a long time coming but Facebook is finally feeling some heat from Europe’s much trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) fine for WhatsApp.
The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), which began being applied in May 2018.
Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator selected the parameters of the investigation itself, choosing to fix on an audit of WhatsApp’s ‘transparency’ obligations.
A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.
The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.
Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated GDPR being applied).
In sum, the DPC found a range of transparency infringements by WhatsApp — spanning articles 5(1)(a); 12, 13 and 14 of the GDPR.
In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline for making all the ordered changes.
In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:
“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.”
It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.
The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.
So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.
…system to add years until this fine will actually be paid – but at least it's a start… 10k cases per year to go!
— Max Schrems (@maxschrems) September 2, 2021
Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it knuckle-tapped the social network over a historical security breach with a fine of $550k.
WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.
Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.
And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.
The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, who are also of course Internet companies.
Under the EU’s flagship data protection regulation, decisions on cross-border cases require agreement from all affected regulators across the 27 Member States. So while the GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU), objections can be raised to that lead supervisory authority’s conclusions (and any proposed sanctions) — as has happened here, in this WhatsApp case.
Ireland originally proposed a far more low-ball penalty of up to €50M for WhatsApp. However other EU regulators objected to the draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.
Through that (admittedly rather painful) joint working, the DPC was required to increase the size of the fine issued to WhatsApp — in a mirror of what happened with its draft Twitter decision, where the DPC had also suggested an even tinier penalty in the first instance.
While there is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it has taken well over half a year to hash out all the disputes about WhatsApp’s lossy hashing and so forth — the fact that ‘corrections’ to its decisions and conclusions can land, if not jointly agreed then at least arriving via a consensus pushed through by the EDPB, is a sign that the process, while slow and creaky, is working.
Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations — with some accusing the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (by those issues it doesn’t open an enquiry into or complaints it simply drops or ignores), with its loudest critics arguing it’s therefore still a major bottleneck on effective enforcement of data protection rights across the EU. And the associated conclusion for that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.
But while it’s true that a $267M penalty is still the equivalent of a parking ticket for Facebook, orders to change how such adtech giants are able to process people’s information have the potential to be a far more significant correction on problematic business models. Again, though, time will be needed to tell.
In a statement on the WhatsApp decision today, noyb, the privacy advocacy group founded by long-time European privacy campaigner Max Schrems, said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”
Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.
In further remarks, Schrems and noyb said: “WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”
In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.
The age appropriate design code came into force on September 2 last year. However, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance to give organizations time to adapt their services.
But from today it expects the standards of the code to be met.
Services where the code applies can include connected toys and games and edtech but also online retail and for-profit online services such as social media and video sharing platforms which have a strong pull for minors.
Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy hostile defaults).
The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.
Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.
The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.
The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.
The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.
“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”
It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronics Communications Regulation].”
In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”
“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”
“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.
The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.
The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.
In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.
In July, Instagram said it would default teens to private accounts — doing so for under-18s in certain countries, which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video sharing platform, YouTube.
A few days later, TikTok also said it would add more privacy protections for teens — though it had also made earlier changes tightening privacy defaults for under-18s.
Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud, and an opt-in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.
The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.
And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.
The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush, safety-first stance toward users, also with a big focus on kids (and there it’s being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’ COPPA, for example).
In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”
“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.
And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection-focused recommendations this June (which also, for example, encourage app makers to add parental controls with the clear caveat that such tools must “respect the child’s privacy and best interests”).
The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.
Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.
Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn — so it will be providing further steerage to organizations in scope of the code on how to tackle that tricky piece, although it’s still not clear how hard a requirement the ICO will support, with Bonner suggesting it could mean actually “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.
An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.
But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the ORG has warned).
The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.”
At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.
For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.
That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.
So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less — in the name of keeping them ‘safe’. Which is quite a contradiction vs the data minimization push of the design code.
The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content or pro-suicide postings, or cyber bullying and CSAM.
The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.
Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.
Complying with the ICO’s design standards may therefore actually be the easy bit.
The U.S. Securities and Exchange Commission has fined several brokerage firms a total of $750,000 for exposing the sensitive personally identifiable information of thousands of customers and clients after hackers took over employee email accounts.
A total of eight entities belonging to three companies have been sanctioned by the SEC, including Cetera (Advisor Networks, Investment Services, Financial Specialists, Advisors and Investment Advisers), Cambridge Investment Research (Investment Research and Investment Research Advisors) and KMS Financial Services.
In a press release, the SEC announced that it had sanctioned the firms for failures in their cybersecurity policies and procedures that allowed hackers to gain unauthorized access to cloud-based email accounts, exposing the personal information of thousands of customers and clients at each firm.
In the case of Cetera, the SEC said that cloud-based email accounts of more than 60 employees were infiltrated by unauthorized third parties for more than three years, exposing at least 4,388 clients’ personal information.
The order states that none of the accounts featured the protections required by Cetera’s policies, and the SEC also charged two of the Cetera entities with sending breach notifications to clients containing “misleading language suggesting that the notifications were issued much sooner than they actually were after discovery of the incidents.”
The SEC’s order against Cambridge concludes that the personal information exposure of at least 2,177 Cambridge customers and clients was the result of lax cybersecurity practices at the firm.
“Although Cambridge discovered the first email account takeover in January 2018, it failed to adopt and implement firm-wide enhanced security measures for cloud-based email accounts of its representatives until 2021, resulting in the exposure and potential exposure of additional customer and client records and information,” the SEC said.
The order against KMS is similar; the SEC’s order states that the data of almost 5,000 customers and clients were exposed as a result of the company’s failure to adopt written policies and procedures requiring additional firm-wide security measures until May 2020.
“Investment advisers and broker-dealers must fulfill their obligations concerning the protection of customer information,” said Kristina Littman, chief of the SEC Enforcement Division’s Cyber Unit. “It is not enough to write a policy requiring enhanced security measures if those requirements are not implemented or are only partially implemented, especially in the face of known attacks.”
All of the parties agreed to resolve the charges and to not commit future violations of the charged provisions, without admitting or denying the SEC’s findings. As part of the settlements, Cetera will pay a penalty of $300,000, while Cambridge and KMS will pay fines of $250,000 and $200,000 respectively.
Cambridge told TechCrunch that it does not comment on regulatory matters, but said it maintains a comprehensive information security group and procedures to ensure clients’ accounts are fully protected. Cetera and KMS have yet to respond.
This latest action by the SEC comes just weeks after the Commission ordered London-based publishing and education giant Pearson to pay a $1 million fine for misleading investors about a 2018 data breach at the company.
A stealthy startup co-founded by a former senior designer from Apple and one of its ex-senior software engineers has picked up a significant round of funding to build out its business. Humane, which has ambitions to build a new class of consumer devices and technologies that stem from “a genuine collaboration of design and engineering” and will represent “the next shift between humans and computing”, has raised $100 million.
This is a Series B, and it’s coming from some very high profile backers. Tiger Global Management is leading the round, with SoftBank Group, BOND, Forerunner Ventures and Qualcomm Ventures also participating. Other investors in this Series B include Sam Altman, Lachy Groom, Kindred Ventures, Marc Benioff’s TIME Ventures, Valia Ventures, NEXT VENTŪRES, Plexo Capital and the legal firm Wilson Sonsini Goodrich & Rosati.
Humane has actually been around since 2017, but it closed its Series A only last year: $30 million in September 2020 at a $150 million valuation, according to PitchBook. Previous to that, it had raised just under $12 million, with many of the investors in this current round backing Humane in those earlier fundraises, too.
The valuation with this Series B is not being disclosed, the company confirmed to me.
Given that Humane has not yet released any products, nor said much at all about what it has up its sleeve — and given that hardware in general presents a lot of unique challenges and is therefore often seen as a risky bet (that old “hardware is hard” chestnut) — you might be wondering how Humane, still in stealth, has attracted these backers.
Some of that attention possibly stems from the fact that the two co-founders, husband-and-wife team Imran Chaudhri and Bethany Bongiorno, are icons in their own right. Bongiorno, who is Humane’s CEO, had been the software engineering director at Apple. Chaudhri, who is Humane’s chairman and president, is Apple’s former director of design, where he worked for 20 years on some of its most seminal products — the iPhone, the iPad and the Mac. Both have dozens of patents credited to them from their time there, and they have picked up a few since then, too.
Those latest patents — plus the very extensive list of job openings listed on Humane’s otherwise quite sparse site — might be the closest clues we have for what the pair and their startup might be building.
One patent is for a “Wearable multimedia device and cloud computing platform with laser projection system”; another is for a “System and apparatus for fertility and hormonal cycle awareness.”
Meanwhile, the company currently has nearly 50 job openings listed, including engineers with camera and computer vision experience, hardware engineers, designers, and security experts, among many others. (One sign of where all that funding will be going.) There is already an impressive team of about 60 people at the company, which is another detail that attracted investors.
“The caliber of individuals working at Humane is incredibly impressive,” said Chase Coleman, Partner, Tiger Global, in a statement. “These are people who have built and shipped transformative products to billions of people around the world. What they are building is groundbreaking with the potential to become a standard for computing going forward.”
I’ve asked for more details on the company’s product roadmap and ethos behind the company, and who its customers might potentially be: other firms for whom it designs products, or end users directly?
For now, Bongiorno and Chaudhri seem to hint that part of what has motivated them to start this business was to reimagine what role technology might play in the next wave of innovation. It’s a question that many ask, but not many try to actually invest in finding the answer. For that alone, it’s worth watching Humane (if Humane lets us, that is: it’s still very much in stealth) to see what it does next.
“Humane is a place where people can truly innovate through a genuine collaboration of design and engineering,” the co-founders said in a joint statement. “We are an experience company that creates products for the benefit of people, crafting technology that puts people first — a more personal technology that goes beyond what we know today. We’re all waiting for something new, something that goes beyond the information age that we have all been living with. At Humane, we’re building the devices and the platform for what we call the intelligence age. We are committed to building a different type of company, founded on our values of trust, truth and joy. With the support of our partners, we will continue to scale the team with individuals who not only share our passion for revolutionizing the way we interact with computing, but also for how we build.”
Update: After publishing, I got a little more from Humane about its plans. Its aim is to build “technology that improves the human experience and is born of good intentions; products that put us back in touch with ourselves, each other, and the world around us; and experiences that are built on trust, with interactions that feel magical and bring joy.” It’s not a whole lot to go on, but more generally it’s an approach that seems to want to step away from the cycle we’re on today, and be more mindful and thoughtful. If they can execute on this, while still building rather than wholesale rejecting technology, they might be on to something.
I’m a native French data scientist who cut his teeth as a research engineer in computer vision in Japan and later in my home country. Yet I’m writing from an unlikely computer vision hub: Stuttgart, Germany.
But I’m not working on German car technology, as one would expect. Instead, I found an incredible opportunity mid-pandemic in one of the most unexpected places: An ecommerce-focused, AI-driven, image-editing startup in Stuttgart focused on automating the digital imaging process across all retail products.
My experience in Japan taught me the difficulty of moving to a foreign country for work. In Japan, having a point of entry with a professional network can often be necessary. However, Europe has an advantage here thanks to its many accessible cities. Cities like Paris, London, and Berlin often offer diverse job opportunities while being known as hubs for some specialties.
While there has been an uptick in fully remote jobs thanks to the pandemic, extending the scope of your job search will provide more opportunities that match your interest.
I’m working at the technology spin-off of a luxury retailer, applying my expertise to product images. Approaching it from a data scientist’s point of view, I immediately recognized the value of a novel application for a very large and established industry like retail.
Europe has some of the most storied retail brands in the world — especially for apparel and footwear. That rich experience creates an opportunity to apply imaging technology to billions of products representing trillions of dollars in revenue. The advantage of retail companies is a constant flow of images to process, which provides a playing ground to generate revenue and possibly make an AI company profitable.
Another potential avenue to explore is independent divisions, typically within an R&D department. I found a significant number of AI startups working on a segment that isn’t profitable, simply because the cost of research outweighs the revenue from very niche clients.
I was particularly attracted to this startup because of the potential access to data. Data by itself is quite expensive and a number of companies end up working with a finite set. Look for companies that directly engage at the B2B or B2C level, especially retail or digital platforms that affect front-end user interface.
Leveraging such customer engagement data benefits everyone. You can apply it towards further research and development on other solutions within the category, and your company can then work with other verticals on solving their pain points.
It also means there’s massive potential for revenue gains the more cross-segments of an audience the brand affects. My advice is to look for companies with data already stored in a manageable system for easy access. Such a system will be beneficial for research and development.
The challenge is that many companies haven’t yet introduced such a system, or they don’t have someone with the skills to properly utilize it. If you find that a company isn’t willing to share deep insights during the courtship process, or hasn’t implemented such a system, look at it as an opportunity to introduce data-focused offerings.
I have a sweet spot for early-stage companies that give you the opportunity to create processes and core systems. The company I work for was still in its early days when I started, and it was working towards creating scalable technology for a specific industry. The questions that the team was tasked with solving were already being solved, but there were numerous processes that still had to be put into place to solve a myriad of other issues.
Our year-long efforts to automate bulk image editing taught me that as long as the AI you’re building learns to run independently across multiple variables simultaneously (multiple images and workflows), you’re developing a technology that does what established brands haven’t been able to do. In Europe, there are very few companies doing this and they are hungry for talent who can.
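The pattern described here, independent jobs fanned out across many images and workflows at once, can be sketched roughly as follows. The workflow steps (background removal, retouching) and all function names are hypothetical stand-ins, not the startup’s actual pipeline:

```python
# Rough sketch of fanning image-editing jobs across workflows in
# parallel. Each job runs its own chain of steps independently.
from concurrent.futures import ThreadPoolExecutor

def remove_background(image):
    return f"{image}:bg_removed"   # placeholder for a real model call

def retouch(image):
    return f"{image}:retouched"    # placeholder for a real model call

# Hypothetical per-vertical workflows: each is just a chain of steps.
WORKFLOWS = {
    "apparel":  [remove_background, retouch],
    "footwear": [remove_background],
}

def process(job):
    """Run one image through every step of its workflow."""
    image, workflow = job
    for step in WORKFLOWS[workflow]:
        image = step(image)
    return image

jobs = [("shirt_01.jpg", "apparel"), ("boot_07.jpg", "footwear")]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, jobs))
```

Because no job shares state with another, scaling to more images or adding a new workflow is just a matter of extending `jobs` or `WORKFLOWS`.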
So don’t be afraid of a little culture shock and take the leap.
Cloud security startup Monad, which offers a platform for extracting and connecting data from various security tools, has launched from stealth with $17 million in Series A funding led by Index Ventures.
Monad was founded on the belief that enterprise cybersecurity is a growing data management challenge, as organizations try to understand and interpret the masses of information that’s siloed within disconnected logs and databases. Once an organization has extracted data from their security tools, Monad’s Security Data Platform enables them to centralize that data within a data warehouse of choice, and normalize and enrich the data so that security teams have the insights they need to secure their systems and data effectively.
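Monad hasn’t published its API, but the pattern it describes, extracting events from siloed tools and normalizing them into a common schema before loading a warehouse, can be sketched like this. The tool names, field mappings and schema below are illustrative assumptions, not Monad’s actual interface:

```python
# Illustrative sketch of a security-data normalization pipeline:
# events arrive from different tools with different field names and
# are mapped onto one common schema before warehouse loading.
# The tool names and field mappings below are hypothetical.
FIELD_MAPS = {
    "edr_tool":  {"ts": "timestamp", "host": "hostname", "sev": "severity"},
    "siem_tool": {"event_time": "timestamp", "machine": "hostname", "level": "severity"},
}

def normalize(source, event):
    """Map a raw event from `source` onto the common schema."""
    mapping = FIELD_MAPS[source]
    normalized = {common: event[raw] for raw, common in mapping.items() if raw in event}
    normalized["source"] = source  # enrich: keep provenance for later queries
    return normalized

raw_edr = {"ts": "2021-09-01T12:00:00Z", "host": "web-01", "sev": "high"}
raw_siem = {"event_time": "2021-09-01T12:00:05Z", "machine": "web-01", "level": "3"}

rows = [normalize("edr_tool", raw_edr), normalize("siem_tool", raw_siem)]
# Both rows now share the same columns, so they can be inserted into
# any warehouse table with (timestamp, hostname, severity, source).
```

Once events share one schema, cross-tool questions ("show every alert touching web-01") become ordinary warehouse queries rather than per-tool exports.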
“Security is fundamentally a big data problem,” said Christian Almenar, CEO and co-founder of Monad. “Customers are often unable to access their security data in the streamlined manner that DevOps and cloud engineering teams need to build their apps quickly while also addressing their most pressing security and compliance challenges. We founded Monad to solve this security data challenge and liberate customers’ security data from siloed tools to make it accessible via any data warehouse of choice.”
The startup’s Series A funding round, which was also backed by Sequoia Capital, brings its total amount of investment raised to $19 million and comes 12 months after its Sequoia-led seed round. The funds will enable Monad to scale its development efforts for its security data cloud platform, the startup said.
Monad was founded in May 2020 by security veterans Christian Almenar and Jacolon Walker. Almenar previously co-founded serverless security startup Intrinsic which was acquired by VMware in 2019, while Walker served as CISO and security engineer at OpenDoor, Collective Health, and Palantir.
The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department of Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.
Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.
He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal, Edwards publicly announced that he was deleting his account with the social media platform, accusing Facebook of not complying with the country’s privacy laws.
An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.
If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.
Dial back the clock five years and prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge in data protection post-Brexit.
But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.
For months, now, ministers have been eyeing how to rework the UK’s current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn’t saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.
Dowden has said that the emergency data-sharing which was waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.
A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.
The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.
Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change that he said he wants to make which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.
Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.
Oliver Dowden, the UK Minister for Digital, Culture, Media and Sport, says that the UK will break away from GDPR, and will no longer require cookie warnings, other than those posing a 'high risk'.https://t.co/2ucnppHrIm pic.twitter.com/RRUdpJumYa
— dan barker (@danbarker) August 25, 2021
“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.
The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it intends to reduce the UK’s currently world class privacy standards and replace them with worse protections for data.
If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.
It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.
We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens’ privacy rights behind flashy, distracting rhetoric about ‘taking action against Big Tech’. But time will tell.
Data protection experts are already warning of a regulatory stooge.
The Telegraph, meanwhile, suggests Edwards is seen by government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.
In a particularly eyebrow raising detail, the newspaper goes on to report that government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.
All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.
In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”
The lurking iceberg for government is of course that if it wades in and rips up a carefully balanced, gold standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its (or the market’s) choosing — it’s setting the UK up for a post-Brexit future of domestic data misuse scandals.
You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the private sector for adhering to lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…
UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.
Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.
The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.
This is because its current data adequacy deal with the bloc — which allows EU citizens’ data to continue flowing freely to the UK — is precariously placed: It was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR.
So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy.
Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”. Moreover, the adequacy deal is also the first with a baked-in sunset clause — meaning it will automatically expire in four years.
So even if the Commission avoids taking proactive action over slipping privacy standards in the UK there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.
The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.
Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships” — saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre and Colombia.
Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.
“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.
As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is, given that UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports).
So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.
Everyone hates cookie banners, sure, but that’s a case for strengthening, not weakening, people’s privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns so that Internet users don’t constantly have to affirm they want their information protected. Instead the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free-for-all on people’s data.
Marketing automation has usually focused on driving sales, mainly using past purchase or late funnel behavior (e.g., paid search) as a predictor of an imminent purchase. While effective at boosting sales numbers, this widely implemented strategy can result in a disservice to brands and industries that adopt it, as it promotes the perpetual devaluation of goods or services. Narrowing a brand’s focus only to aspects linked to conversions risks stripping the customer experience of key components that lay the groundwork for long-term success.
We live in a world rich with data, and insights are growing more vibrant every day. With this in mind, companies and advertisers can strategically weave together all the data they collect during the customer experience. This enables them to understand every inference available during customer interactions and learn what benefits the customer most at a given time.
But by focusing exclusively on data collected from customers, brands risk falling subject to the law of diminishing returns. Even companies with meaningful consumer interactions or rich service offerings struggle to gain impactful contextual insights. Only by harnessing a broader dataset can we understand how people become customers in the first place, what makes them more or less likely to purchase again and how developments in society impact the growth or struggle a brand will experience.
Here’s a look at how we can achieve a more complete picture of current and future customers.
A critical component in re-imagining customer experience as a relationship is recognizing that brands often don’t focus enough on consumers’ wider needs and concerns.
Over the past several years, almost every industry has capitalized on the opportunity data-driven marketing presents, inching closer to the “holy grail” of real-time, direct and personalized engagements. Yet, the evolving toolset encouraged brands to focus on end-of-the-funnel initiatives, jeopardizing what really impacts a business’ longevity: relationships.
While past purchase or late-funnel behavior data does provide value and is useful in identifying habit changes or actual needs, it is relatively surface level and doesn’t offer insight into consumers’ future behavior or what led them to a specific purchase in the first place.
By incorporating AI, brands can successfully engage with their audiences in a more holistic, helpful and genuine way. Technologies to discern not just the content of language (e.g., the keywords) but its meaning as well, open up possibilities to better infer consumer interest and intentions. In turn, brands can tune consumer interactions to generate satisfaction and delight, and ultimately accrue stronger insights for future use.
At a time when remote work, cybersecurity attacks and increased privacy and compliance requirements threaten a company’s data, more companies are collecting and storing their observability data, but are either locked in with vendors or have difficulty accessing the data.
Enter Cribl. The San Francisco-based company is developing an “open ecosystem of data” for enterprises that utilizes unified data pipelines, called “observability pipelines,” to parse and route any type of data that flows through a corporate IT system. Users can then choose their own analytics tools and storage destinations like Splunk, Datadog and Exabeam, but without becoming dependent on a vendor.
The company announced Wednesday a $200 million round of Series C funding to value Cribl at $1.5 billion, according to a source close to the company. Greylock and Redpoint Ventures co-led the round and were joined by new investor IVP, existing investors Sequoia and CRV and strategic investment from Citi Ventures and CrowdStrike. The new capital infusion gives Cribl a total of $254 million in funding since the company was started in 2017, Cribl co-founder and CEO Clint Sharp told TechCrunch.
Sharp did not discuss the valuation; however, he believes that the round is “validation that the observability pipeline category is legit.” Data is growing at a compound annual growth rate of 25%, and organizations are collecting five times more data today than they did 10 years ago, he explained.
“Ultimately, they want to ask and answer questions, especially for IT and security people,” Sharp added. “When Zoom sends data on who started a phone call, that might be data I need to know so I know who is on the call from a security perspective and who they are communicating with. Also, who is sending files to whom and what machines are communicating together in case there is a malicious actor. We can also find out who is having a bad experience with the system and what resources they can access to try and troubleshoot the problem.”
Cribl also enables users to choose how they want to store their data, which is different from competitors that often lock companies into using only their products. Instead, customers can buy the best products from different categories and they will all talk to each other through Cribl, Sharp said.
Though Cribl is developing a pipeline for data, Sharp sees it more as an “observability lake,” as more companies have differing data storage needs. He explains that the lake is where all of the data will go that doesn’t need to go into an existing storage solution. The pipelines will send the data to specific tools and then collect the data, and what doesn’t fit will go back into the lake so companies have it to go back to later. Companies can keep the data for longer and more cost effectively.
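Cribl’s actual configuration isn’t shown here, but the routing behavior described, per-tool routes with an overflow into the lake, reduces to a simple dispatch loop. The predicates and destination names below are illustrative assumptions, not Cribl’s product:

```python
# Toy sketch of an observability pipeline with a catch-all "lake".
# Each route pairs a predicate with a destination; events matching
# no route fall through to cheap long-term storage for later use.
routes = [
    (lambda e: e.get("type") == "security", "siem"),        # e.g. Splunk/Exabeam
    (lambda e: e.get("type") == "metric",   "metrics_db"),  # e.g. Datadog
]

def route_event(event):
    """Return the destination for an event, defaulting to the lake."""
    for predicate, destination in routes:
        if predicate(event):
            return destination
    return "observability_lake"  # overflow: keep it cheaply for later
```

The point of the catch-all is that nothing is dropped: data that no analytics tool needs today stays queryable in the lake if a question arises months later.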
Cribl said it is seven times more efficient at processing event data and boasts a customer list that includes Whole Foods, Vodafone, FINRA, Fannie Mae and Cox Automotive.
Sharp went after additional funding after seeing huge traction in its existing customer base, saying that “when you see that kind of traction, you want to keep doubling down.” His aim is to have a presence in every North American city and in Europe, to continue launching new products and growing the engineering team.
Up next, the company is focusing on go-to-market and engineering growth. Its headcount is 150 currently, and Sharp expects to grow that to 250 by the end of the year.
Over the last fiscal year, Cribl grew its revenue 293%, and Sharp expects that same trajectory for this year. The company is now at a growth stage, and with the new investment, he believes Cribl is the “future leader in observability.”
“This is a great investment for us, and every dollar, we believe, is going to create an outsized return as we are the only commercial company in this space,” he added.
Scott Raney, managing director at Redpoint Ventures, said his firm is a big enterprise investor in software, particularly in companies that help organizations leverage data to protect themselves, a sweet spot that Cribl falls into.
He feels Sharp, who came from Splunk, is leading a team that has accomplished a lot, has a vision and a handle on the business, and knows the market well. Where Splunk is capturing the machine data and using its systems to extract the data, Cribl is doing something similar in directing the data where it needs to go, while also enabling companies to utilize multiple vendors and build apps to sit on top of its infrastructure.
“Cribl is adding opportunity by enriching the data flowing through, and the benefits are going to be meaningful in cost reduction,” Raney said. “The attitude out there is to put data in cheaper places, and afford more flexibility to extract data. Step one is to make that transition, and step two is how to drive the data sitting there. Cribl is doing something that will go from being a big business to a legacy company 30 years from now.”