With an increasing number of enterprise systems, growing teams, the continued spread of the web and multiple digital initiatives, companies of all sizes are creating loads of data every day. This data contains excellent business insights and immense opportunities, but its sheer volume has made it impossible for companies to derive actionable insights from it consistently.
According to Verified Market Research, the analytics-as-a-service (AaaS) market is expected to grow to $101.29 billion by 2026. Organizations that have not started on their analytics journey or are spending scarce data engineer resources to resolve issues with analytics implementations are not identifying actionable data insights. Through AaaS, managed services providers (MSPs) can help organizations get started on their analytics journey immediately without extravagant capital investment.
MSPs can take ownership of the company’s immediate data analytics needs, resolve ongoing challenges and integrate new data sources to manage dashboard visualizations, reporting and predictive modeling — enabling companies to make data-driven decisions every day.
AaaS can come bundled with multiple business-intelligence-related services. Primarily, it includes (1) services for data warehouses; (2) services for visualizations and reports; and (3) services for predictive analytics, artificial intelligence (AI) and machine learning (ML). When a company partners with an MSP for analytics as a service, it can tap into business intelligence easily, instantly and at a lower cost of ownership than doing it in-house. This empowers the enterprise to focus on delivering better customer experiences, make decisions unencumbered and build data-driven strategies.
In today’s world, where customers value experiences over transactions, AaaS helps businesses dig deeper into their customers’ psyche and tap insights to build long-term winning strategies. It also enables enterprises to forecast and predict business trends by looking at their data, and allows employees at every level to make informed decisions.
In the early 2000s, Jeff Bezos gave a seminal TED Talk titled “The Electricity Metaphor for the Web’s Future.” In it, he argued that the internet will enable innovation on the same scale that electricity did.
We are at a similar inflection point in healthcare, with the recent movement toward data transparency birthing a new generation of innovation and startups.
Those who follow the space closely may have noticed that there are twin struggles taking place: a push for more transparency on provider and payer data, including anonymous patient data, and another for strict privacy protection for personal patient data. What’s the main difference?
This sector is still somewhat nascent — we are in the first wave of innovation, with much more to come.
Anonymized data is much more freely available, while personal data is being locked even tighter (as it should be) due to regulations like GDPR, CCPA and their equivalents around the world.
The former trend is enabling a host of new vendors and services that will ultimately make healthcare better and more transparent for all of us.
These new companies could not have existed five years ago. The Affordable Care Act was the first step toward making anonymized data more available. It required healthcare institutions (such as hospitals and healthcare systems) to publish data on costs and outcomes. This included the release of detailed data on providers.
Later legislation required biotech and pharma companies to disclose monies paid to research partners. And every physician in the U.S. is now required to have a National Provider Identifier (NPI), part of a comprehensive public database of providers.
All of this allowed the creation of new types of companies that give both patients and providers more control over their data. Here are some key examples of how.
This is a key benefit of patients’ newfound access to health data. Think of how often, as a patient, you’ve seen a provider who wasn’t aware of a treatment or test you’d had elsewhere. Often you end up repeating a test simply because the provider has no record of it.
Why can we see all our bank, credit card and brokerage data on our phones instantaneously in one app, yet walk into a doctor’s office blind to our healthcare records, diagnoses and prescriptions? Our health status should be as accessible as our checking account balance.
The liberation of financial data enabled by startups like Plaid is beginning to happen with healthcare data, which will have an even more profound impact on society; it will save and extend lives. This accessibility is quickly approaching.
As early investors in Quovo and PatientPing, two pioneering companies in financial and healthcare data, respectively, it’s evident to us that the winners of the healthcare data transformation will look different than they did with financial data, even as we head toward a similar end state.
For over a decade, government agencies and consumers have pushed for this liberation.
In 2009, the Health Information Technology for Economic and Clinical Health (HITECH) Act gave the first big industry push, catalyzing a wave of digitization through electronic health records (EHR). Today, over 98% of medical records are digitized. This market is dominated by multi-billion-dollar vendors like Epic, Cerner and Allscripts, which control 70% of patient records. However, these giant vendors have yet to make these records easily accessible.
A second wave of regulation has begun to address the problem of trapped data to make EHRs more interoperable and valuable. Agencies within the Department of Health and Human Services have mandated data sharing among payers and providers using a common standard, the Fast Healthcare Interoperability Resources (FHIR) protocol.
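In practice, FHIR models clinical data as typed “resources” exchanged as JSON (or XML) over a REST API. Below is a minimal sketch of what a FHIR Patient resource looks like and how a client might read one field; the field names follow the FHIR specification, while the values echo the spec’s own illustrative example patient rather than any real record.

```python
import json

# A minimal FHIR "Patient" resource, as exchanged in JSON over the
# FHIR REST API. Field names follow the FHIR specification; the
# values are purely illustrative.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(patient_json)
family = patient["name"][0]["family"]
print(patient["resourceType"], family)  # Patient Chalmers
```

Because every payer and provider exposes the same resource shapes, a client written against one FHIR endpoint can, in principle, read records from any other.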
Image Credits: F-Prime Capital
This push for greater data liquidity coincides with demand from consumers for better information about cost and quality. Employers have been steadily shifting a greater share of healthcare expenses to consumers through high-deductible health plans – from 30% in 2012 to 51% in 2018. As consumers pay for more of the costs, they care more about the value of different health options, yet are unable to make those decisions without real-time access to cost and clinical data.
Image Credits: F-Prime Capital
Tech startups have an opportunity to ease the transmission of healthcare data and address the push of regulation and consumer demands. The lessons from fintech make it tempting to assume that a Plaid for healthcare data would be enough to address all of the challenges within healthcare, but it is not the right model. Plaid’s aggregator model benefited from a relatively high concentration of banks, a limited number of data types and low barriers to data access.
By contrast, healthcare data is scattered across tens of thousands of healthcare providers, stored in multiple data formats and systems per provider, and is rarely accessed by patients directly. Many people log into their bank apps frequently, but few log into their healthcare provider portals, if they even know one exists.
HIPAA regulations and strict patient consent requirements also meaningfully increase friction to data access and sharing. Financial data serves mostly one-to-one use cases, while healthcare data is a many-to-many problem. A single patient’s data is spread across many doctors and facilities and is needed by just as many for care coordination.
Because of this landscape, winning healthcare technology companies will need to build around four propositions:
Australian security software house Click Studios has told customers not to post emails sent by the company about its data breach, in which malicious hackers pushed a rogue update to its flagship enterprise password manager, Passwordstate, to steal customer passwords.
Last week, the company told customers to “commence resetting all passwords” stored in its flagship password manager after the hackers pushed the malicious update to customers over a 28-hour window between April 20-22. The update was designed to contact the attacker’s servers and retrieve malware that would steal the password manager’s contents and send them back to the attackers.
In an email to customers, Click Studios did not say how the attackers compromised the password manager’s update feature, but included a link to a security fix.
But news of the breach only became public after Danish cybersecurity firm CSIS Group published a blog post with details of the attack hours after Click Studios emailed its customers.
Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.
In a Wednesday advisory on its website, Click Studios said that customers are “requested not to post Click Studios correspondence on Social Media.” It adds: “It is expected that the bad actor is actively monitoring Social Media, looking for information they can use to their advantage, for related attacks.”
“It is expected the bad actor is actively monitoring social media for information on the compromise and exploit. It is important customers do not post information on Social Media that can be used by the bad actor. This has happened with phishing emails being sent that replicate Click Studios email content,” the company said.
Besides a handful of advisories published by the company since the breach was discovered, the company has refused to comment or respond to questions.
It’s also not clear if the company has disclosed the breach to U.S. and EU authorities where the company has customers, but where data breach notification rules obligate companies to disclose incidents. Companies can be fined up to 4% of their annual global revenue for falling foul of Europe’s GDPR rules.
Click Studios chief executive Mark Sandford has not responded to repeated requests (from TechCrunch) for comment. Instead, TechCrunch received the same canned autoresponse from the company’s support email saying that the company’s staff are “focused only on assisting customers technically.”
TechCrunch emailed Sandford again on Thursday for comment on the latest advisory, but did not hear back.
Growing up, did you ever wonder how many chairs you’d have to stack to reach the sky?
No? I guess that’s just me then.
As a child, I always asked a lot of “how many/much” questions. Some were legitimate (“How much is 1 USD in VND?”); some were absurd (“How tall is the sky and can it be measured in chairs?”). So far, I’ve managed to maintain my obnoxious statistical probing habit without making any mortal enemies in my 20s. As it turns out, that habit comes with its perks when working in product.
My first job as a product designer was at a small but energetic fintech startup whose engineers also dabbled in pulling data. I constantly bothered them with questions like, “How many exports did we have from that last feature launched?” and “How many admins created at least one rule on this page?” I was curious about quantitative analysis but did not know where to start.
I knew I wasn’t the only one. Even then, there was a growing need for basic data literacy in the tech industry, and it’s only getting more taxing by the year. Words like “data-driven,” “data-informed” and “data-powered” increasingly litter every tech organization’s product briefs. But where does this data come from? Who has access to it? How might I start digging into it myself? How might I leverage this data in my day-to-day design once I get my hands on it?
“Curiosity is our compass” is one of Kickstarter’s guiding principles. Powered by a desire for knowledge and information, curiosity is the enemy of many larger, older and more structured organizations — whether they admit it or not — because it hinders the production flow. Curiosity makes you pause and take time to explore and validate the “ask.” Asking as many what’s, how’s, why’s, who’s and how many’s as possible is important to help you learn if the work is worth your time.
Those of us who read a lot of tech and business publications have heard for years about the cybersecurity skills gap. Studies often claim that millions of jobs are going unfilled because there aren’t enough qualified candidates available for hire.
I don’t buy it.
The basic laws of supply and demand mean there will always be people in the workforce willing to move into well-paid security jobs. The problem is not that these folks don’t exist. It’s that CIOs or CISOs typically look right past them if their resumes don’t have a very specific list of qualifications.
In many cases, hiring managers expect applicants to be fully trained on all the technologies their organization currently uses. That not only makes it harder to find qualified candidates, but it also reduces the diversity of experience within security teams — which, ultimately, may weaken the company’s security capabilities and its talent pool.
At Netskope, we take a different approach to staffing for security roles. We know we can teach the cybersecurity skills needed to do the job, so instead, there are two traits we consider more important than specific technical expertise: One is a hunger to learn more about security, which suggests the individual will take the initiative to continuously improve their skills. The other is possession of a skill set that no one else on our security team has.
To understand why I believe our approach has helped us build a stronger security team, think about the long-term benefits of hiring someone with a specific security skill set: How valuable will that exact knowledge be in several years? Probably not very.
Even the most basic security technologies are incredibly dynamic. In most companies, the IT infrastructure is currently in the midst of a massive transition from on-premises to cloud-based systems. Security teams are having to learn new technologies. More than that, they are having to adopt an entirely new mindset, shifting from a focus on protecting specific pieces of hardware to a focus on protecting individuals and applications as their workloads increasingly move outside the corporate network.
China’s plan to introduce its digital currency is getting a lot of help from its tech conglomerates. JD.com, a major Chinese online retailer that competes with Alibaba, said Monday that it has been paying some staff since January in digital yuan, the virtual version of the country’s physical currency.
China has been busy experimenting with digital currency over the past few months. In October, Shenzhen, a southern city known for its progressive economic policies, doled out 10 million yuan worth of digital currency to 50,000 residents, who could then use the money to shop at certain online and offline retailers.
Several other large Chinese cities have followed Shenzhen’s lead. Residents in these regions must apply through selected banks to start receiving and paying with digital yuan.
The electronic yuan initiative is a collective effort involving China’s regulators, commercial banks and technology solution providers. At first glance, the scheme mimics how physical yuan circulates at the moment: under the direction of the central bank, the six major commercial banks in China, including ICBC, distribute the digital yuan to smaller banks and a web of tech solution providers, which could help bring more use cases to the new electronic money.
For example, JD.com partnered with the Industrial and Commercial Bank of China (ICBC) to deposit the digital income. The online retailer has become one of the first organizations in China to pay wages in electronic yuan; in August, some government workers in the eastern city of Suzhou also began getting paid in the digital money.
Across the board, China’s major tech companies have actively participated in the buildout of the digital yuan ecosystem, which will help the central government better track money flows.
Aside from JD.com, video streaming platform Bilibili, on-demand services provider Meituan and ride-hailing app Didi have also begun accepting digital yuan for user purchases. Gaming and social networking giant Tencent became one of the “digital yuan operators” and will take part in the design, R&D and operational work of the electronic money. Jack Ma’s Ant Group, which is undergoing a major overhaul following a stalled IPO, has also joined hands with the central bank to build out the infrastructure for moving money digitally. And Huawei, the telecom equipment titan, debuted a wallet on one of its smartphone models that allows users to spend digital yuan instantly even when the device is offline.
Updated the article to clarify the timeline of the digital salary rollout.
Click Studios, the Australian software house that develops the enterprise password manager Passwordstate, has warned customers to reset passwords across their organizations after a cyberattack on the password manager.
An email sent by Click Studios to customers said the company had confirmed that attackers had “compromised” the password manager’s software update feature in order to steal customer passwords.
The email, posted on Twitter by Polish news site Niebezpiecznik early on Friday, said the malicious update exposed Passwordstate customers over a 28-hour window between April 20-22. Once installed, the update contacts the attacker’s servers and retrieves malware that steals the password manager’s contents and sends them back to the attackers. The email also told customers to “commence resetting all passwords contained within Passwordstate.”
The PasswordState password manager has been hacked and customers’ computers infected.
The vendor is notifying victims by email.
This password manager is “enterprise” software, so the problem will mostly affect companies… Ouch!
(Information from the Mysterious Pedro) pic.twitter.com/PGHhmEKpje
— Niebezpiecznik (@niebezpiecznik) April 23, 2021
Click Studios did not say how the attackers compromised the password manager’s update feature, but emailed customers with a security fix.
The company also said the attacker’s servers were taken down on April 22. But Passwordstate users could still be at risk if the attackers are able to get their infrastructure back online.
Enterprise password managers let employees at companies share passwords and other sensitive secrets across their organization, such as credentials for network devices (including firewalls and VPNs), shared email accounts, internal databases and social media accounts. Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.
Although affected customers were notified this morning, news of the breach only became widely known several hours later after Danish cybersecurity firm CSIS Group published a blog post with details of the attack.
Click Studios chief executive Mark Sandford did not respond to a request for comment outside Australian business hours.
Electronic health records (EHR) have long held promise as a means of unlocking new superpowers for caregivers and patients in the medical industry, but while they’ve been around for a long time, actually accessing and using them has been slower to become a reality. That’s where Medchart comes in, providing access to health information between businesses, complete with informed patient consent, for using said data at scale. The startup just raised $17 million across Series A and seed rounds, led by Crosslink Capital and Golden Ventures, with participation from Stanford Law School, rapper Nas and others.
Medchart originally started out as more of a DTC play for healthcare data, providing access and portability to digital health information directly to patients. It sprang from the personal experience of co-founders James Bateman and Derrick Chow, who both faced personal challenges accessing and transferring health record information for relatives and loved ones during crucial healthcare crisis moments. Bateman, Medchart’s CEO, explained that their experience early on revealed that what was actually needed for the model to scale and work effectively was more of a B2B approach, with informed patient consent as the crucial component.
“We’re really focused on that patient consent and authorization component of letting you allow your data to be used and shared for various purposes,” Bateman said in an interview. “And then building that platform that lets you take that data and then put it to use for those businesses and services, that we’re classifying as ‘beyond care.’ Whether those are our core areas, which would be with your lawyer, or with an insurance provider, or clinical researcher — or beyond that, looking at a future vision of this really being a platform to power innovation, and all sorts of different apps and services that you could imagine that are typically outside that realm of direct care and treatment.”
Bateman explained that one of the main challenges in making patient health data actually work for these businesses that surround, but aren’t necessarily a core part of a care paradigm, is delivering data in a way that it’s actually useful to the receiving party. Traditionally, this has required a lot of painstaking manual work, like paralegals poring over paper documents to find information that isn’t necessarily consistently formatted or located.
“One of the things that we’ve been really focused on is understanding those business processes,” Bateman said. “That way, when we work with these businesses that are using this data — all permissioned by the patient — that we’re delivering what we call ‘the information,’ and not just the data. So what are the business decision points that you’re trying to make with this data?”
To accomplish this, Medchart makes use of AI and machine learning to create a deeper understanding of the data set so it can intelligently answer the specific questions that data requesters have of the information. Therein lies its long-term value: once that understanding is established, Medchart can query the data much more easily to answer different questions depending on different business needs, without needing to re-parse the data every single time.
“Where we’re building these systems of intelligence on top of aggregate data, they are fully transferable to making decisions around policies for, for example, life insurance underwriting, or with pharmaceutical companies on real-world evidence for their phase three, phase four clinical trials, and helping those teams to understand, you know, the overall indicators and the preexisting conditions and what the outcomes are of the drugs under development or whatever they’re measuring in their study,” Bateman said.
According to Ameet Shah, a partner at Golden Ventures, which co-led the Series A, this is the key ingredient in Medchart’s offering that makes it so attractive in terms of long-term potential.
“What you want is both depth and breadth, and you need predictability — you need to know that you’re actually getting the full data set back,” Shah said in an interview. “There’s all these point solutions, depending on the type of clinic you’re looking at, and the type of record you’re accessing, and that’s not helpful to the requester. Right now, you’re putting the burden on them, and when we looked at it, we were just like, ‘Oh, this is just a whole bunch of undifferentiated heavy lifting that the entire health tech ecosystem is trying to solve for.’ So if [Medchart] can just commoditize that and drive the cost down as low as possible, you can unlock all these other new use cases that never could have been done before.”
One recent development that positions Medchart to facilitate even more novel uses of patient data is the 21st Century Cures Act, which went into effect on April 5 and provides patients with immediate access, without charge, to all the health information in their electronic medical records. That sets up a huge potential opportunity for the portability, with informed consent, of patient data, and Bateman suggests it will greatly speed up innovation built upon the type of information access Medchart enables.
“I think there’s just going to be an absolute explosion in this space over the next two to three years,” Bateman said. “And at Medchart, we’ve already built all the infrastructure with connections to these large information systems. We’re already plugged in and providing the data and the value to the end users and the customers, and I think now you’re going to see this acceleration and adoption and growth in this area that we’re super well-positioned to be able to deliver on.”
When we last heard from BigID at the end of 2020, the company was announcing a $70 million Series D at a $1 billion valuation. Today, it announced a $30 million extension on that deal, valuing the company at $1.25 billion just four months later.
This chunk of money comes from private equity firm Advent International and brings the total raised to over $200 million across four rounds, according to the company. The late-stage startup is attracting all of this capital by building a security and privacy platform. When I spoke to CEO Dimitri Sirota in September 2019, at the time of the $50 million Series C, he described the company’s direction this way:
“We’ve separated the product into some constituent parts. While it’s still sold as a broad-based [privacy and security] solution, it’s much more of a platform now in the sense that there’s a core set of capabilities that we heard over and over that customers want.”
Sirota says he has been putting the money to work, and as the economy improves he is seeing more traction for the product set. “Since December, we’ve added employees as we’ve seen broader economic recovery and increased demand. In tandem, we have been busy building a whole host of new products and offerings that we will announce over the coming weeks that will be transformational for BigID,” he said.
He also said that as with previous rounds, he didn’t go looking for the additional money, but decided to take advantage of the new funds at a higher valuation with a firm that he believes can add value overall. What’s more, the funds should allow the company to expand in ways it might have held off on.
“It was important to us that this wouldn’t be a distraction and that we could balance any funding without the need to over-capitalize, which is becoming a bigger issue in today’s environment. In the end, we took what we thought could bring forward some additional product modules and add a sales team focused on smaller commercial accounts,” Sirota said.
Ashwin Krishnan, a principal on Advent’s technology team in New York, says that BigID was clearly aligned with two trends his firm has been following: the explosion of data being collected, and the increasing focus on managing and securing that data with the goal of ultimately using it to make better decisions.
“When we met with Dimitri and the BigID team, we immediately knew we had found a company with a powerful platform that solves the most challenging problem at the center of these trends and the data question,” Krishnan said.
Past investors in the company include Boldstart Ventures, Bessemer Venture Partners and Tiger Global. Strategic investors include Comcast Ventures, Salesforce Ventures and SAP.io.
By 2025, 463 exabytes of data will be created each day, according to some estimates. (For perspective, one exabyte of storage could hold 50,000 years of DVD-quality video.) It’s now easier than ever to translate physical and digital actions into data, and businesses of all types have raced to amass as much data as possible in order to gain a competitive edge.
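That equivalence roughly checks out with back-of-the-envelope arithmetic, assuming a single-layer DVD holds about 4.7 GB for a two-hour film (an illustrative assumption, not a figure from the estimates cited above):

```python
# Rough check of the claim that one exabyte holds ~50,000 years of
# DVD-quality video, assuming ~4.7 GB per two-hour film.
EXABYTE_GB = 1e9          # 1 EB = 10^9 GB
GB_PER_HOUR = 4.7 / 2     # ~2.35 GB per hour of DVD-quality video
HOURS_PER_YEAR = 24 * 365

hours = EXABYTE_GB / GB_PER_HOUR
years = hours / HOURS_PER_YEAR
print(f"{years:,.0f} years")  # on the order of 50,000 years
```

Under those assumptions the figure lands near 49,000 years, consistent with the estimate quoted above.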
However, in our collective infatuation with data (and obtaining more of it), what’s often overlooked is the role that storytelling plays in extracting real value from data.
The reality is that data by itself is insufficient to really influence human behavior. Whether the goal is to improve a business’ bottom line or convince people to stay home amid a pandemic, it’s the narrative that compels action, rather than the numbers alone. As more data is collected and analyzed, communication and storytelling will become even more integral in the data science discipline because of their role in separating the signal from the noise.
Yet this can be an area where data scientists struggle. In Anaconda’s 2020 State of Data Science survey of more than 2,300 data scientists, nearly a quarter of respondents said that their data science or machine learning (ML) teams lacked communication skills. This may be one reason why roughly 40% of respondents said they were able to effectively demonstrate business impact “only sometimes” or “almost never.”
The best data practitioners must be as skilled in storytelling as they are in coding and deploying models — and yes, this extends beyond creating visualizations to accompany reports. Here are some recommendations for how data scientists can situate their results within larger contextual narratives.
Ever-growing datasets help machine learning models better understand the scope of a problem space, but more data does not necessarily help with human comprehension. Even for the most left-brain of thinkers, it’s not in our nature to understand large abstract numbers or things like marginal improvements in accuracy. This is why it’s important to include points of reference in your storytelling that make data tangible.
For example, throughout the pandemic, we’ve been bombarded with countless statistics around case counts, death rates, positivity rates, and more. While all of this data is important, tools like interactive maps and conversations around reproduction numbers are more effective than massive data dumps in terms of providing context, conveying risk, and, consequently, helping change behaviors as needed. In working with numbers, data practitioners have a responsibility to provide the necessary structure so that the data can be understood by the intended audience.
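As a small illustration of adding a point of reference, a raw case count becomes far more tangible once it is normalized to a population. The helper below and its numbers are invented for illustration:

```python
# A hypothetical helper: express a raw count per 100,000 people so
# that regions of very different sizes become directly comparable.
def per_100k(count, population):
    return count / population * 100_000

# e.g. 1,200 cases in a city of 800,000 people
rate = per_100k(1200, 800_000)
print(f"{rate:.0f} cases per 100k")  # 150 cases per 100k
```

The same absolute count reads very differently in a town of 50,000 than in a metropolis of 8 million; the normalization supplies the structure the audience needs.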
If the definition of insanity is doing the same thing over and over and expecting a different outcome, then one might say the cybersecurity industry is insane.
Criminals continue to innovate with highly sophisticated attack methods, but many security organizations still use the same technological approaches they did 10 years ago. The world has changed, but cybersecurity hasn’t kept pace.
Distributed systems, with people and data everywhere, mean the perimeter has disappeared. And the hackers couldn’t be more excited. The same technology approaches, like correlation rules, manual processes, and reviewing alerts in isolation, do little more than remedy symptoms while hardly addressing the underlying problem.
Credentials are supposed to be the front gates of the castle, but because the SOC is failing to change, it is failing to detect. The cybersecurity industry must rethink its strategy: analyze how credentials are used and stop breaches before they become bigger problems.
Compromised credentials have long been a primary attack vector, but the problem has only grown worse in the mid-pandemic world. The acceleration of remote work has expanded the attack surface as organizations struggle to secure their networks while employees work from unsecured connections. In April 2020, the FBI said that cybersecurity attacks reported to the agency grew by 400% compared to before the pandemic. Just imagine where that number is now in early 2021.
It only takes one compromised account for an attacker to get into Active Directory and create their own credentials. In such an environment, every user account should be considered potentially compromised.
Nearly all of the hundreds of breach reports I’ve read have involved compromised credentials. More than 80% of hacking breaches are now enabled by brute force or the use of lost or stolen credentials, according to the 2020 Data Breach Investigations Report. The most effective and commonly used tactic is the credential stuffing attack, in which digital adversaries try lists of stolen usernames and passwords against login pages until one works, then exploit the environment and move laterally to gain higher-level access.
Google’s historical collection of location data has got it into hot water in Australia where a case brought by the country’s Competition and Consumer Commission (ACCC) has led to a federal court ruling that the tech giant misled consumers by operating a confusing dual-layer of location settings in what the regulator describes as a “world-first enforcement action”.
The case relates to personal location data collected by Google through Android mobile devices between January 2017 and December 2018.
Per the ACCC, the court ruled that “when consumers created a new Google Account during the initial set-up process of their Android device, Google misrepresented that the ‘Location History’ setting was the only Google Account setting that affected whether Google collected, kept or used personally identifiable data about their location”.
“In fact, another Google Account setting titled ‘Web & App Activity’ also enabled Google to collect, store and use personally identifiable location data when it was turned on, and that setting was turned on by default,” it wrote.
The Court also ruled that Google misled consumers who later accessed the ‘Location History’ setting on their Android device during the same time period to turn that setting off, because it did not inform them that, by leaving the ‘Web & App Activity’ setting switched on, Google would continue to collect, store and use their personally identifiable location data.
“Similarly, between 9 March 2017 and 29 November 2018, when consumers later accessed the ‘Web & App Activity’ setting on their Android device, they were misled because Google did not inform them that the setting was relevant to the collection of personal location data,” the ACCC added.
Similar complaints about Google’s location data processing being deceptive — and allegations that it uses manipulative tactics in order to keep tracking web users’ locations for ad-targeting purposes — have been raised by consumer agencies in Europe for years. And in February 2020 the company’s lead data regulator in the region finally opened an investigation. However that probe remains ongoing.
The ACCC, for its part, said today that it will seek “declarations, pecuniary penalties, publications orders, and compliance orders” following the federal court ruling, though it added that the specifics of its enforcement action will be determined “at a later date”. So it’s not clear exactly when Google will be hit with an order, nor how large a fine it might face.
The tech giant may also seek to appeal the court ruling.
Google said today it’s reviewing its legal options and considering a “possible appeal” — highlighting the fact the Court did not agree wholesale with the ACCC’s case because it dismissed some of the allegations (related to certain statements Google made about the methods by which consumers could prevent it from collecting and using their location data, and the purposes for which personal location data was being used by Google).
Here’s Google’s statement in full:
“The court rejected many of the ACCC’s broad claims. We disagree with the remaining findings and are currently reviewing our options, including a possible appeal. We provide robust controls for location data and are always looking to do more — for example we recently introduced auto delete options for Location History, making it even easier to control your data.”
Mountain View denies doing anything wrong in how it configures location settings, even as it claims it’s always looking to improve the controls it offers users. Nonetheless, Google’s settings and defaults have got it into hot water with regulators before.
Back in 2019 France’s data watchdog, the CNIL, fined it $57M over a number of transparency and consent failures under the EU’s General Data Protection Regulation. That remains the largest GDPR penalty issued to a tech giant since the regulation came into force a little under three years ago — although France has more recently sanctioned Google $120M under different EU laws for dropping tracking cookies without consent.
Australia, meanwhile, has forged ahead with passing legislation this year that directly targets the market power of Google (and Facebook) — passing a mandatory news media bargaining code in February which aims to address the power imbalance between platform giants and publishers around the reuse of journalism content.
Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.
Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).
Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.
The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
Information leaked via the breach includes Facebook IDs, locations, mobile phone numbers, email addresses, relationship statuses and employers.
Facebook has been contacted for comment on the litigation.
The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.
A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true of Facebook.
With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.
(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However, that complaint long predates the GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year, after the litigant filed his own judicial review of the DPC’s processes.)
Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.
Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.
That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered, and claimed to fix by September 2019, which has now led to the leak of 533M+ accounts, suggests it should face a higher sanction from the DPC than Twitter received.
However even if Facebook ends up with a more substantial GDPR penalty for this breach the watchdog’s caseload backlog and plodding procedural pace makes it hard to envisage a swift resolution to an investigation that’s only now a few days old.
Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.
“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.
It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.
It’s clear that the GDPR enforcement gap is creating a growing opportunity in Europe for litigation funders to step in and take a punt on suing for data-related compensation — with a number of other mass actions announced last year.
In DRI’s case, its focus is evidently on ensuring that digital rights are upheld. But it told RTE that it believes compensation claims that force tech giants to pay users whose privacy rights have been violated are the best way to make them legally compliant.
Facebook, meanwhile, has sought to play down the breach it failed to disclose — claiming it’s ‘old data’ — a deflection that ignores the fact that dates of birth don’t change (nor do most people routinely change their mobile number or email address).
Plenty of the ‘old’ data exposed in this latest massive Facebook data leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.
Senator Ron Wyden (D-OR) has proposed a draft bill that would limit the types of information that could be bought and sold by tech companies abroad, and the countries it could be legally sold in. The legislation is imaginative and not highly specific, but it indicates growing concern at the federal level over the international data trade.
“Shady data brokers shouldn’t get rich selling Americans’ private data to foreign countries that could use it to threaten our national security,” said Sen. Wyden in a statement accompanying the bill. They probably shouldn’t get rich selling Americans’ private data at all, but national security is a good way to grease the wheels.
The Protecting Americans’ Data From Foreign Surveillance Act would be a first step toward categorizing and protecting consumer data as a commodity that’s traded on the global market. Right now there are few if any controls over what data specific to a person — buying habits, movements, political party — can be sold abroad.
This means that, for instance, an American data broker could sell the preferred brands and home addresses of millions of Americans to, say, a Chinese bank doing investment research. Some of this trade is perfectly innocuous, even desirable in order to promote global commerce, but at what point does it become dangerous or exploitative?
There isn’t any official definition of what should and shouldn’t be sold to whom, the way we limit sales of certain intellectual property or weapons. The proposed law would first direct the secretary of Commerce to identify the data that should be protected and against whom it should be protected.
The general shape of protected data would be that which “if exported by third parties, could harm U.S. national security.” The countries that would be barred from receiving it would be those with inadequate data protection and export controls, recent intelligence operations against the U.S. or laws that allow the government to compel such information to be handed over to them. Obviously this is aimed at the likes of China and Russia, though ironically the U.S. fits the bill pretty well itself.
There would be exceptions for journalism and First Amendment-protected speech, and for encrypted data — for example, storing encrypted messages on servers in one of the targeted countries. The law would also create penalties for executives “who knew or should have known” that their company was illegally exporting data, and would create pathways to redress for people harmed or detained in a foreign country owing to illegally exported data. That might apply if, say, another country used an American facial recognition service to spot, stop and arrest someone before they left.
If this all sounds a little woolly, it is — but that’s more or less on purpose. It is not for Congress to invent such definitions as are necessary for a law like this one; that duty falls to expert agencies, which must conduct studies and produce reports that Congress can refer to. This law represents the first handful of steps along those lines: getting the general shape of things straight and giving fair warning that certain classes of undesirable data commerce will soon be illegal — with an emphasis on executive responsibility, something that should make tech companies take notice.
The legislation would need to be sensitive to existing arrangements by which companies spread out data storage and processing for various economic and legal reasons. Free movement of data is to a certain extent necessary for globe-spanning businesses that must interact with one another constantly, and to hobble those established processes with red tape or fees might be disastrous to certain locales or businesses. Presumably this would all come up during the studies, but it serves to demonstrate that this is a very complex, not to say delicate, digital ecosystem the law would attempt to modify.
We’re in the early stages of this type of regulation, and this bill is just getting started in the legislative process, so expect a few months at the very least before we hear anything more on this one.
Data is the most valuable asset for any business in 2021. If your business is online and collecting customer personal information, your business is dealing in data, which means data privacy compliance regulations will apply to everyone — no matter the company’s size.
Small startups might not think the world’s strictest data privacy laws — the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) — apply to them, but it’s important to enact best data management practices before a legal situation arises.
Data compliance is not only critical to a company’s daily functions; if done wrong or not done at all, it can be quite costly for companies of all sizes.
For example, failing to comply with the GDPR can result in fines of up to €20 million or 4% of global annual revenue, whichever is higher. Under the CCPA, fines can also escalate quickly, to the tune of $2,500 to $7,500 per person whose data is exposed during a data breach.
If the data of 1,000 customers is compromised in a cybersecurity incident, that could add up to $7.5 million at the maximum per-person rate. The company can also be sued in class action claims or suffer reputational damage, resulting in lost business.
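The arithmetic behind those exposure figures can be sketched in a few lines. This is a back-of-the-envelope illustration using only the statutory maximums cited above; the function names and example revenue figure are hypothetical, and real penalties are set case by case by regulators and courts.

```python
def gdpr_max_fine(annual_revenue_eur: float) -> float:
    """GDPR ceiling: the greater of EUR 20M or 4% of global annual revenue."""
    return max(20_000_000, 0.04 * annual_revenue_eur)

def ccpa_exposure(records_exposed: int, per_record_usd: float = 7_500) -> float:
    """CCPA statutory damages run $2,500-$7,500 per affected consumer;
    this uses the top of that range by default."""
    return records_exposed * per_record_usd

# A company with EUR 1B in revenue: 4% (EUR 40M) exceeds the EUR 20M floor.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
# 1,000 exposed records at the CCPA maximum rate.
print(ccpa_exposure(1_000))          # 7500000
```

The point the math makes is that GDPR exposure scales with revenue while CCPA exposure scales with the number of records breached, so even a small company holding a large customer database can face a large CCPA bill.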
It is also important to recognize some benefits of good data management. If a company takes a proactive approach to data privacy, it may mitigate the impact of a data breach, which the government can take into consideration when assessing legal fines. In addition, companies can benefit from business insights, reduced storage costs and increased employee productivity, which can all make a big impact on the company’s bottom line.
For example, Vodafone Spain was recently fined $9.72 million for GDPR data protection failures, and enforcement trackers show that schools, associations, municipalities, homeowners associations and more are also receiving fines.
GDPR regulators have issued $332.4 million in fines since the law came into force almost three years ago and are becoming more aggressive with enforcement. While California’s attorney general started CCPA enforcement on July 1, 2020, the newly passed California Privacy Rights Act (CPRA) only recently created a state agency to more effectively enforce compliance for any company storing information on residents of California, a major hub of U.S. startups.
That is why, in this age, data privacy compliance is key to a successful business. Unfortunately, many startups are at a disadvantage, for reasons including: