
Before suing NSO Group, Facebook allegedly sought their software to better spy on users

By Devin Coldewey

Facebook’s WhatsApp is in the midst of a lawsuit against Israeli mobile surveillance outfit NSO Group. But before complaining about the company’s methods, Facebook seems to have wanted to use them for its own purposes, according to testimony from NSO founder Shalev Hulio.

Last year brought news of an exploit that could be used to install one of NSO’s spyware packages, Pegasus, on devices using WhatsApp. The latter sued the former over it, saying that over a hundred human rights activists, journalists and others were targeted using the method.

Last year also saw Facebook finally shut down Onavo, the VPN app it purchased in 2013 and developed into a backdoor method of collecting all manner of data about its users — but not as much as they’d have liked, according to Hulio. In a document filed with the court yesterday he states that Facebook in 2017 asked NSO Group for help collecting data on iOS devices resistant to the usual tricks:

In October 2017, NSO was approached by two Facebook representatives who asked to purchase the right to use certain capabilities of Pegasus, the same NSO software discussed in Plaintiffs’ Complaint.

The Facebook representatives stated that Facebook was concerned that its method for gathering user data through Onavo Protect was less effective on Apple devices than on Android devices. The Facebook representatives also stated that Facebook wanted to use purported capabilities of Pegasus to monitor users on Apple devices and were willing to pay for the ability to monitor Onavo Protect users. Facebook proposed to pay NSO a monthly fee for each Onavo Protect user.

NSO declined, as it claims to only provide its software to governments for law enforcement purposes. But there is a certain irony to Facebook wanting to employ against its users the very software it would later decry being employed against its users. (WhatsApp maintains some independence from its parent company but these events come well after the purchase by and organizational integration into Facebook.)

A Facebook representative did not dispute that representatives from the company approached NSO Group at the time, but said the testimony was an attempt to “distract from the facts” and contained “inaccurate representations about both their spyware and a discussion with people who work at Facebook.” We can presumably expect a fuller rebuttal in the company’s own filings soon.

Facebook and WhatsApp are, quite correctly, concerned that effective, secret intrusion methods like those developed and sold by NSO Group are dangerous in the wrong hands — as demonstrated by the targeting of activists and journalists, and potentially even Jeff Bezos. But however reasonable Facebook’s concerns are, the company’s status as the world’s most notorious collector and peddler of private information makes its righteous stance hard to take seriously.


ZmURL customizes Zoom link previews with images & event sites

By Josh Constine

Sick of sharing those generic Zoom video call invites that all look the same? Wish your Zoom link preview’s headline and image actually described your meeting? Want to protect your Zoom calls from trolls by making attendees RSVP to get your link? ZmURL.com has you covered.

Launching today, ZmURL is a free tool that lets you customize your Zoom video call invite URL with a title, explanation, and image that will show up when you share the link on Twitter, Facebook, or elsewhere. ZmURL also lets you require that attendees RSVP by entering their email address, so you can decide whom to approve and provide with the actual entry link. That could stop Zoombombers from harassing your call with offensive screenshared imagery, profanity, or worse.

“We built zmurl.com to make it easier for people to stay physically distant but socially close,” co-founder Victor Pontis tells me. “We’re hoping to give event organizers the tools to preserve in-person communities while we are all under quarantine.”

Zoom wasn’t built for open public discussions. But with people trapped inside by coronavirus, its daily user count has spiked from 10 million to 200 million. That’s led to new use cases from cocktail parties to roundtable discussions to AA meetings to school classes.

That’s unfortunately spawned new problems like “Zoombombing”, a term I coined two weeks ago to describe malicious actors tracking down public Zoom calls and bombarding them with abuse. Since then, the FBI has issued a warning about Zoombombing, the New York Times has written multiple articles about the issue, and Zoom’s CEO Eric Yuan has apologized.

Yet Zoom has been slow to adapt its features as it struggles not to buckle under its sudden scale. While it’s turned on waiting rooms and host-only screensharing by default for usage in schools, most people are still vulnerable due to Zoom’s permissive settings and reused URLs, which were designed for trusted enterprise meetings only. Only today did Zoom concede to shifting the balance further from convenience to safety, turning on waiting rooms by default and requiring passwords for entry by Meeting ID.

Meanwhile, social networks have become a sea of indistinguishable Zoom links that all show the same blue and white logo in the preview with no information on what the call is about. That makes it a lot tougher to promote calls, which many musicians, fitness instructors, and event producers are relying on to drive donations or payments while their work is disrupted by quarantines.

ZmURL’s founders during their only in-person meeting ever

Luckily, Pontis and his co-founder Danqing Liu are here to help with ZmURL. The two software engineers fittingly met over Zoom a year ago and have only met once in person. Pontis, now in San Francisco, had started bike and scooter rental software companies Spring and Scooter Map. Liu, from Beijing but now holed up in New York, had spent five years at Google, Uber, and PlanGrid before selling his machine learning tool TinyMind.

The idea for ZmURL stemmed from Liu missing multiple Zoom events he’d wanted to attend. Then a friend of Pontis was laid off from their yoga instructor job, and they and their colleagues were scrambling to market and earn money from hosting their own classes over Zoom. The duo quickly built a beta with zero money raised and tested it with some yoga gurus who found it simplified promoting events and gathering RSVPs. “We’re all going through a tough time right now. We see zmurl as our opportunity to help,” Pontis tells me.

To use the tool, you generate a generic meeting link from Zoom like zoom.us/j/1231231232 and then punch it into ZmURL. You can upload an image or choose from stock photos and color gradients. Then you name your event, give it a description, and set the time and date. You’ll get a shorter URL like https://zmurl.com/smy5m, or you can give it a custom one like zmurl.com/quidditch.

When you share that URL, it’ll show your image, headline, and description in the link preview on chat apps, social networks and more. Attendees who click will be shown a nicely rendered event page with the link to enter the Zoom call and the option to add it to their calendar. You can try it out here, zmurl.com/aloha, as the startup is hosting a happy hour today at 6pm Pacific.
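ZmURL hasn’t published how it builds these pages, but link previews on Twitter, Facebook, and most chat apps are driven by Open Graph and Twitter Card meta tags served in the shared page’s head. Here is a minimal sketch of what an event page might emit — every name and value below is illustrative, not ZmURL’s actual markup:

```python
# Hypothetical sketch: social networks and chat apps build link previews
# from Open Graph / Twitter Card <meta> tags on the shared page.
# All values are illustrative, not ZmURL's actual markup.

def render_preview_head(title: str, description: str,
                        image_url: str, page_url: str) -> str:
    """Return the <meta> tags an event page needs for a custom preview.
    (Real code would HTML-escape these values.)"""
    return f"""
    <meta property="og:title" content="{title}" />
    <meta property="og:description" content="{description}" />
    <meta property="og:image" content="{image_url}" />
    <meta property="og:url" content="{page_url}" />
    <meta name="twitter:card" content="summary_large_image" />
    """

print(render_preview_head(
    title="Quidditch Happy Hour",
    description="BYOB video hangout, 6pm Pacific",
    image_url="https://example.com/cover.jpg",
    page_url="https://zmurl.com/quidditch",
))
```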

Optionally, you can set your ZmURL calls to require an RSVP. In that case, people who click your link have to submit their email address. The host can then sift through the RSVPs and choose who to email back the link to join the call. If you see an RSVP from someone you don’t recognize, just ignore it to keep Zoombombers from slipping inside.

Surprisingly, there don’t seem to be any other tools for customizing Zoom call links. Zoom’s paid enterprise customers can only set up an image- and logo-equipped landing page for their whole company’s Zoom account, not for specific calls. For now, ZmURL is completely free. But the co-founders are building out an option for hosting paid events that collect entry fees on the RSVP site while ZmURL takes a 5% cut.

Next, ZmURL wants to add the ability to link your Zoom account to its site so you can spawn call links without leaving. It’s also building out always-on call rooms, recurring events, organizer home pages for promoting all their calls, an option to add events to a public directory, email marketing tools, and integrations with other video call platforms like Hangouts, Skype, and FaceTime.

Pontis says the biggest challenge will be learning to translate more of the magic and business potential of offline events into the world of video calling. There’s also the risk that Zoom will try to intercede and force ZmURL to desist. But it shouldn’t, at least until Zoom builds all these features itself. Or it should just acquire ZmURL.

We’re dealing with an unprecedented behavior shift due to shelter-in-place orders that threaten to cripple the world economy and drive many of us crazy. Whether for fostering human connection or keeping event businesses afloat, Zoom has become a critical utility. It should accept all the help it can get.

Zoom will enable waiting rooms by default to stop Zoombombing

By Josh Constine

Zoom is making some drastic changes to prevent rampant abuse as trolls attack publicly-shared video calls. Starting April 5th, it will require passwords to enter calls via Meeting ID, since these may be guessed or reused. Meanwhile, it will change virtual waiting rooms to be on by default so hosts have to manually admit attendees.

The changes could prevent “Zoombombing”, a term I coined two weeks ago to describe malicious actors entering Zoom calls and disrupting them by screensharing offensive imagery. New Zoombombing tactics have since emerged, like spamming the chat thread with terrible GIFs, using virtual backgrounds to spread hateful messages, or just screaming profanities and slurs.


The FBI has issued a warning about the Zoombombing problem after children’s online classes, Alcoholics Anonymous meetings, and private business calls were invaded by trolls. Security researchers have revealed many ways that attackers can infiltrate a call.

The problems stem from Zoom being designed for trusted enterprise use cases rather than cocktail hours, yoga classes, roundtable discussions, and classes. But with Zoom struggling to scale its infrastructure as its daily user count has shot up from 10 million to 200 million over the past month due to coronavirus shelter-in-place orders, it’s found itself caught off guard.

Zoom CEO Eric Yuan apologized for the security failures this week and vowed changes. But at the time, the company merely said it would default to making screensharing host-only and keeping waiting rooms on for its K-12 education users. Clearly it determined that wasn’t sufficient, so now waiting rooms are on by default for everyone.

Zoom communicated the changes to users via an email sent this afternoon that explains “we’ve chosen to enable passwords on your meetings and turn on Waiting Rooms by default as additional security enhancements to protect your privacy.”

The company also explained that “For meetings scheduled moving forward, the meeting password can be found in the invitation. For instant meetings, the password will be displayed in the Zoom client. The password can also be found in the meeting join URL.” Some other precautions users can take include disabling file transfer, screensharing, or rejoining by removed attendees.


The shift could cause some hassle for users. Hosts will be distracted by having to approve attendees out of the waiting room while they’re trying to lead calls. Zoom recommends users resend invites with passwords attached for Meeting ID-based calls scheduled for after April 5th. Scrambling to find passwords could make people late to calls.

But that’s a reasonable price to pay to keep people from being scarred by Zoombombing attacks. The rash of trolling threatened to sour many people’s early experiences with the video chat platform just as it’s been having its breakout moment. A single call marred by disturbing pornography can leave a stronger impression than 100 peaceful ones with friends and colleagues.

Technologists will need to grow better at anticipating worst-case scenarios as their products go mainstream and are adapted to new use cases. Assuming everyone will have the best intentions ignores the reality of human nature. There’s always someone looking to generate a profit, score power, or cause chaos from even the smallest opportunity. Building development teams that include skeptics and realists, rather than just visionary idealists, could help ensure products are safeguarded from abuse before a scandal occurs rather than after.

Google rolls back SameSite cookie changes to keep essential online services from breaking

By Frederic Lardinois

Google today announced that it will temporarily roll back the changes it recently made to how its Chrome browser handles cookies in order to ensure that sites that perform essential services like banking, online grocery, government services and healthcare won’t become inaccessible to Chrome users during the current COVID-19 pandemic.

The new SameSite rules, which the company started rolling out to a growing number of Chrome users in recent months, are meant to make it harder for sites to access cookies from third-party sites and hence track a user’s online activity. These new rules are also meant to prevent cross-site request forgery attacks.

Under Google’s new guidance, developers have to explicitly allow their cookies to be read by third-party sites; otherwise, the browser will prevent those third-party sites from accessing them.
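In practice, Chrome’s change treats any cookie without a SameSite attribute as SameSite=Lax, withholding it from most cross-site requests; a cookie that must remain readable in third-party contexts now has to opt in with SameSite=None and be marked Secure. A sketch of the headers involved (cookie names and values are illustrative):

```python
# Sketch of the Set-Cookie headers affected by Chrome's SameSite change.
# Cookie names and values here are illustrative.

# Previously, a cookie with no SameSite attribute was sent cross-site:
legacy = "session=abc123; Path=/"

# Under the new default it behaves as if marked SameSite=Lax, so to stay
# readable from third-party contexts it must opt in explicitly -- and
# SameSite=None is only honored over HTTPS (hence Secure):
cross_site = "session=abc123; Path=/; SameSite=None; Secure"

# Cookies that never need cross-site access can state the default outright:
first_party = "session=abc123; Path=/; SameSite=Lax"

for header in (legacy, cross_site, first_party):
    print("Set-Cookie:", header)
```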

Since this is a pretty major change, Google gave developers quite a bit of time to adapt their applications to it. Still, not every site is ready yet and so the Chrome team decided to halt the gradual rollout and stop enforcing these new rules for the time being.

“While most of the web ecosystem was prepared for this change, we want to ensure stability for websites providing essential services including banking, online groceries, government services and healthcare that facilitate our daily life during this time,” writes Google Chrome engineering director Justin Schuh. “As we roll back enforcement, organizations, users and sites should see no disruption.”

A Google spokesperson also told us that the team saw some breakage in sites “that would not normally be considered essential, but with COVID-19 having become more important, we made this decision in an effort to ensure stability during this time.”

The company says it plans to resume its SameSite enforcement over the summer, though the exact timing isn’t yet clear.

Google is now publishing coronavirus mobility reports, feeding off users’ location history

By Natasha Lomas

Google is giving the world a clearer glimpse of exactly how much it knows about people everywhere — using the coronavirus crisis as an opportunity to repackage its persistent tracking of where users go and what they do as a public good in the midst of a pandemic.

In a blog post today the tech giant announced the publication of what it’s branding ‘COVID-19 Community Mobility Reports’ — aka an in-house analysis of the much more granular location data it maps and tracks to fuel its ad-targeting, product development and wider commercial strategy, repackaged to showcase aggregated changes in population movements around the world.

The coronavirus pandemic has generated a worldwide scramble for tools and data to inform government responses. In the EU, for example, the European Commission has been leaning on telcos to hand over anonymized and aggregated location data to model the spread of COVID-19.

Google’s data dump looks intended to dangle a similar idea of public policy utility while providing an eyeball-grabbing public snapshot of mobility shifts via data pulled off of its global user-base.

In terms of actual utility for policymakers Google’s suggestions are pretty vague. The reports could help government and public health officials “understand changes in essential trips that can shape recommendations on business hours or inform delivery service offerings”, it writes.

“Similarly, persistent visits to transportation hubs might indicate the need to add additional buses or trains in order to allow people who need to travel room to spread out for social distancing,” it goes on. “Ultimately, understanding not only whether people are traveling, but also trends in destinations, can help officials design guidance to protect public health and essential needs of communities.”

The location data Google is making public is similarly fuzzy — to avoid inviting a privacy storm — with the company writing it’s using “the same world-class anonymization technology that we use in our products every day”, as it puts it.

“For these reports, we use differential privacy, which adds artificial noise to our datasets enabling high quality results without identifying any individual person,” Google writes. “The insights are created with aggregated, anonymized sets of data from users who have turned on the Location History setting, which is off by default.”
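Google hasn’t published the parameters it uses here, but the mechanism it names can be sketched: differential privacy typically works by adding calibrated Laplace noise to each aggregated count, so that any single person’s presence or absence barely shifts the published figure. A minimal sketch follows, with every number an assumption for illustration:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float,
                  sensitivity: float = 1.0) -> float:
    """Release a visit count with epsilon-differential privacy.

    If each user contributes at most one visit to the count
    (sensitivity 1), adding Laplace(sensitivity / epsilon) noise bounds
    how much the output can reveal about any individual.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Illustrative only: 1,203 true visits to parks in some region, one day.
print(round(private_count(1203, epsilon=0.5)))
```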

“In Google Maps, we use aggregated, anonymized data showing how busy certain types of places are—helping identify when a local business tends to be the most crowded. We have heard from public health officials that this same type of aggregated, anonymized data could be helpful as they make critical decisions to combat COVID-19,” it adds, tacitly linking an existing offering in Google Maps to a coronavirus-busting cause.

The reports consist of per country, or per state, downloads (with 131 countries covered initially), further broken down into regions/counties — with Google offering an analysis of how community mobility has changed vs a baseline average before COVID-19 arrived to change everything.

So, for example, a March 29 report for the whole of the US shows a 47% drop in retail and recreation activity vs the pre-COVID period; a 22% drop in grocery & pharmacy; and a 19% drop in visits to parks and beaches. The report for the same date for California shows a considerably greater drop in the latter (down 38% compared to the regional baseline), and slightly bigger decreases in both retail and recreation activity (down 50%) and grocery & pharmacy (-24%).
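The headline figures are plain percentage changes against a pre-pandemic baseline; the published reports describe that baseline as the median visit count for the corresponding day of the week over a five-week window before the outbreak. A quick sketch of the arithmetic, with made-up numbers:

```python
from statistics import median

def mobility_change(day_visits: int, baseline_days: list[int]) -> float:
    """Percent change vs. baseline, as shown in the mobility reports.

    The reports describe the baseline as the median visit count for the
    corresponding day of the week over a five-week pre-pandemic window.
    """
    baseline = median(baseline_days)
    return 100.0 * (day_visits - baseline) / baseline

# Made-up numbers: five pre-pandemic Sundays vs. one lockdown Sunday.
print(f"{mobility_change(530, [1000, 980, 1020, 1010, 990]):+.0f}%")  # -47%
```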

Google says it’s using “aggregated, anonymized data to chart movement trends over time by geography, across different high-level categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential”. The trends are displayed over several weeks, with the most recent information representing 48-to-72 hours prior, it adds.

The company says it’s not publishing the “absolute number of visits” as a privacy step, adding: “To protect people’s privacy, no personally identifiable information, like an individual’s location, contacts or movement, is made available at any point.”

Google’s location mobility report for Italy, which remains the European country hardest hit by the virus, illustrates the extent of the change from lockdown measures applied to the population — with retail & recreation dropping 94% vs Google’s baseline; grocery & pharmacy down 85%; and a 90% drop in trips to parks and beaches.

The same report shows an 87% drop in activity at transit stations; a 63% drop in activity at workplaces; and an increase of almost a quarter (24%) of activity in residential locations — as many Italians stay at home, instead of commuting to work.

It’s a similar story in Spain — another country hard-hit by COVID-19. Though Google’s data for France suggests instructions to stay-at-home may not be being quite as keenly observed by its users there, with only an 18% increase in activity at residential locations and a 56% drop in activity at workplaces. Perhaps because the pandemic has so far had a less severe impact on France, although numbers of confirmed cases and deaths continue to rise across the region.

While policymakers have been scrambling for data and tools to inform their responses to COVID-19, privacy experts and civil liberties campaigners have rushed to voice concerns about the impacts of such data-fuelled efforts on individual rights, while also querying the wider utility of some of this tracking.

And yes, the disclaimer is very broad. I'd say, this is largely a PR move.

Apart from this, Google must be held accountable for its many other secondary data uses. And Google/Alphabet is far too powerful, which must be addressed at several levels, soon. https://t.co/oksJgQAPAY

— Wolfie Christl (@WolfieChristl) April 3, 2020

Contacts tracing is another area where apps are fast being touted as a potential solution to get the West out of economically crushing population lockdowns — opening up the possibility of people’s mobile devices becoming a tool to enforce lockdowns, as has happened in China.

“Large-scale collection of personal data can quickly lead to mass surveillance,” is the succinct warning of a trio of academics from London’s Imperial College’s Computational Privacy Group, who have compiled their privacy concerns vis-a-vis COVID-19 contacts tracing apps into a set of eight questions app developers should be asking.

Discussing Google’s release of mobile location data for a COVID-19 cause, the head of the group, Yves-Alexandre de Montjoye, gave a general thumbs up to the steps it’s taken to shrink privacy risks.

Although he also called for Google to provide more detail about the technical processes it’s using in order that external researchers can better assess the robustness of the claimed privacy protections. Such scrutiny is of pressing importance with so much coronavirus-related data grabbing going on right now, he argues.

“It is all aggregated, they normalize to a specific set of dates, they threshold when there are too few people and on top of this they add noise to make — according to them — the data differentially private. So from a pure anonymization perspective it’s good work,” de Montjoye told TechCrunch, discussing the technical side of Google’s release of location data. “Those are three of the big ‘levers’ that you can use to limit risk. And I think it’s well done.”

“But — especially in times like this when there’s a lot of people using data — I think what we would have liked is more details. There’s a lot of assumptions on thresholding, on how do you apply differential privacy, right?… What kind of assumptions are you making?” he added, querying how much noise Google is adding to the data, for example. “It would be good to have a bit more detail on how they applied [differential privacy]… Especially in times like this it is good to be… overly transparent.”

While Google’s mobility data release might appear to overlap in purpose with the Commission’s call for EU telco metadata for COVID-19 tracking, de Montjoye points out there are likely to be key differences based on the different data sources.

“It’s always a trade off between the two,” he says. “It’s basically telco data would probably be less fine-grained, because GPS is much more precise spatially and you might have more data points per person per day with GPS than what you get with mobile phone but on the other hand the carrier/telco data is much more representative — it’s not only smartphone, and it’s not only people who have latitude on, it’s everyone in the country, including non smartphone.”

There may be country specific questions that could be better addressed by working with a local carrier, he also suggested. (The Commission has said it’s intending to have one carrier per EU Member State providing anonymized and aggregated metadata.)

On the topical question of whether location data can ever be truly anonymized, de Montjoye — an expert in data reidentification — gave a “yes and no” response, arguing that original location data is “probably really, really hard to anonymize”.

“Can you process this data and make the aggregate results anonymous? Probably, probably, probably yes — it always depends. But then it also means that the original data exists… Then it’s mostly a question of the controls you have in place to ensure the process that leads to generating those aggregates does not contain privacy risks,” he added.

Perhaps a bigger question related to Google’s location data dump is around the issue of legal consent to be tracking people in the first place.

While the tech giant claims the data is based on opt-ins to location tracking, the company was fined $57M by France’s data watchdog last year for a lack of transparency over how it uses people’s data.

Then, earlier this year, the Irish Data Protection Commission (DPC) — now the lead privacy regulator for Google in Europe — confirmed a formal probe of the company’s location tracking activity, following a 2018 complaint by EU consumer groups that accused Google of using manipulative tactics in order to keep tracking web users’ locations for ad-targeting purposes.

“The issues raised within the concerns relate to the legality of Google’s processing of location data and the transparency surrounding that processing,” said the DPC in a statement in February, announcing the investigation.

The legal questions hanging over Google’s consent to track likely explains the repeat references in its blog post to people choosing to opt in and having the ability to clear their Location History via settings. (“Users who have Location History turned on can choose to turn the setting off at any time from their Google Account, and can always delete Location History data directly from their Timeline,” it writes in one example.)

In addition to offering up coronavirus mobility porn reports — which Google specifies it will continue to do throughout the crisis — the company says it’s collaborating with “select epidemiologists working on COVID-19 with updates to an existing aggregate, anonymized dataset that can be used to better understand and forecast the pandemic”.

“Data of this type has helped researchers look into predicting epidemics, plan urban and transit infrastructure, and understand people’s mobility and responses to conflict and natural disasters,” it adds.


Using AI responsibly to fight the coronavirus pandemic

By Walter Thompson
Mark Minevich Contributor
Mark Minevich is president of Going Global Ventures, an advisor at Boston Consulting Group, a digital fellow at IPsoft, and a leading global AI expert, digital cognitive strategist and venture capitalist.
Irakli Beridze Contributor
Irakli Beridze is head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI).

The emergence of the novel coronavirus has left the world in turmoil. COVID-19, the disease caused by the virus, has reached virtually every corner of the world, with the number of cases exceeding half a million and the number of deaths nearing 25,000 worldwide. It is a situation that will affect us all in one way or another.

With the imposition of lockdowns, limitations of movement, the closure of borders and other measures to contain the virus, the operating environment of law enforcement agencies and those security services tasked with protecting the public from harm has suddenly become ever more complex. They find themselves thrust into the middle of an unparalleled situation, playing a critical role in halting the spread of the virus and preserving public safety and social order in the process. In response to this growing crisis, many of these agencies and entities are turning to AI and related technologies for support in unique and innovative ways. Enhancing surveillance, monitoring and detection capabilities is high on the priority list.

For instance, early in the outbreak, Reuters reported a case in China wherein the authorities relied on facial recognition cameras to track a man from Hangzhou who had traveled in an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions. Police in China and Spain have also started to use technology to enforce quarantine, with drones being used to patrol and broadcast audio messages to the public, encouraging them to stay at home. People flying to Hong Kong airport receive monitoring bracelets that alert the authorities if they breach the quarantine by leaving their home.

In the United States, a surveillance company announced that its AI-enhanced thermal cameras can detect fevers, while in Thailand, border officers at airports are already piloting a biometric screening system using fever-detecting cameras.

Isolated cases or the new norm?

With the number of cases, deaths and countries on lockdown increasing at an alarming rate, we can assume that these will not be isolated examples of technological innovation in response to this global crisis. In the coming days, weeks and months of this outbreak, we will most likely see more and more AI use cases come to the fore.

While the application of AI can play an important role in seizing the reins in this crisis, and even safeguard officers and officials from infection, we must not forget that its use can raise very real and serious human rights concerns that can be damaging and undermine the trust placed in government by communities. Human rights, civil liberties and the fundamental principles of law may be exposed or damaged if we do not tread this path with great caution. There may be no turning back if Pandora’s box is opened.

In a public statement on March 19, the monitors for freedom of expression and freedom of the media for the United Nations, the Inter-American Commission for Human Rights and the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe issued a joint statement on promoting and protecting access to and free flow of information during the pandemic, and specifically took note of the growing use of surveillance technology to track the spread of the coronavirus. They acknowledged that there is a need for active efforts to confront the pandemic, but stressed that “it is also crucial that such tools be limited in use, both in terms of purpose and time, and that individual rights to privacy, non-discrimination, the protection of journalistic sources and other freedoms be rigorously protected.”

This is not an easy task, but a necessary one. So what can we do?

Ways to responsibly use AI to fight the coronavirus pandemic

  1. Data anonymization: While some countries are tracking individual suspected patients and their contacts, Austria, Belgium, Italy and the U.K. are collecting anonymized data to study the movement of people in a more general manner. This option still provides governments with the ability to track the movement of large groups, but minimizes the risk of infringing data privacy rights.
  2. Purpose limitation: Personal data that is collected and processed to track the spread of the coronavirus should not be reused for another purpose. National authorities should seek to ensure that the large amounts of personal and medical data are exclusively used for public health reasons. This is a concept already in force in Europe, within the context of the European Union’s General Data Protection Regulation (GDPR), but it’s time for this to become a global principle for AI.
  3. Knowledge-sharing and open access data: António Guterres, the United Nations Secretary-General, has insisted that “global action and solidarity are crucial,” and that we will not win this fight alone. This is applicable on many levels, even for the use of AI by law enforcement and security services in the fight against COVID-19. These agencies and entities must collaborate with one another and with other key stakeholders in the community, including the public and civil society organizations. AI use cases and data should be shared and transparency promoted.
  4. Time limitation: Although the end of this pandemic seems rather far away at this point in time, it will come to an end. When it does, national authorities will need to scale back their newly acquired monitoring capabilities. As Yuval Noah Harari observed in his recent article, “temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” We must ensure that these exceptional capabilities are indeed scaled back and do not become the new norm.

Within the United Nations system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) is working to advance approaches to AI such as these. It has established a specialized Centre for AI and Robotics in The Hague and is one of the few international actors dedicated to specifically looking at AI vis-à-vis crime prevention and control, criminal justice, rule of law and security. It assists national authorities, in particular law enforcement agencies, to understand the opportunities presented by these technologies and, at the same time, to navigate the potential pitfalls associated with these technologies.

Working closely with the International Criminal Police Organization (INTERPOL), UNICRI has set up a global platform for law enforcement, fostering discussion on AI, identifying practical use cases and defining principles for responsible use. Much work has been done through this forum, but it is still early days, and the path ahead is long.

While the COVID-19 pandemic has illustrated several innovative use cases, as well as the urgency for governments to do their utmost to stop the spread of the virus, it is important not to let consideration of fundamental principles, rights and respect for the rule of law be set aside. The positive power and potential of AI is real. It can help those embroiled in fighting this battle to slow the spread of this debilitating disease. It can help save lives. But we must stay vigilant and commit to the safe, ethical and responsible use of AI.

It is essential that, even in times of great crisis, we remain conscious of the duality of AI and strive to advance AI for good.

Collibra nabs another $112.5M at a $2.3B valuation for its big data management platform

By Ingrid Lunden

GDPR and other data protection and privacy regulations — as well as a significant (and growing) number of data breaches and exposés of companies’ privacy policies — have put a spotlight not just on the vast troves of data that businesses and other organizations hold on us, but also on how they handle it. Today, one of the companies helping them cope with that data trove in a better and legal way is announcing a huge round of funding to continue that work. Collibra, which provides tools to manage, warehouse, store and analyze data troves, is today announcing that it has raised $112.5 million in funding, at a post-money valuation of $2.3 billion.

The funding — a Series F from the looks of it — represents a big bump for the startup, which last year raised $100 million at a valuation of just over $1 billion. This latest round was co-led by ICONIQ Capital, Index Ventures, and Durable Capital Partners LP, with previous investors CapitalG (Google’s growth fund), Battery Ventures, and Dawn Capital also participating.

Collibra, originally a spin-out from Vrije Universiteit in Brussels, Belgium, today works with some 450 enterprises and other large organizations — customers include Adobe, Verizon (which owns TechCrunch), insurer AXA, and a number of healthcare providers. Its products cover a range of services focused on company data, including tools to help customers comply with local data protection policies, store it securely, and run analytics and more.

These are all tools that have long had a place in enterprise big data IT, but they have become increasingly used and in demand both as data policies have expanded and as the prospects of what can be discovered through big data analytics have become more advanced. With that growth, many companies have realized that they are not in a position to use and store their data in the best possible way, and that is where companies like Collibra step in.

“Most large organizations are in data chaos,” Felix Van de Maele, co-founder and CEO, previously told us. “We help them understand what data they have, where they store it and [understand] whether they are allowed to use it.”

As you would expect with a big IT trend, Collibra is not the only company chasing this opportunity. Competitors include Informatica, IBM, Talend and Egnyte, among a number of others, but the market position of Collibra, and its advanced technology, is what has continued to impress investors.

“Durable Capital Partners invests in innovative companies that have significant potential to shape growing industries and build larger companies,” said Henry Ellenbogen, founder and chief investment officer for Durable Capital Partners LP, in a statement (Ellenbogen is formerly an investment manager at T. Rowe Price, and this is his first investment in Collibra under Durable). “We believe Collibra is a leader in the Data Intelligence category, a space that could have a tremendous impact on global business operations and a space that we expect will continue to grow as data becomes an increasingly critical asset.”

“We have a high degree of conviction in Collibra and the importance of the company’s mission to help organizations benefit from their data,” added Matt Jacobson, general partner at ICONIQ Capital and Collibra board member, in his own statement. “There is an increasing urgency for enterprises to harness their data for strategic business decisions. Collibra empowers organizations to use their data to make critical business decisions, especially in uncertain business environments.”

An EU coalition of techies is backing a ‘privacy-preserving’ standard for COVID-19 contacts tracing

By Natasha Lomas

A European coalition of techies and scientists drawn from at least eight countries, and led by Germany’s Fraunhofer Heinrich Hertz Institute for telecoms (HHI), is working on contacts-tracing proximity technology for COVID-19 that’s designed to comply with the region’s strict privacy rules — officially unveiling the effort today.

China-style individual-level location-tracking of people by states via their smartphones even for a public health purpose is hard to imagine in Europe — which has a long history of legal protection for individual privacy. However the coronavirus pandemic is applying pressure to the region’s data protection model, as governments turn to data and mobile technologies to seek help with tracking the spread of the virus, supporting their public health response and mitigating wider social and economic impacts.

Scores of apps are popping up across Europe aimed at attacking coronavirus from different angles. European privacy not-for-profit, noyb, is keeping an updated list of approaches, both led by governments and private sector projects, to use personal data to combat SARS-CoV-2 — with examples so far including contacts tracing, lockdown or quarantine enforcement and COVID-19 self-assessment.

The efficacy of such apps is unclear — but the demand for tech and data to fuel such efforts is coming from all over the place.

In the UK the government has been quick to call in tech giants, including Google, Microsoft and Palantir, to help the National Health Service determine where resources need to be sent during the pandemic. Meanwhile, the European Commission has been leaning on regional telcos to hand over user location data to carry out coronavirus tracking — albeit in aggregated and anonymized form.

The newly unveiled Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project is a response to the coronavirus pandemic generating a huge spike in demand for citizens’ data. It’s intended to offer not just another app, but what’s described as “a fully privacy-preserving approach” to COVID-19 contacts tracing.

The core idea is to leverage smartphone technology to help disrupt the next wave of infections by notifying individuals who have come into close contact with an infected person — via the proxy of their smartphones having been near enough to carry out a Bluetooth handshake. So far so standard. But the coalition behind the effort wants to steer developments in such a way that the EU response to COVID-19 doesn’t drift towards China-style state surveillance of citizens.

While, for the moment, strict quarantine measures remain in place across much of Europe there may be less imperative for governments to rip up the best practice rulebook to intrude on citizens’ privacy, given the majority of people are locked down at home. But the looming question is what happens when restrictions on daily life are lifted?

Contacts tracing — as a way to offer a chance for interventions that can break any new infection chains — is being touted as a key component of preventing a second wave of coronavirus infections by some, with examples such as Singapore’s TraceTogether app being eyed up by regional lawmakers.

Singapore does appear to have had some success in keeping a second wave of infections from turning into a major outbreak, via an aggressive testing and contacts-tracing regime. But what works for a small island city-state with a population of less than 6M doesn’t necessarily translate to a trading bloc of 27 different nations whose collective population exceeds 500M.

Europe isn’t going to have a single coronavirus tracing app. It’s already got a patchwork. Hence the people behind PEPP-PT offering a set of “standards, technology, and services” to countries and developers to plug into to get a standardized COVID-19 contacts-tracing approach up and running across the bloc.

The other very European flavored piece here is privacy — and privacy law. “Enforcement of data protection, anonymization, GDPR [the EU’s General Data Protection Regulation] compliance, and security” are baked in, is the top-line claim.

“PEPP-PT was explicitly created to adhere to strong European privacy and data protection laws and principles,” the group writes in an online manifesto. “The idea is to make the technology available to as many countries, managers of infectious disease responses, and developers as quickly and as easily as possible.

“The technical mechanisms and standards provided by PEPP-PT fully protect privacy and leverage the possibilities and features of digital technology to maximize speed and real-time capability of any national pandemic response.”

Hans-Christian Boos, one of the project’s co-initiators — and the founder of an AI company called Arago — discussed the initiative with German newspaper Der Spiegel, telling it: “We collect no location data, no movement profiles, no contact information and no identifiable features of the end devices.”

The newspaper reports PEPP-PT’s approach means apps aligning to this standard would generate only temporary IDs — to avoid individuals being identified. Two or more smartphones running an app that uses the tech and has Bluetooth enabled when they come into proximity would exchange their respective IDs — saving them locally on the device in an encrypted form, according to the report.

Der Spiegel writes that should a user of the app subsequently be diagnosed with coronavirus their doctor would be able to ask them to transfer the contact list to a central server. The doctor would then be able to use the system to warn affected IDs they have had contact with a person who has since been diagnosed with the virus — meaning those at risk individuals could be proactively tested and/or self-isolate.

On its website PEPP-PT explains the approach thus:

Mode 1
If a user is not tested or has tested negative, the anonymous proximity history remains encrypted on the user’s phone and cannot be viewed or transmitted by anybody. At any point in time, only the proximity history that could be relevant for virus transmission is saved, and earlier history is continuously deleted.

Mode 2
If the user of phone A has been confirmed to be SARS-CoV-2 positive, the health authorities will contact user A and provide a TAN code to the user that ensures potential malware cannot inject incorrect infection information into the PEPP-PT system. The user uses this TAN code to voluntarily provide information to the national trust service that permits the notification of PEPP-PT apps recorded in the proximity history and hence potentially infected. Since this history contains anonymous identifiers, neither person can be aware of the other’s identity.
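PEPP-PT hadn’t released its reference implementation at the time of writing, but the flow described above — rotating anonymous IDs exchanged over Bluetooth, stored locally, and uploaded only after a TAN-authorized diagnosis — can be sketched roughly as follows. Every detail here (ID size, rotation interval, retention window, the TAN check) is an assumption for illustration, not the project’s actual protocol:

```python
import os
import time

def verify_tan(tan_code: str) -> bool:
    """Hypothetical stand-in for the health authority's TAN validation."""
    return len(tan_code) == 6 and tan_code.isdigit()

class ProximityLog:
    """Client-side sketch of the contact-tracing flow described above."""

    ROTATION_SECONDS = 30 * 60          # mint a fresh temporary ID regularly
    RETENTION_SECONDS = 21 * 24 * 3600  # drop history too old to matter

    def __init__(self) -> None:
        self.current_id = os.urandom(16)  # temporary, unlinkable identifier
        self.minted_at = time.time()
        self.encounters: list[tuple[float, bytes]] = []

    def broadcast_id(self) -> bytes:
        """The ID advertised over Bluetooth, rotated to frustrate tracking."""
        if time.time() - self.minted_at > self.ROTATION_SECONDS:
            self.current_id = os.urandom(16)
            self.minted_at = time.time()
        return self.current_id

    def record_encounter(self, other_id: bytes) -> None:
        """Save an ID seen nearby; prune anything past the retention window."""
        now = time.time()
        self.encounters.append((now, other_id))
        cutoff = now - self.RETENTION_SECONDS
        self.encounters = [(t, i) for (t, i) in self.encounters if t >= cutoff]

    def upload_if_diagnosed(self, tan_code: str) -> list[bytes]:
        """The proximity history leaves the phone only with a valid TAN,
        letting the trust service notify the IDs that were seen nearby."""
        if not verify_tan(tan_code):
            raise PermissionError("TAN rejected")
        return [other_id for (_, other_id) in self.encounters]
```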

Providing further detail of what it envisages as “Country-dependent trust service operation”, it writes: “The anonymous IDs contain encrypted mechanisms to identify the country of each app that uses PEPP-PT. Using that information, anonymous IDs are handled in a country-specific manner.”

On healthcare processing, it suggests: “A process for how to inform and manage exposed contacts can be defined on a country by country basis.”

Among the other features of PEPP-PT’s mechanisms the group lists in its manifesto are:

  • Backend architecture and technology that can be deployed into local IT infrastructure and can handle hundreds of millions of devices and users per country instantly.
  • Managing the partner network of national initiatives and providing APIs for integration of PEPP-PT features and functionalities into national health processes (test, communication, …) and national system processes (health logistics, economy logistics, …) giving many local initiatives a local backbone architecture that enforces GDPR and ensures scalability.
  • Certification Service to test and approve local implementations to be using the PEPP-PT mechanisms as advertised and thus inheriting the privacy and security testing and approval PEPP-PT mechanisms offer.

Having a standardized approach that could be plugged into a variety of apps would allow for contacts tracing to work across borders — i.e. even if different apps are popular in different EU countries — an important consideration for the bloc, which has 27 Member States.

However there may be questions about the robustness of the privacy protection designed into the approach — if, for example, pseudonymized data is centralized on a server that doctors can access there could be a risk of it leaking and being re-identified. And identification of individual device holders would be legally risky.

Europe’s lead data regulator, the EDPS, recently made a point of tweeting to warn an MEP (and former EC digital commissioner) against the legality of applying Singapore-style Bluetooth-powered contacts tracing in the EU — writing: “Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.”


A spokesman for the EDPS told us it’s in contact with data protection agencies of the Member States involved in the PEPP-PT project to collect “relevant information”.

“The general principles presented by EDPB on 20 March, and by EDPS on 24 March are still relevant in that context,” the spokesman added — referring to guidance issued by the privacy regulators last month in which they encouraged anonymization and aggregation should Member States want to use mobile location data for monitoring, containing or mitigating the spread of COVID-19. At least in the first instance.

“When it is not possible to only process anonymous data, the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security (Art. 15),” the EDPB further noted.

“If measures allowing for the processing of non-anonymised location data are introduced, a Member State is obliged to put in place adequate safeguards, such as providing individuals of electronic communication services the right to a judicial remedy.”

We reached out to the HHI with questions about the PEPP-PT project and were referred to Boos — but at the time of writing had been unable to speak to him.

“The PEPP-PT system is being created by a multi-national European team,” the HHI writes in a press release about the effort. “It is an anonymous and privacy-preserving digital contact tracing approach, which is in full compliance with GDPR and can also be used when traveling between countries through an anonymous multi-country exchange mechanism. No personal data, no location, no Mac-Id of any user is stored or transmitted. PEPP-PT is designed to be incorporated in national corona mobile phone apps as a contact tracing functionality and allows for the integration into the processes of national health services. The solution is offered to be shared openly with any country, given the commitment to achieve interoperability so that the anonymous multi-country exchange mechanism remains functional.”

“PEPP-PT’s international team consists of more than 130 members working across more than seven European countries and includes scientists, technologists, and experts from well-known research institutions and companies,” it adds.

“The result of the team’s work will be owned by a non-profit organization so that the technology and standards are available to all. Our priorities are the well being of world citizens today and the development of tools to limit the impact of future pandemics — all while conforming to European norms and standards.”

PEPP-PT says its technology-focused efforts are being financed through donations. Per its website, it has adopted the WHO standards for such financing — to “avoid any external influence”.

Of course for the effort to be useful it relies on EU citizens voluntarily downloading one of the aligned contacts tracing apps — and carrying their smartphone everywhere they go, with Bluetooth enabled.

Without substantial penetration among the region’s smartphone users it’s questionable how much of an impact this initiative, or any contacts tracing technology, could have. Although if such tech were able to break even some infection chains, people might argue it’s not wasted effort.

Notably, there are signs Europeans are willing to contribute to a public healthcare cause by doing their bit digitally — such as a self-reporting COVID-19 tracking app which last week racked up 750,000 downloads in the UK in 24 hours.

But, at the same time, contacts tracing apps are facing scepticism over their ability to contribute to the fight against COVID-19. Not everyone carries a smartphone, nor knows how to download an app, for instance. There are plenty of people who would fall outside such a digital net.

Meanwhile, while there’s clearly been a big scramble across the region, at both government and grassroots level, to mobilize digital technology for a public health emergency cause there’s arguably greater imperative to direct effort and resources at scaling up coronavirus testing programs — an area where most European countries continue to lag.

Germany — where some of the key backers of the PEPP-PT are from — being the most notable exception.

What does a pandemic say about the tech we’ve built?

By Natasha Lomas

There’s a joke being reshared on chat apps that takes the form of a multiple-choice question — asking who’s the leading force in workplace digital transformation? The red-lined punchline is not the CEO or CTO, but: C) COVID-19.

There’s likely more than a grain of truth underpinning the quip. The novel coronavirus is pushing a lot of metaphorical buttons right now. “Pause” buttons for people and industries, as large swathes of the world’s population face quarantine conditions that can resemble house arrest. The majority of offline social and economic activities are suddenly off limits.

Such major pauses in our modern lifestyle may even turn into a full reset, over time. The world as it was, where mobility of people has been all but taken for granted — regardless of the environmental costs of so much commuting and indulged wanderlust — may never return to “business as usual.”

If global leadership rises to the occasion, then the coronavirus crisis offers an opportunity to rethink how we structure our societies and economies — to make a shift toward lower carbon alternatives. After all, how many physical meetings do you really need when digital connectivity is accessible and reliable? As millions more office workers log onto the day job from home, that number suddenly seems vanishingly small.

COVID-19 is clearly strengthening the case for broadband to be a utility — as so much more activity is pushed online. Even social media seems to have a genuine community purpose during a moment of national crisis, when many people can only connect remotely, even with their nearest neighbours.

Hence the reports of people stuck at home flocking back to Facebook to sound off in the digital town square. Now that the actual high street is off limits, the vintage social network is experiencing a late second wind.

Facebook understands this sort of higher societal purpose already, of course. Which is why it’s been so proactive about building features that nudge users to “mark yourself safe” during extraordinary events like natural disasters, major accidents and terrorist attacks. (Or indeed, why it encouraged politicians to get into bed with its data platform in the first place — no matter the cost to democracy.)

In less fraught times, Facebook’s “purpose” can be loosely summed up as “killing time.” But with ever more sinkholes being drilled by the attention economy, that’s a function under ferocious and sustained attack.

Over the years the tech giant has responded by engineering ways to rise back to the top of the social heap — including spying on and buying up competition, or directly cloning rival products. It’s been pulling off this trick, by hook or by crook, for over a decade. Albeit, this time Facebook can’t take any credit for the traffic uptick; a pandemic is nature’s dark pattern design.

What’s most interesting about this virally disrupted moment is how much of the digital technology that’s been built out online over the past two decades could very well have been designed for living through just such a dystopia.

Seen through this lens, VR should be having a major moment. A face computer that swaps out the stuff your eyes can actually see with a choose-your-own-digital-adventure of virtual worlds to explore, all from the comfort of your living room? What problem are you fixing, VR? Well, the conceptual limits of human lockdown in the face of a pandemic quarantine right now, actually…

Virtual reality has never been a compelling proposition versus the rich and textured opportunity of real life, except within very narrow and niche bounds. Yet all of a sudden, here we all are — with our horizons drastically narrowed and real-life news that’s ceaselessly harrowing. So it might yet end up a wry punchline to another multiple choice joke: “My next vacation will be: A) Staycation, B) The spare room, C) VR escapism.”

It’s videoconferencing that’s actually having the big moment, though. Turns out even a pandemic can’t make VR go viral. Instead, long-lapsed friendships are being rekindled over Zoom group chats or Google Hangouts. And Houseparty — a video chat app — has seen surging downloads as barflies seek out alternative night life with their usual watering-holes shuttered.

Bored celebs are TikToking. Impromptu concerts are being live-streamed from living rooms via Instagram and Facebook Live. All sorts of folks are managing social distancing, and the stress of being stuck at home alone (or with family), by distant socializing: signing up to remote book clubs and discos; joining virtual dance parties and exercise sessions from bedrooms; taking a few classes together; the quiet pub night with friends has morphed seamlessly into a bring-your-own-bottle group video chat.

This is not normal — but nor is it surprising. We’re living in the most extraordinary time. And it seems a very human response to mass disruption and physical separation (not to mention the trauma of an ongoing public health emergency that’s killing thousands of people a day) to reach for even a moving pixel of human comfort. Contactless human contact is better than none at all.

Yet the fact all these tools are already out there, ready and waiting for us to log on and start streaming, should send a dehumanizing chill down society’s backbone.

It underlines quite how much consumer technology is being designed to reprogram how we connect with each other, individually and in groups, in order that uninvited third parties can cut a profit.

Back in the pre-COVID-19 era, a key concern being attached to social media was its ability to hook users and encourage passive feed consumption — replacing genuine human contact with voyeuristic screening of friends’ lives. Studies have linked the tech to loneliness and depression. Now that we’re literally unable to go out and meet friends, the loss of human contact is real and stark. So being popular online in a pandemic really isn’t any kind of success metric.

Houseparty, for example, self-describes as a “face to face social network” — yet it’s quite the literal opposite; you’re forgoing face-to-face contact if you’re getting virtually together in app-wrapped form.

The implication of Facebook’s COVID-19 traffic bump is that the company’s business model thrives on societal disruption and mainstream misery. Which, frankly, we knew already. Data-driven adtech is another way of saying it’s been engineered to spray you with ad-flavored dissatisfaction by spying on what you get up to. The coronavirus just hammers the point home.

The fact we have so many high-tech tools on tap for forging digital connections might feel like amazing serendipity in this crisis — a freemium bonanza for coping with terrible global trauma. But such bounty points to a horrible flip side: It’s the attention economy that’s infectious and insidious. Before “normal life” plunged off a cliff, all this sticky tech was labelled “everyday use;” not “break out in a global emergency.”

It’s never been clearer how these attention-hogging apps and services are designed to disrupt and monetize us; to embed themselves in our friendships and relationships in a way that’s subtly dehumanizing; re-routing emotion and connections; nudging us to swap in-person socializing for virtualized fuzz designed to be data-mined and monetized by the same middlemen who’ve inserted themselves unasked into our private and social lives.

Captured and recompiled in this way, human connection is reduced to a series of dilute and/or meaningless transactions; the platforms deploying armies of engineers to knob-twiddle and pull strings to maximize ad opportunities, no matter the personal cost.

It’s also no accident we’re seeing more of the vast and intrusive underpinnings of surveillance capitalism emerge, as the COVID-19 emergency rolls back some of the obfuscation that’s used to shield these business models from mainstream view in more normal times. The trackers are rushing to seize and colonize an opportunistic purpose.

Tech and ad giants are falling over themselves to get involved with offering data or apps for COVID-19 tracking. They’re already in the mass surveillance business, so there has likely never been a better moment than the present pandemic for the big data lobby to press the lie that individuals don’t care about privacy, as governments cry out for tools and resources to help save lives.

First the people-tracking platforms dressed up attacks on human agency as “relevant ads.” Now the data industrial complex is spinning police-state levels of mass surveillance as pandemic-busting corporate social responsibility. How quick the wheel turns.

But platforms should be careful what they wish for. Populations that find themselves under house arrest with their phones playing snitch might be just as quick to round on high-tech gaolers as they’ve been to sign up for a friendly video chat in these strange and unprecedented times.

Oh, and Zoom (and others) — more people might actually read your “privacy policy” now they’ve got so much time to mess about online. And that really is a risk.

Every day there's a fresh Zoom privacy/security horror story. Why now, all at once?

It's simple: the problems aren't new but suddenly everyone is forced to use Zoom. That means more people discovering problems and also more frustration because opting out isn't an option. https://t.co/O9h8SHerok

— Arvind Narayanan (@random_walker) March 31, 2020


Maybe we shouldn’t use Zoom after all

By Zack Whittaker

Now that we’re all stuck at home thanks to the coronavirus pandemic, video calls have gone from a novelty to a necessity. Zoom, the popular videoconferencing service, seems to be doing better than most and has quickly become one of the most popular options going, if not the most popular.

But should it be?

Zoom’s recent popularity has also shone a spotlight on the company’s security protections and privacy promises. Just today, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.

And Motherboard reports that Zoom is leaking the email addresses of “at least a few thousand” people because personal addresses are treated as if they belong to the same company.

They are the latest examples of the company having to spend the past year mopping up after a barrage of headlines examining its practices and misleading marketing. To wit:

  • Apple was forced to step in to secure millions of Macs after a security researcher found Zoom failed to disclose that it installed a secret web server on users’ Macs, which Zoom failed to remove when the client was uninstalled. The researcher, Jonathan Leitschuh, said the web server meant any malicious website could activate the webcam on a Mac with Zoom installed, without the user’s permission. Leitschuh declined a bug bounty payout because Zoom wanted him to sign a non-disclosure agreement, which would have prevented him from disclosing details of the bug.
  • Zoom was quietly sending data to Facebook about users’ Zoom habits — even when the user did not have a Facebook account. Motherboard reported that the iOS app was notifying Facebook when users opened the app, and passing along details such as the device model, the phone carrier and more. Zoom removed the code in response, but not fast enough to prevent a class action lawsuit or New York’s attorney general from launching an investigation.
  • Zoom came under fire again for its “attendee tracking” feature, which, when enabled, lets a host check if participants are clicking away from the main Zoom window during a call.
  • A security researcher found that Zoom uses a “shady” technique to install its Mac app without user interaction. “The same tricks that are being used by macOS malware,” the researcher said.
  • On the bright side and to some users’ relief, we reported that it is in fact possible to join a Zoom video call without having to download or use the app. But Zoom’s “dark patterns” don’t make it easy to start a video call using just your browser.
  • Zoom has faced questions over its lack of transparency on law enforcement requests it receives. Access Now, a privacy and rights group, called on Zoom to release the number of requests it receives, just as Amazon, Google, Microsoft and many more tech giants report on a semi-annual basis.
  • Then there’s Zoombombing, where trolls take advantage of open or unprotected meetings and poor default settings to take over screen-sharing and broadcast porn or other explicit material. The FBI this week warned users to adjust their settings to avoid trolls hijacking video calls.
  • And Zoom tightened its privacy policy this week after it was criticized for allowing Zoom to collect information about users’ meetings — like videos, transcripts and shared notes — for advertising.

There are several privacy-focused alternatives to Zoom, but they all have their pitfalls. FaceTime and WhatsApp are end-to-end encrypted, but FaceTime works only on Apple devices and WhatsApp is limited to just four video callers at a time. A lesser-known video calling platform, Jitsi, is not end-to-end encrypted but it’s open source — so you can look at the code to make sure there are no backdoors — and it works across all devices and browsers. You can run Jitsi on a server you control for greater privacy.

In fairness, Zoom is not inherently bad and there are many reasons why Zoom is so popular. It’s easy to use, reliable and for the vast majority it’s incredibly convenient.

But Zoom’s misleading claims give users a false sense of security and privacy. Whether it’s hosting a virtual happy hour or a yoga class, or using Zoom for therapy or government cabinet meetings, everyone deserves privacy.

Now more than ever Zoom has a responsibility to its users. For now, Zoom at your own risk.

No proof of a Houseparty breach, but its privacy policy is still gatecrashing your data

By Ingrid Lunden

Houseparty has been a smashing success with people staying home during the coronavirus pandemic who still want to connect with friends.

The group video chat app’s games and other bells and whistles raise it above the more mundane Zooms and Hangouts (fun only in their names, otherwise pretty serious tools used by companies, schools and others who just need to work) when it comes to creating engaged leisure time, amid a climate where all of them are seeing a huge surge in growth.

All that looked like it could possibly fall apart for Houseparty and its new owner Epic Games when a series of reports appeared Monday claiming Houseparty was breached, and that malicious hackers were using users’ data to access their accounts on other apps such as Spotify and Netflix.

Houseparty was swift to deny the reports and even went so far as to claim — without evidence — that it was investigating indications that the “breach” was a “paid commercial smear to harm Houseparty,” offering a $1 million reward to whoever could prove its theory.

For now, there is no proof that there was a breach, nor proof that there was a paid smear campaign, and when we reached out to ask Houseparty and Epic about this investigation, a spokesperson said: “We don’t have anything to add here at the moment.”

But that doesn’t mean that Houseparty doesn’t have privacy issues.

As the old saying goes, “if the product is free, you are the product.” In the case of the free app Houseparty, the publishers detail a 12,000+ word privacy policy that covers any and all uses of data that it might collect by way of you logging on to or using its service, laying out the many ways that it might use data for promotional or commercial purposes.

There are some clear lines in the policy about what it won’t use. For example, while phone numbers might get shared for tech support, with partnerships that you opt into, to link up contacts to talk with and to authenticate you, “we will never share your phone number or the phone numbers of third parties in your contacts with anyone else.”

But beyond that, there are provisions in there that could see Houseparty selling anonymized and other data, leading Ray Walsh of research firm ProPrivacy to describe it as a “privacy nightmare.”

“Anybody who decides to use the Houseparty application to stay in contact during quarantine needs to be aware that the app collects a worrying amount of personal information,” he said. “This includes geolocation data, which could, in theory, be used to map the location of each user. A closer look at Houseparty’s privacy policy reveals that the firm promises to anonymize and aggregate data before it is shared with the third-party affiliates and partners it works with. However, time and time again, researchers have proven that previously anonymized data can be re-identified.”

There are ways around this for the proactive. Walsh notes that users can go into the settings to select “private mode” to “lock” rooms they use to stop people from joining unannounced or uninvited; switch locations off; use fake names and birthdates; disconnect all other social apps; and launch the app on iOS with a long press to “sneak into the house” without notifying all your contacts.

But with a consumer app, it’s a longshot to assume that most people, and the younger users who are especially interested in Houseparty, will go through all of these extra steps to secure their information.

Telco metadata grab is for modelling COVID-19 spread, not tracking citizens, says EC

By Natasha Lomas

As part of its response to the public health emergency triggered by the COVID-19 pandemic, the European Commission has been leaning on Europe’s telcos to share aggregate location data on their users.

“The Commission kick-started a discussion with mobile phone operators about the provision of aggregated and anonymised mobile phone location data,” it said today.

“The idea is to analyse mobility patterns including the impact of confinement measures on the intensity of contacts, and hence the risks of contamination. This would be an important — and proportionate — input for tools that are modelling the spread of the virus, and would also allow to assess the current measures adopted to contain the pandemic.”

“We want to work with one operator per Member State to have a representative sample,” it added. “Having one operator per Member State also means the aggregated and anonymised data could not be used to track individual citizens, that is also not at all the intention. Simply because not all have the same operator.

“The data will only be kept as long as the crisis is ongoing. We will of course ensure the respect of the ePrivacy Directive and the GDPR.”

Earlier this week Politico reported that commissioner Thierry Breton held a conference with carriers, including Deutsche Telekom and Orange, asking for them to share data to help predict the spread of the novel coronavirus.

Europe has become a secondary hub for the disease, with high rates of infection in countries including Italy and Spain — where there have been thousands of deaths apiece.

The European Union’s executive is understandably keen to bolster national efforts to combat the virus. Although it’s less clear exactly how aggregated mobile location data can help — especially as more EU citizens are confined to their homes under national quarantine orders. (While police patrols and CCTV offer an existing means of confirming whether or not people are generally moving around.)

Nonetheless, EU telcos have already been sharing aggregate data with national governments.

Such as Orange in France which is sharing “aggregated and anonymized” mobile phone geolocation data with Inserm, a local health-focused research institute — to enable them to “better anticipate and better manage the spread of the epidemic”, as a spokeswoman put it.

“The idea is simply to identify where the populations are concentrated and how they move before and after the confinement in order to be able to verify that the emergency services and the health system are as well armed as possible, where necessary,” she added. “For instance, at the time of confinement, more than 1 million people left the Paris region and at the same time the population of Ile de Ré increased by 30%.

“Other uses of this data are possible and we are currently in discussions with the State on all of these points. But, it must be clear, we are extremely vigilant with regards to concerns and respect for privacy. Moreover, we are in contact with the CNIL [France’s data protection watchdog]… to verify that all of these points are addressed.”

Germany’s Deutsche Telekom is also providing what a spokesperson dubbed “anonymized swarm data” to national health authorities to combat the coronavirus.

“European mobile operators are also to make such anonymized mass data available to the EU Commission at its request,” the spokesperson told us. “In fact, we will first provide the EU Commission with a description of data we have sent to German health authorities.”

It’s not entirely clear whether the Commission’s intention is to pool data from such existing local efforts — or whether it’s asking EU carriers for a different, universal data-set to be shared with it during the COVID-19 emergency.

When we asked about this it did not provide an answer. Although we understand discussions are ongoing with operators — and that it’s the Commission’s aim to work with one operator per Member State.

The Commission has said the metadata will be used for modelling the spread of the virus and for looking at mobility patterns to analyze and assess the impact of quarantine measures.

A spokesman emphasized that individual-level tracking of EU citizens is not on the cards.

“The Commission is in discussions with mobile operators’ associations about the provision of aggregated and anonymised mobile phone location data,” the spokesman for Breton told us.

“These data permit to analyse mobility patterns including the impact of confinement measures on the intensity of contacts and hence the risks of contamination. They are therefore an important and proportionate tool to feed modelling tools for the spread of the virus and also assess the current measures adopted to contain the Coronavirus pandemic are effective.”

“These data do not enable tracking of individual users,” he added. “The Commission is in close contact with the European Data Protection Supervisor (EDPS) to ensure the respect of the ePrivacy Directive and the GDPR.”

At this point there’s no set date for the system to be up and running — although we understand the aim is to get data flowing asap. The intention is also to use datasets that go back to the start of the epidemic, with data-sharing ongoing until the pandemic is over — at which point we’re told the data will be deleted.

Breton hasn’t had to lean very hard on EU telcos to share data for a crisis cause.

Earlier this week Mats Granryd, director general of operator association the GSMA, tweeted that its members are “committed to working with the European Commission, national authorities and international groups to use data in the fight against COVID-19 crisis”.

Although he added an important qualifier: “while complying with European privacy standards”.

The @GSMA and our members are committed to working with the @EU_Commission, national authorities and international groups to use data in the fight against COVID-19 crisis, while complying with European privacy standards. https://t.co/f1hBYT5Lqx

— Mats Granryd (@MatsGranryd) March 24, 2020

Europe’s data protection framework means there are limits on how people’s personal data can be used — even during a public health emergency. And while the legal frameworks do quite rightly bake in flexibility for a pressing public purpose, like the COVID-19 pandemic, that does not mean individuals’ privacy rights automatically go out the window.

Individual tracking of mobile users for contact tracing — such as Israel’s government is doing — is unimaginable at the pan-EU level, certainly unless the regional situation deteriorates drastically.

One privacy lawyer we spoke to last week suggested such a level of tracking and monitoring across Europe would be akin to a “last resort”. Though individual EU countries are choosing to respond differently to the crisis — such as, for example, Poland giving quarantined people a choice between regular police check-ups or uploading geotagged selfies to prove they’re not breaking lockdown.

Meanwhile the former EU member, the UK, has reportedly chosen to invite US surveillance-as-a-service tech firm Palantir to carry out resource tracking for its National Health Service during the coronavirus crisis.

Under pan-EU law (which the UK remains subject to, until the end of the Brexit transition period), the rule of thumb is that extraordinary data-sharing — such as the Commission asking telcos to share user location data during a pandemic — must be “temporary, necessary and proportionate”, as digital rights group Privacy International recently noted.

This explains why Breton’s request is for “anonymous and aggregated” location data. And why, in background comments to reporters, the claim is that any shared data sets will be deleted at the end of the pandemic.

Not every EU lawmaker appears entirely aware of all the legal limits, however.

Today the bloc’s lead privacy regulator, data protection supervisor (EDPS) Wojciech Wiewiórowski, could be seen tweeting cautionary advice at one former commissioner, Andrus Ansip (now an MEP) — after the latter publicly eyed up a Bluetooth-powered contact tracing app deployed in Singapore.

“Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder,” wrote Wiewiórowski.

So it remains to be seen whether pressure will mount for more privacy-intrusive surveillance of EU citizens if regional rates of infection continue to grow.

Dear Mr. Commissioner, please be cautious comparing Singapoore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.

— Wojtek Wiewiorowski (@W_Wiewiorowski) March 27, 2020

As we reported earlier this week, governments or EU institutions seeking to make use of mobile phone data to help with the response to the coronavirus must comply with the EU’s ePrivacy Directive — which covers the processing of mobile location data.

The ePrivacy Directive allows for Member States to restrict the scope of the rights and obligations related to location metadata privacy, and retain such data for a limited time — when such restriction constitutes “a necessary, appropriate and proportionate measure within a democratic society to safeguard national security (i.e. State security), defence, public security, and the prevention, investigation, detection and prosecution of criminal offences or of unauthorised use of the electronic communication system” — and a pandemic seems a clear example of a public security issue.

Thing is, the ePrivacy Directive is an old framework. The previous college of commissioners had intended to replace it alongside an update to the EU’s broader personal data protection framework — the General Data Protection Regulation (GDPR) — but failed to reach agreement.

This means there’s some potential mismatch. For example the ePrivacy Directive does not include the same level of transparency requirements as the GDPR.

Perhaps understandably, then, since news of the Commission’s call for carrier metadata emerged concerns have been raised about the scope and limits of the data sharing. Earlier this week, for example, MEP Sophie in’t Veld wrote to Breton asking for more information on the data grab — including querying exactly how the data will be anonymized.

Fighting the #coronavirus with technology: sure! But always with protection of our privacy. Read my letter to @ThierryBreton 👇 about @EU_Commission’s plans to call on telecoms to hand over data from people’s mobile phones in order to track&trace how the virus is spreading. pic.twitter.com/55kZo9bMhN

— Sophie in 't Veld (@SophieintVeld) March 25, 2020

The EDPS confirmed to us that the Commission consulted it on the proposed use of telco metadata.

A spokesman for the regulator pointed to a letter sent by Wiewiórowski to the Commission, following the latter’s request for guidance on monitoring the “spread” of COVID-19.

In the letter the EDPS impresses on the Commission the importance of “effective” data anonymization — which means it’s in effect saying a technique that does genuinely block re-identification of the data must be used. (There are plenty of examples of ‘anonymized’ location data being shown by researchers to be trivially easy to reidentify, given how many individual tells such data typically contains, like home address and workplace address.)

“Effective anonymisation requires more than simply removing obvious identifiers such as phone numbers and IMEI numbers,” warns the EDPS, adding too that aggregated data “can provide an additional safeguard”.
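Neither the Commission nor the EDPS has published the actual technique operators will use, but the distinction the EDPS draws can be sketched in a few lines of Python. Everything below (the field names, the data, the threshold) is an illustrative assumption, not the real pipeline:

    from collections import Counter

    # Hypothetical cell-level location pings: (subscriber, area).
    # Illustrative data only; real telco records are far richer, which is
    # exactly the re-identification risk.
    records = [
        ("alice", "paris-11"), ("alice", "ile-de-re"),
        ("bob", "paris-11"), ("carol", "lyon-03"),
    ]

    # Naive "anonymization": dropping the identifier still leaves one row
    # per person per place, i.e. a movement trace that researchers have
    # repeatedly shown can be re-identified from tells like home and work.
    pseudonymized = [(hash(uid), area) for uid, area in records]

    # Aggregation with suppression: publish only per-area counts, and drop
    # any cell below a minimum count so rare locations cannot single out
    # an individual.
    K = 2  # assumed threshold; a real deployment would set this via risk analysis
    counts = Counter(area for _, area in records)
    safe = {area: n for area, n in counts.items() if n >= K}

    print(safe)  # {'paris-11': 2} -- the two single-person cells are suppressed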

We also asked the Commission for more details on how the data will be anonymized and the level of aggregation that would be used — but it told us it could not provide further information at this stage. 

So far we understand that the anonymization and aggregation process will be undertaken before data is transferred by operators to a Commission science and research advisory body, called the Joint Research Centre (JRC) — which will perform the data analytics and modelling.

The results — in the form of predictions of propagation and so on — will then be shared by the Commission with EU Member States authorities. The datasets feeding the models will be stored on secure JRC servers.

The EDPS is equally clear on the Commission’s commitments vis-a-vis securing the data.

“Information security obligations under Commission Decision 2017/464 still apply [to anonymized data], as do confidentiality obligations under the Staff Regulations for any Commission staff processing the information. Should the Commission rely on third parties to process the information, these third parties have to apply equivalent security measures and be bound by strict confidentiality obligations and prohibitions on further use as well,” writes Wiewiórowski.

“I would also like to stress the importance of applying adequate measures to ensure the secure transmission of data from the telecom providers. It would also be preferable to limit access to the data to authorised experts in spatial epidemiology, data protection and data science.”

Data retention — or rather the need for prompt destruction of data sets after the emergency is over — is another key piece of the guidance.

“I also welcome that the data obtained from mobile operators would be deleted as soon as the current emergency comes to an end,” writes Wiewiórowski. “It should be also clear that these special services are deployed because of this specific crisis and are of temporary character. The EDPS often stresses that such developments usually do not contain the possibility to step back when the emergency is gone. I would like to stress that such solution should be still recognised as extraordinary.”

Interesting to note the EDPS is very clear on “full transparency” also being a requirement, both of purpose and “procedure”. So we should expect more details to be released about how the data is being effectively rendered unidentifiable.

“Allow me to recall the importance of full transparency to the public on the purpose and procedure of the measures to be enacted,” writes Wiewiórowski. “I would also encourage you to keep your Data Protection Officer involved throughout the entire process to provide assurance that the data processed had indeed been effectively anonymised.”

The EDPS has also requested to see a copy of the data model. At the time of writing the spokesman told us it’s still waiting to receive that.

“The Commission should clearly define the dataset it wants to obtain and ensure transparency towards the public, to avoid any possible misunderstandings,” Wiewiórowski added in the letter.

A Norwegian school quit using video calls after a naked man ‘guessed’ the meeting link

By Zack Whittaker

A school in Norway has stopped using popular video conferencing service Whereby after a naked man apparently “guessed” the link to a video lesson.

According to Norwegian state broadcaster NRK, the man exposed himself in front of several young children over the video call. The theory, according to the report, is that the man guessed the meeting ID and joined the video call.

One expert quoted in the story said some people actively go “looking” for such links.

Last year security researchers told TechCrunch that malicious users could access and listen in to Zoom and Webex video meetings by cycling through different permutations of meeting IDs in bulk. The researchers said the flaw worked because many meetings were not protected by a passcode.
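The researchers’ finding is easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal sketch; every figure in it is an assumption for illustration, not a number from their research:

    # Why unprotected numeric meeting IDs are enumerable, roughly.
    id_space = 10 ** 9          # assume a 9-digit numeric meeting ID
    active_meetings = 100_000   # assumed number of meetings live at any moment
    probes_per_second = 1_000   # assumed rate at which an attacker can test IDs

    hit_rate = active_meetings / id_space   # chance a random guess is live
    guesses_per_hit = 1 / hit_rate          # ~10,000 guesses per live meeting
    seconds_per_hit = guesses_per_hit / probes_per_second

    print(f"~{seconds_per_hit:.0f} seconds of guessing per meeting found")  # ~10

A passcode breaks those economics entirely, which is why so much of the reporting keeps coming back to default settings.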

Schools and workplaces across the world are embracing remote teaching and working as the number of those infected with COVID-19, the disease caused by the coronavirus, continues to climb. There are some 523,000 confirmed cases of COVID-19 across the world as of Thursday, according to data provided by Johns Hopkins University. Norway currently has over 3,300 confirmed cases.

More than 80% of the world’s population is said to be on some kind of lockdown to help limit the spread of the coronavirus in an effort to prevent the overrunning of health systems.

The ongoing global lockdown has forced companies to embrace their staff working from home, pushing Zoom to become the go-to video conferencing platform for not only remote workers but also for recreation, like book clubs and happy hours.

An earlier version of this article incorrectly said the video service used by the school was Zoom. The video conferencing service was Whereby. 

DataGuard, which provides GDPR and privacy compliance-as-a-service, raises $20M

By Ingrid Lunden

Watchdogs have started to raise the issue that new working practices and online activity necessitated by the spread of the coronavirus pandemic are creating new sets of privacy, security and data protection challenges. Today a startup is announcing a growth round of funding to help online businesses navigate those issues better.

DataGuard, a Munich-based startup that provides “GDPR-as-a-service” — essentially a cloud-based platform that helps online businesses comply with various regional regulations and best practices around privacy by analysing customers’ data processing activities, offering options and suggestions for improving privacy compliance, and giving them the ability to modify their IT infrastructure and internal processes accordingly — has raised $20 million, money that it will be using to continue expanding its business across Europe and the Americas and to continue investing in building out its technology.

The funding is coming from a single investor, London’s One Peak, and is the first outside funding for the company. We’re asking but it looks like DataGuard is not disclosing its valuation with this round.

The news is coming at a critical time in the world of venture funding. We are seeing a mix of deals that either were closed or close to closing before the worst of the pandemic reared its ugly head (meaning: some deals are just going to be put on ice, Zoom meeting or not); or are being done specifically to help with business continuity in the wake of all the interruption of normal life (that is, the business is too interesting not to help prop it up); or are closing specifically because the startup has built something that is going to demonstrate just how useful it is in the months to come.

As with the strongest of funding rounds, DataGuard falls into a couple of those categories.

On one hand, it had demonstrated demand for its services before any of this hit. Today, the startup provides privacy policy services to both small and medium businesses and larger enterprises, and it has picked up 1,000 customers since launching in 2017.

“Millions of companies are striving to comply with privacy regulation such as GDPR or CCPA,” said Thomas Regier (pictured, left), who co-founded the company with Kivanc Semen (right), in a statement. “We are excited to partner with One Peak to help many more organizations across the globe become and remain privacy compliant. Our Privacy-as-a-Service solution provides customers with access to a proprietary full-stack platform and services from a dedicated team of data privacy experts. This enables customers to gain insights into their data processing activities and to operationalize privacy and compliance across their entire organization.” Regier tells us that the company was bootstrapped to 100 employees, which underscores its capital efficiency, especially attractive at the moment.

On the other, the wholesale shift to more online and remote working, combined with a giant surge in online traffic caused by more people staying at home to reduce the number of new Covid-19 cases, is sending a lot more traffic to websites, apps and other online services, and stress testing them in the process.

All of that creates precisely the kind of environment where we might, for a period, overlook some of the trickier and more exacting aspects of privacy policies. Those aspects are nonetheless important to keep intact, lest malicious hackers take advantage of vulnerable situations, regulators refocus and come back with heavy fines when we return to “normal”, or consumers respond with bad PR and more.

“We have a truly horizontal product that has the potential to become an integral part of the tech stack in enterprises and SMBs alike,” said Semen in a statement. “We will use the funding to deliver on our product roadmap. We will achieve this in two ways: By increasing automation levels through improvements of the machine learning capabilities in our privacy software suite and by speeding up our development of new product categories.”

DataGuard is one of a number of startups that have emerged to help businesses navigate the waters of privacy regulations, which are usually not the core competencies of the companies but have become an essential part of how they can (and should) do business online.

Others include OneTrust, which also helps companies provide and run better data protection policies; and InCountry, which is specifically focused on providing services to help companies understand and comply with data protection policies that vary across different regions. OneTrust last year passed a $1 billion valuation, speaking to the huge opportunity and demand in this space.

One Peak believes that DataGuard’s take on the proposition is one of the more effective and efficient, one reason it’s backed the team. “We are incredibly excited to back DataGuard’s world-class founding team,” says David Klein, co-founder and managing partner at One Peak, in a statement. “We are convinced that DataGuard’s cutting-edge software suite combined with its comprehensive service offering provides both enterprises and SMBs with an end-to-end solution that fulfils their data privacy needs across the board.”

Instagram launches Co-Watching of posts during video chat

By Josh Constine

Now you can scroll Instagram together with friends, turning a typically isolating, passive experience into something more social and active. Today Instagram launched Co-Watching, which lets friends on a video chat or group video chat browse through feed posts one user has Liked or Saved, or that Instagram recommends.

Co-Watching could let people ooh, ahh, joke, and talk about Instagram’s content instead of just consuming it solo and maybe posting it to a chat thread so friends can do the same. That could lead to long usage sessions, incentivize users to collect a great repository of Saved posts to share, and spur more video calls that drag people into the app. TechCrunch first reported Instagram was testing Co-Watching a year ago, so we’ll see if it managed to work out the technical and privacy questions of operating the feature.

The launch comes alongside other COVID-19 responses from Instagram that include:

  • Showing a shared Instagram Story featuring all the posts from your network that include the “Stay Home” sticker
  • Adding Story stickers that remind people to wash their hands or keep their distance from others
  • Adding coronavirus educational info to the top of results for related searches
  • Removing unofficial COVID-19 accounts from recommendations, as well as virus-related content from Explore if it doesn’t come from a credible health organization
  • Expanding the donation sticker to more countries so people can search for and ask friends for contributions to relevant non-profits

These updates build on Instagram’s efforts from two weeks ago which included putting COVID-19 prevention tips atop the feed, listing official health organizations atop search results, and demoting the reach of coronavirus-related content rated false by fact checkers.

But Co-Watching will remain a powerful feature long after the quarantines and social distancing end. The ability to co-view content while browsing social networks has already made screensharing app Squad popular. When Squad launched in January 2019, I suggested that “With Facebook and Snap already sniffing around Squad, it’s quite possible they’ll try to copy it.” Facebook tested a Watch Together feature for viewing Facebook Watch videos inside Messenger back in April. And now here we are with Instagram.

The question is whether Squad’s first-mover advantage and option to screenshare from any app will let it hold its own, or if Instagram Co-Watching will just popularize the concept and send users searching for more flexible options like Squad. “Everyone knows that the content flooding our feeds is a filtered version of reality,” Squad CEO Esther Crawford told me. “The real and interesting stuff goes down in DMs because people are more authentic when they’re 1:1 or in small group conversations.”

With Co-Watching Instagram users can spill the tea and gossip about posts live and unfiltered over video chat. When people launch a video chat from the Direct inbox or a chat thread, they’ll see a “Posts” button that launches Co-Watching. They’ll be able to pick from their Liked, Saved, or Explore feeds and then reveal it to the video chat, with everyone’s windows lined up beneath the post.

Up to six people can Co-Watch at once on Instagram, consuming feed photos and videos but not IGTV posts. You can share public posts, or private ones that everyone in the chat is allowed to see. If one participant is blocked from viewing a post, it’s ineligible for Co-Watching.
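Instagram hasn’t said how that eligibility check is implemented; here is a hypothetical sketch of the rule as described, with all names and data structures invented for illustration:

    def can_co_watch(post_is_public, participants, blocked_by_author, followers_of_author):
        # A post is eligible only if every participant may view it: anyone
        # blocked by the author makes it ineligible, and private posts also
        # require every participant to be an approved follower.
        for user in participants:
            if user in blocked_by_author:
                return False
            if not post_is_public and user not in followers_of_author:
                return False
        return True

    # Up to six people can be on the call; the check runs across all of them.
    print(can_co_watch(False, {"ben", "cho"}, {"dan"}, {"ben", "cho"}))  # True
    print(can_co_watch(True, {"ben", "dan"}, {"dan"}, set()))            # False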

Co-Watching could finally provide an answer to Instagram’s Time Well Spent problem. Research shows how the real danger in social network overuse is passive content consumption like endless solo feed scrolling. It can inspire envy, poor self-esteem, and leave users deflated, especially if the highlights of everyone else’s lives look more interesting than their own day-to-day reality. But active sharing, commenting, and messaging can have a positive effect on well-being, making people feel like they have a stronger support network.

With Co-Watching, Instagram has found a way to turn the one-player experience into a multi-player game. Especially now with everyone stuck at home and unable to crowd around one person’s phone to gab about what they see, there’s a great need for this new feature. One concern is that it could be used for bullying, with people all making fun of someone’s posts.

But in general, the idea of sifting through cute animal photos, dance tutorials, or epic art could take the focus off of the individuals in a video chat. Not having one’s face as the center of attention could make video chat less performative and exhausting. Instead, Co-Watching could let us do apart what we love to do together: just hang out.

One neat plug-in to join a Zoom call from your browser

By Natasha Lomas

Want to join a Zoom meeting in the browser without having to download its app to do so? Check out this browser plug-in — which short-cuts the needless friction the videoconferencing company has baked into the process of availing yourself of its web client.

As we noted last week Zoom does have a zero download option — it just hides it really well, preferring to push people to download its app. It’s pretty annoying to say the least. Some have even called it irresponsible, during the coronavirus pandemic, given how many people are suddenly forced to work from home — where they may be using locked down corporate laptops that don’t allow them to download apps.

Software engineer Arkadiy Tetelman — currently the head of appsec/infrasec for US mobile bank Chime — was one of those who got annoyed by Zoom hiding the join via browser option. So he put together this nice little Zoom Redirector browser extension — which “transparently redirects any meeting links to use Zoom’s browser based web client”, as he puts it on GitHub.

“When joining a Zoom meeting, the ‘join from your browser’ link is intentionally hidden,” he warns. “This browser extension solves this problem by transparently redirecting any meeting links to use Zoom’s browser based web client.”

It kills me that Zoom intentionally hides the "join from your browser" link, so here's a small (20 line) browser extension that transparently redirects Zoom links to use their web client: https://t.co/ZeYmmS2R2A https://t.co/50f6ak4i9x

— Arkadiy Tetelman (@arkadiyt) March 22, 2020

So far the extension is available for Chrome and Firefox. At the time of writing submissions are listed as pending for Opera and Edge.

As others have noted, it does remain possible to perform a redirect manually, by adding your meeting ID to a Zoom web client link — zoom.us/wc/join/{your-meeting-id} — though if you’re being asked to join a bunch of Zoom meetings it’s clearly a lot more convenient to have a browser plug-in take the strain for you versus saddling yourself with copy-pasting meeting IDs.
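The real extension is a roughly 20-line piece of JavaScript, as Tetelman notes in his tweet above; purely for illustration, the rewrite it performs can be sketched in Python, assuming the /wc/join/ path noted above:

    import re

    def to_web_client(url: str) -> str:
        # Rewrite https://zoom.us/j/<id> (or a vanity subdomain such as
        # company.zoom.us) to the equivalent web client link.
        m = re.match(r"(https://[\w.-]*zoom\.us)/j/(\d+)", url)
        if not m:
            return url  # not a Zoom join link; pass through untouched
        host, meeting_id = m.groups()
        return f"{host}/wc/join/{meeting_id}"

    print(to_web_client("https://zoom.us/j/123456789"))
    # -> https://zoom.us/wc/join/123456789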

While the COVID-19 pandemic has generally fuelled the use of videoconferencing, Zoom appears to be an early beneficiary — with the app enjoying a viral boom (in the digital sense of the term) in recent weeks that’s been great for earnings growth (if not immediately for its share price when it reported its Q4 bounty). And unsurprisingly it’s forecasting a bumper year.

But it’s not all positive vibes for Zoom right now. Another area where the company has faced critical attention in recent days relates to user privacy.

Over the weekend another Twitter user, going by the handle @ouren, posted a critical thread that garnered thousands of likes and retweets — detailing how Zoom can track activity on the user’s computer, including harvesting data on what other programs are running and which window the user has in the foreground.

Everyone working remotely:

ZOOM monitors the activity on your computer and collects data on the programs running and captures which window you have focus on.

If you manage the calls, you can monitor what programs users on the call are running as well. It's fucked up.

— Wolfgang ʬ (@Ouren) March 21, 2020

The thread included a link to an EFF article about the privacy risks of remote working tools, including Zoom.

“The host of a Zoom call has the capacity to monitor the activities of attendees while screen-sharing,” the digital rights group warned. “This functionality is available in Zoom version 4.0 and higher. If attendees of a meeting do not have the Zoom video window in focus during a call where the host is screen-sharing, after 30 seconds the host can see indicators next to each participant’s name indicating that the Zoom window is not active.”

Given the sudden spike in attention around privacy, Zoom chipped into the discussion with an official response, writing that the “attention tracking feature is off by default”.

“Once enabled, hosts can tell if participants have the App open and active when the screen-sharing feature is in use,” it added. “It does not track any aspects of your audio/video or other applications on your window.”

Hi, attention tracking feature is off by default – once enabled, hosts can tell if participants have the App open and active when the screen-sharing feature is in use. It does not track any aspects of your audio/video or other applications on your window. https://t.co/sWWfrsXe42

— Zoom (@zoom_us) March 22, 2020

However the company did not explain why it offers such a privacy-hostile feature as “attention tracking” in the first place.

Grindr sold by Chinese owner after US raised national security concerns

By Zack Whittaker

Chinese gaming giant Beijing Kunlun has agreed to sell popular gay dating app Grindr for about $608 million, ending a tumultuous four years under Chinese ownership.

Reuters reports that the Chinese company sold its 98% stake in Grindr to a U.S.-based company, San Vicente Acquisition Partners.

The app, originally developed in Los Angeles, raised national security concerns after it was acquired by Beijing Kunlun in 2016 for $93 million. That ownership was later scrutinized by a U.S. government national security panel, the Committee on Foreign Investment in the United States (CFIUS), which reportedly told the Beijing-based parent company that its ownership of Grindr constituted a national security threat.

CFIUS expressed concern that data from the app’s some 27 million users could be used by the Chinese government. Last year, it was reported that while under Chinese ownership, Grindr allowed engineers in Beijing access to the personal data of millions of U.S. users, including their private messages and HIV status.

Beijing Kunlun had agreed to sell the unit by June.

Little is known about San Vicente Acquisition, but a person with knowledge of the deal said that the company is made up of a group of investors that’s fully owned and controlled by Americans. Reuters said that one of those investors is James Lu, a former executive at Chinese search giant Baidu.

The deal is subject to shareholder approval and a review by CFIUS.

A spokesperson for Grindr declined to comment on the record.

Cathay Pacific fined £500k by UK’s ICO over data breach disclosed in 2018

By Natasha Lomas

Cathay Pacific has been issued with a £500,000 penalty by the UK’s data watchdog for security lapses which exposed the personal details of some 9.4 million customers globally — 111,578 of whom were from the UK.

The penalty, which is the maximum fine possible under relevant UK law, was announced today by the Information Commissioner’s Office (ICO), following a multi-month investigation. It pertains to a breach disclosed by the airline in fall 2018.

At the time Cathay Pacific said it had first identified unauthorized access to its systems in March, though it did not explain why it took more than six months to make a public disclosure of the breach.

The failure to secure its systems resulted in unauthorised access to passengers’ personal details, including names, passport and identity details, dates of birth, postal and email addresses, phone numbers and historical travel information.

Today the ICO said the earliest date of unauthorised access to Cathay Pacific’s systems was October 14, 2014, while the earliest known date of unauthorised access to personal data was February 7, 2015.

“The ICO found Cathay Pacific’s systems were entered via a server connected to the internet and malware was installed to harvest data,” the regulator writes in a press release, adding that it found “a catalogue of errors” during the investigation, including back-up files that were not password protected; unpatched Internet-facing servers; use of operating systems that were no longer supported by the developer; and inadequate antivirus protection.

Since Cathay’s systems were compromised in this breach the UK has transposed an update to the European Union’s data protection framework into its national law, which bakes in strict disclosure requirements for breaches involving personal data — requiring data controllers to inform national regulators within 72 hours of becoming aware of a breach.

The General Data Protection Regulation (GDPR) also includes a much more substantial penalties regime — with fines that can scale as high as 4% of global annual turnover.

However owing to the timing of the unauthorized access the ICO has treated this breach as falling under previous UK data protection legislation.

Under GDPR the airline would likely have faced a substantially larger fine.

Commenting on Cathay Pacific’s penalty in a statement, Steve Eckersley, the ICO’s director of investigations, said:

People rightly expect when they provide their personal details to a company, that those details will be kept secure to ensure they are protected from any potential harm or fraud. That simply was not the case here.

This breach was particularly concerning given the number of basic security inadequacies across Cathay Pacific’s system, which gave easy access to the hackers. The multiple serious deficiencies we found fell well below the standard expected. At its most basic, the airline failed to satisfy four out of five of the National Cyber Security Centre’s basic Cyber Essentials guidance.

Under data protection law organisations must have appropriate security measures and robust procedures in place to ensure that any attempt to infiltrate computer systems is made as difficult as possible.

Reached for comment the airline reiterated its regret over the data breach and said it has taken steps to enhance its security “in the areas of data governance, network security and access control, education and employee awareness, and incident response agility”.

“Substantial amounts have been spent on IT infrastructure and security over the past three years and investment in these areas will continue,” Cathay Pacific said in the statement. “We have co-operated closely with the ICO and other relevant authorities in their investigations. Our investigation reveals that there is no evidence of any personal data being misused to date. However, we are aware that in today’s world, as the sophistication of cyber attackers continues to increase, we need to and will continue to invest in and evolve our IT security systems.”

“We will continue to co-operate with relevant authorities to demonstrate our compliance and our ongoing commitment to protecting personal data,” it added.

Last summer the ICO slapped another airline, British Airways, with a far more substantial fine for a breach that leaked data on 500,000 customers, also as a result of security lapses.

In that case the airline faced a record £183.39M penalty — totalling 1.5% of its total revenues for 2018 — as the timing of the breach occurred when the GDPR applied.

FCC proposes $208M in fines for wireless carriers that sold your location for years

By Devin Coldewey

The FCC has officially and finally determined that the major wireless carriers in the U.S. broke the law by secretly selling subscribers’ location data for years with almost no constraints or disclosure. But its Commissioners decry the $208 million penalty proposed to be paid by these enormously rich corporations, calling it “not properly proportioned to the consumer harms suffered.”

Under the proposed fines, T-Mobile would pay $91M; AT&T, $57M; Verizon, $48M; and Sprint, $12M. (Disclosure: TechCrunch is owned by Verizon Media. This does not affect our coverage in the slightest.)

The case has stretched on for more than a year and a half after initial reports that private companies were accessing and selling real-time subscriber location data to anyone willing to pay. Such a blatant abuse of consumers’ privacy caused an immediate outcry, and carriers responded with apparent chagrin — but often failed to terminate or even evaluate these programs in a timely fashion. It turns out they were run with almost no oversight at all, with responsibility delegated to the third party companies to ensure compliance.

Meanwhile the FCC was called on to investigate the nature of these offenses, and spent more than a year doing so in near-total silence, with even its own Commissioners calling out the agency’s lack of communication on such a serious issue.

Finally, in January, FCC Chairman Ajit Pai — who, it really must be noted here, formerly worked for one of the main companies implicated, Securus — announced that the investigation had found the carriers had indeed violated federal law and would soon be punished.

Today brings the official documentation of the fines, as well as commentary from the Commission. In the documents, the carriers are described as not only doing something bad, but doing it poorly — and especially in T-Mobile’s case, continuing to do it well after they said they’d stop:

We find that T-Mobile apparently disclosed its customers’ location information, without their consent, to third parties who were not authorized to receive it. In addition, even after highly publicized incidents put the Company on notice that its safeguards for protecting customer location information were inadequate, T-Mobile apparently continued to sell access to its customers’ location information for the better part of a year without putting in place reasonable safeguards—leaving its customers’ data at unreasonable risk of unauthorized disclosure.

The general feeling seems to be that while it’s commendable to recognize this violation and propose what could be considered substantial fines, the whole thing is, as Commissioner Rosenworcel put it, “a day late and a dollar short.”

The scale of the fines, they say, has little to do with the scale of the offenses — and that’s because the investigation did not adequately investigate or attempt to investigate the scale of those offenses. As Commissioner Starks writes in a lengthy statement:

After all these months of investigation, the Commission still has no idea how many consumers’ data was mishandled by each of the carriers.

We had the power—and, given the length of this investigation, the time—to compel disclosures that would help us understand the true scope of the harm done to consumers. Instead, the Notices calculate the forfeiture based on the number of contracts between the carriers and location aggregators, as well as the number of contracts between those aggregators and third-party location-based service providers. That is a poor and unnecessary proxy for the privacy harm caused by each carrier, each of which has tens of millions of customers that likely had their personal data abused.

Essentially, the FCC didn’t even look at the number or nature of actual harm — it just asked the carriers to provide the number of contracts entered into. As Starks points out, one such contract can and did sometimes represent thousands of individual privacy invasions.

We know there are many—perhaps millions—of additional victims, each with their own harms. Unfortunately, based on the investigation the FCC conducted, we don’t even know how many there were, and the penalties we propose today do not reflect that impact.

And why not go after the individual companies? Securus, Starks says, “behaved outrageously.” But they’re not being fined at all. Even if the FCC lacked the authority to do so, it could have handed off the case to the Justice Department or local authorities that could determine whether these companies violated other laws.

As Rosenworcel notes in her own statement, the fines are also extraordinarily generous even beyond this minimal method of calculating harm:

The agency proposes a $40,000 fine for the violation of our rules—but only on the first day. For every day after that, it reduces to $2,500 per violation. The FCC heavily discounts the fines the carriers potentially owe under the law and disregards the scope of the problem. On top of that, the agency gives each carrier a thirty-day pass from this calculation. This thirty day “get-out-of-jail-free” card is plucked from thin air.
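To put rough numbers on that complaint, here is an illustrative sketch of the fee schedule she describes; how the thirty-day pass interacts with the first-day amount is an assumption, since the article does not spell it out:

    FIRST_DAY_FINE = 40_000   # per violation, first day
    LATER_DAY_FINE = 2_500    # per violation, each day after the first
    GRACE_DAYS = 30           # the thirty-day pass Rosenworcel criticizes

    def proposed_fine(days_in_violation: int) -> int:
        # Assumes the grace period comes off the top before any fine accrues.
        billable = max(0, days_in_violation - GRACE_DAYS)
        if billable == 0:
            return 0
        return FIRST_DAY_FINE + (billable - 1) * LATER_DAY_FINE

    # A single violation running a full year comes out well under $1 million:
    print(proposed_fine(365))  # 40000 + 334 * 2500 = 875,000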

Given that this investigation took place over such a long period, it’s strange that it did not seek to hear from the public or subpoena further details from the companies facilitating the violations. Meanwhile the carriers sought to declare a huge proportion of their responses to the FCC’s questions confidential, including publicly available information, and the agency didn’t question these assertions until Starks and Rosenworcel intervened.

$200M sounds like a lot, but divided among several billion-dollar communications organizations it’s peanuts, especially when you consider that these location-selling agreements may have netted far more than that in the years they were active. Only the carriers know exactly how many times their subscribers’ privacy was violated, and how much money they made from that abuse. And because the investigation has ended without the authority over these matters asking about it, we likely never will know.

The proposed fines, called a Notice of Apparent Liability, are only a tentative finding, and the carriers have 30 days to respond or ask for an extension — the latter of which is the more likely. Once they respond (perhaps challenging the amount or something else) the FCC can take as long as it wants to come up with a final fine amount. And once that is issued, there is no requirement that the fine actually be collected — and the FCC has in fact declined to collect before once the heat died down, though not with a penalty of this scale.

“While I am glad the FCC is finally proposing fines for this egregious behavior, it represents little more than the cost of doing business for these carriers,” Congressman Frank Pallone (D-NJ) said in a statement. “Further, the Commission is still a long way from collecting these fines and holding the companies fully accountable.”

The only thing that led to this case being investigated at all was public attention, and apparently public attention is necessary to ensure the federal government follows through on its duties.

(This article has been substantially updated with new information, plus comments from Commissioner Starks and Rep. Pallone.)

Clearview said its facial recognition app was only for law enforcement as it courted private companies

By Taylor Hatmaker

After claiming that it would only sell its controversial facial recognition software to law enforcement agencies, a new report suggests that Clearview AI is less than discerning about its client base. According to Buzzfeed News, the small, secretive company looks to have shopped its technology far and wide. While Clearview counts ICE, the U.S. Attorney’s Office for the Southern District of New York and the retail giant Macy’s among its paying customers, many more private companies are testing the technology through 30-day free trials. Non-law enforcement entities that appeared on Clearview’s client list include Walmart, Eventbrite, the NBA, Coinbase, Equinox, and many others.

According to the report, even if a company or organization has no formal relationship with Clearview, its individual employees might be testing the software. “In some cases… officials at a number of those places initially had no idea their employees were using the software or denied ever trying the facial recognition tool,” Buzzfeed News reports.

In one example, the NYPD denied a relationship with Clearview, even as up to 30 officers within the department conducted 11,000 searches through the software, according to internal logs.

A week ago, Clearview’s CEO Hoan Ton-That was quoted on Fox Business stating that his company’s technology is “strictly for law enforcement”—a claim the company’s budding client list appears to contradict.

“This list, if confirmed, is a privacy, security, and civil liberties nightmare,” ACLU Staff Attorney Nathan Freed Wessler said of the revelations. “Government agents should not be running our faces against a shadily assembled database of billions of our photos in secret and with no safeguards against abuse.”

On top of its reputation as an invasive technology, critics argue that facial recognition tech isn’t accurate enough to be used in the high-consequence settings it’s often touted for. Facial recognition software has notoriously struggled to accurately identify non-white, non-male faces, a phenomenon that undergirds arguments that biased data has the potential to create devastating real-world consequences.

Little is known about the technology that powers Clearview’s own algorithms and accuracy beyond that the company scrapes public images from many online sources, aggregates that data, and allows users to search it for matches. In light of Clearview’s reliance on photos from social networks, Facebook, YouTube, and Twitter have all issued the company cease-and-desist letters for violating their terms of use.

Clearview’s small pool of early investors includes the private equity firm Kirenaga Partners and famed investor and influential tech conservative Peter Thiel. Thiel, who sits on the board of Facebook, also co-founded Palantir, a data analytics company that’s become a favorite of law enforcement.
