Maryland and Montana have become the first U.S. states to pass laws that make it tougher for law enforcement to access DNA databases.
The new laws, which aim to safeguard the genetic privacy of millions of Americans, focus on consumer DNA databases, such as 23andMe, Ancestry, GEDmatch and FamilyTreeDNA, all of which let people upload their genetic information and use it to connect with distant relatives and trace their family tree. While popular — 23andMe has more than three million users, and GEDmatch more than one million — many are unaware that some of these platforms share genetic data with third parties, from the pharmaceutical industry and scientists to law enforcement agencies.
When used by law enforcement through a technique known as forensic genetic genealogy searching (FGGS), officers can upload DNA evidence found at a crime scene to identify possible suspects through their relatives. The most famous example is the identification of the Golden State Killer in 2018, when investigators uploaded a DNA sample taken at the scene of a 1980 murder linked to the serial killer to GEDmatch and subsequently identified distant relatives of the suspect, a critical breakthrough that led to the arrest of Joseph James DeAngelo.
While law enforcement agencies have seen success in using consumer DNA databases to aid with criminal investigations, privacy advocates have long warned of the dangers of these platforms. Not only can these DNA profiles help trace distant ancestors, but the vast troves of genetic data they hold can divulge a person’s propensity for various diseases, predict addiction and drug response, and even be used by companies to create images of what they think a person looks like.
While Ancestry and 23andMe have kept their genetic databases closed to law enforcement without a warrant, GEDmatch (which was acquired by a crime scene DNA company in December 2019) and FamilyTreeDNA have previously shared their databases with investigators.
To ensure the genetic privacy of the accused and their relatives, Maryland will, starting October 1, require law enforcement to get a judge’s sign-off before using genetic genealogy, and will limit its use to serious crimes like murder, kidnapping, and human trafficking. It also says that investigators can only use databases that explicitly tell users that their information could be used to investigate crimes.
In Montana, where the new rules are somewhat narrower, law enforcement will need a warrant before searching a DNA database unless the user has waived their right to privacy.
The laws “demonstrate that people across the political spectrum find law enforcement use of consumer genetic data chilling, concerning and privacy-invasive,” said Natalie Ram, a law professor at the University of Maryland. “I hope to see more states embrace robust regulation of this law enforcement technique in the future.”
The introduction of these laws has also been roundly welcomed by privacy advocates, including the Electronic Frontier Foundation. Jennifer Lynch, surveillance litigation director at the EFF, described the restrictions as a “step in the right direction,” but called for more states — and the federal government — to crack down further on FGGS.
“Our genetic data is too sensitive and important to leave it up to the whims of private companies to protect it and the unbridled discretion of law enforcement to search it,” Lynch said.
“Companies like GEDmatch and FamilyTreeDNA have allowed and even encouraged law enforcement searches. Because of this, law enforcement officers are increasingly accessing these databases in criminal investigations across the country.”
A spokesperson for 23andMe told TechCrunch: “We fully support legislation that provides consumers with stronger privacy protections. In fact we are working on legislation in a number of states to increase consumer genetic privacy protections. Customer privacy and transparency are core principles that guide 23andMe’s approach to responding to legal requests and maintaining customer trust. We closely scrutinize all law enforcement and regulatory requests and we will only comply with court orders, subpoenas, search warrants or other requests that we determine are legally valid. To date we have not released any customer information to law enforcement.”
GEDmatch and FamilyTreeDNA, both of which opt users into law enforcement searches by default, told the New York Times that they have no plans to change their existing policies around user consent in response to the new regulation.
Ancestry did not immediately comment.
The Cybersecurity and Infrastructure Security Agency has launched a vulnerability disclosure program allowing ethical hackers to report security flaws to federal agencies.
The platform, launched with the help of cybersecurity companies Bugcrowd and Endyna, will allow civilian federal agencies to receive, triage and fix security vulnerabilities from the wider security community.
The move to launch the platform comes less than a year after the federal cybersecurity agency, better known as CISA, directed the civilian federal agencies that it oversees to develop and publish their own vulnerability disclosure policies. These policies are designed to set the rules of engagement for security researchers by outlining what (and how) online systems can be tested, and which can’t be.
It’s not uncommon for private companies to run vulnerability disclosure programs to allow hackers to report bugs, often in conjunction with a bug bounty that pays hackers for their work. But while the U.S. Department of Defense has for years warmed to hackers, the civilian federal government has been slower to follow.
Bugcrowd, which last year raised $30 million in its Series D round, said the platform will “give agencies access to the same commercial technologies, world-class expertise, and global community of helpful ethical hackers currently used to identify security gaps for enterprise businesses.”
The platform will also help CISA share information about security flaws among agencies.
The platform launches after a bruising few months for government cybersecurity, including a Russian-led espionage campaign that compromised at least nine U.S. federal agencies via software house SolarWinds, and a China-linked cyberattack that backdoored thousands of Microsoft Exchange servers, including some in the federal government.
Through its Ministry of Information and Culture today, the Nigerian government announced its decision to suspend the activities of social media platform Twitter in the country.
The statement, made by Minister of Information and Culture, Lai Mohammed, and signed off by his media aide Segun Adeyemi, could see telecom operators in the country prevent Nigerians from using Twitter.
Here’s the statement issued by the ministry:
The Federal Government has suspended indefinitely the operations of the microblogging and social networking service Twitter in Nigeria. The Minister of Information and Culture, Alhaji Lai Mohammed, announced the suspension in a statement issued in Abuja on Friday, citing the persistent use of the platform for activities that are capable of undermining Nigeria’s corporate existence.
The Minister said the Federal Government has also directed the National Broadcasting Commission (NBC) to immediately commence the process of licensing all OTT and social media operations in Nigeria.
Today’s announcement is the culmination of events from the past week. Yesterday, Twitter deleted tweets and videos of President Muhammadu Buhari threatening to punish the separatist group IPOB in the southeastern part of the country after he blamed it for attacks on government buildings. He also referenced events from Nigeria’s civil war in the 1960s, which offended many Nigerians.
Buhari, who was the country’s Head of State in the 1980s and served in the army against secessionists, said young Nigerians in the country’s southeastern part were too young to remember the horrible events that occurred during the war. According to him, the activities of the present-day secessionists are likely headed toward war; hence, it was proactive to stop them beforehand with force.
“Those of us in the fields for 30 months, who went through the war, will treat them in the language they understand,” he said.
Twitter deleted the tweet for violating its abusive behaviour policy, following several calls by Nigerians to take it down. Twitter also suspended the president’s account, leaving it in a “read-only mode” for 12 hours.
Following its decision, Mr Mohammed called out the social media giant by saying its decision was biased and said the president had a right to express his thoughts on events that affect the country. He also raised suspicion about the platform’s intention in the country. “Twitter may have its own rules; it’s not the universal rule. If Mr President anywhere in the world feels very bad and concerned about a situation, he is free to express such views… The mission of Twitter in Nigeria is very, very suspect,” he said.
In an apparent act of retaliation, Nigeria has suspended the platform’s operations in the country. While Twitter doesn’t have any offices in Nigeria, the suspension could still take effect through local telecom operators. And although no social media ban has been enforced yet, Nigeria’s current administration is no stranger to attempts to restrict access to the internet, certain websites or social media; it was one of the tactics used during the EndSARS protests that rocked the country in October 2020. Given past events in other African countries where the internet has been restricted or banned in one form or another, this looks like a ploy by the Nigerian government to double down on those tactics and use telecom operators to repress free speech.
TechCrunch has reached out to Twitter for comments.
This is a developing story…
Biden Labor Secretary Marty Walsh charged into the white hot issue of the gig economy Thursday, asserting that many people working without benefits in the gig economy should be classified as employees instead.
In an interview with Reuters, Walsh said that the Department of Labor is “looking at” the gig economy, hinting that worker reclassification could be a priority in the Biden administration.
“… In a lot of cases gig workers should be classified as employees,” Walsh said. “In some cases they are treated respectfully and in some cases they are not and I think it has to be consistent across the board.”
Walsh also said that the labor department would be talking to companies that benefit from gig workers to ensure that non-employees at those companies have the same benefits that an “average employee” in the U.S. would have.
“These companies are making profits and revenue and I’m not [going to] begrudge anyone for that because that’s what we are about in America… but we also want to make sure that success trickles down to the worker,” Walsh said.
Walsh’s comments aren’t yet backed by federal action, but they still made major waves among tech companies that leverage non-employee labor. Uber and Lyft stock dipped on the news Thursday, along with DoorDash.
In the interview, Walsh also touched on pandemic-related concerns about gig workers who lack unemployment insurance and health care through their employers. The federal government has picked up the slack during the pandemic with two major bills granting gig workers some benefits, but otherwise they’re largely without a safety net.
Reforming labor laws has been a tenet of Biden’s platform for some time and the president has been very vocal about bolstering worker protections and supporting organized labor. One section of then President-elect Biden’s transition site was devoted to expanding worker protections, calling the misclassification of employees as contract workers an “epidemic.”
Biden echoed his previous support for labor unions during a joint address to Congress Wednesday night, touting the Protecting the Right to Organize Act — legislation that would protect workers looking to form or join unions. That bill would also expand federal whistleblower protections.
“The middle class built this country,” Biden said. “And unions build the middle class.”
SpaceX is continuing its Starship spacecraft testing and development program apace, and as of this afternoon it has authorization from the U.S. Federal Aviation Administration (FAA) to conduct its next three test flights from its launch site in Boca Chica, Texas. Approvals for prior launch tests have been one-offs, but the FAA said in a statement that it’s approving these in a batch because “SpaceX is making few changes to the launch vehicle and relied on the FAA’s approved methodology to calculate the risk to the public.”
SpaceX is set to launch its SN15 test Starship as early as this week, with the condition that an FAA inspector be present at the time of the launch at the facility in Boca Chica. The regulator says it has sent an inspector, who is expected to arrive today, which could pave the way for a launch attempt in the next couple of days.
The last test flight SpaceX attempted from Boca Chica was the launch of SN11, which occurred at the end of March. That ended badly, after a mostly successful initial climb to an altitude of around 30,000 feet and flip maneuver, with an explosion triggered by an error in one of the Raptor engines used to control the powered landing of the vehicle.
In its statement about the authorization of the next three attempts, the FAA noted that the investigation into what happened with SN11 and its unfortunate ending is still in progress, but added that even so, the agency has determined any public safety concerns related to what went wrong have been alleviated.
The three-launch approval license includes flights of SN16 and SN17 as well as SN15, but the FAA noted that after the first flight, the next two might require additional “corrective action” prior to actually taking off, pending any new “mishap” occurring with the SN15 launch.
SpaceX CEO Elon Musk has at times criticized the FAA for not being flexible or responsive enough to the rapid pace of iteration and testing that SpaceX is pursuing in Starship’s development. On the other side, members of Congress have suggested that the FAA has perhaps not been as thorough as necessary in independently investigating earlier Starship testing mishaps. The administration contends that the lack of any ultimate resulting impact to public safety is indicative of the success of its program thus far, however.
With the increase of digital transacting over the past year, cybercriminals have been having a field day.
In 2020, complaints of suspected internet crime surged by 61%, to 791,790, according to the FBI’s 2020 Internet Crime Report. Those crimes — ranging from personal and corporate data breaches to credit card fraud, phishing and identity theft — cost victims more than $4.2 billion.
For companies like Sift, which aims to predict and prevent online fraud more quickly than cybercriminals can adopt new tactics, that increase in crime also led to an increase in business.
Last year, the San Francisco-based company assessed risk on more than $250 billion in transactions, double what it did in 2019. The company has several hundred customers, including Twitter, Airbnb, Twilio, DoorDash, Wayfair and McDonald’s, as well as a global data network of 70 billion events per month.
To meet the surge in demand, Sift said today it has raised $50 million in a funding round that values the company at over $1 billion. Insight Partners led the financing, which included participation from Union Square Ventures and Stripes.
While the company would not reveal hard revenue figures, President and CEO Marc Olesen said that business has tripled since he joined the company in June 2018. Sift was founded out of Y Combinator in 2011, and has raised a total of $157 million over its lifetime.
The company’s “Digital Trust & Safety” platform aims to help merchants not only fight all types of internet fraud and abuse, but also “reduce friction” for legitimate customers. There’s apparently a fine line between looking out for a merchant and upsetting a customer who is legitimately trying to conduct a transaction.
Sift uses machine learning and artificial intelligence to automatically surmise whether an attempted transaction or interaction with a business online is authentic or potentially problematic.
One of the things the company has discovered is that fraudsters are often not working alone.
“Fraud vectors are no longer siloed. They are highly innovative and often working in concert,” Olesen said. “We’ve uncovered a number of fraud rings.”
Olesen shared a couple of examples of how the company thwarted fraud incidents last year. One recently involved money laundering through donation sites where fraudsters tested stolen debit and credit cards through fake donation sites at guest checkout.
“By making small donations to themselves, they laundered that money and at the same time tested the validity of the stolen cards so they could use them on another site with significantly higher purchases,” he said.
In another case, the company uncovered fraudsters using Telegram, a messaging app, to offer services such as food delivery purchased with stolen credentials.
The data that Sift has accumulated since its inception helps the company “act as the central nervous system for fraud teams.” Sift says that its models become more intelligent with every customer that it integrates.
Insight Partners Managing Director Jeff Lieberman, who is a Sift board member, said his firm initially invested in Sift in 2016 because even at that time, it was clear that online fraud was “rapidly growing.” It was growing not just in dollar amounts, he said, but in the number of methods cybercriminals used to steal from consumers and businesses.
“Sift has a novel approach to fighting fraud that combines massive data sets with machine learning, and it has a track record of proving its value for hundreds of online businesses,” he wrote via email.
When Olesen and the Sift team started the recent process of fundraising, Insight actually approached them before they started talking to outside investors “because both the product and business fundamentals are so strong, and the growth opportunity is massive,” Lieberman added.
“With more businesses heavily investing in online channels, nearly every one of them needs a solution that can intelligently weed out fraud while ensuring a seamless experience for the 99% of transactions or actions that are legitimate,” he wrote.
The company plans to use its new capital primarily to expand its product portfolio and to scale its product, engineering and sales teams.
Sift also recently tapped Eu-Gene Sung — who has worked in financial leadership roles at Integral Ad Science, BSE Global and McCann — to serve as its CFO.
As to whether that means an IPO is in Sift’s future, Olesen said that Sung’s experience taking companies through a growth phase like the one Sift is experiencing would be valuable. The company is also for the first time looking to potentially do some M&A.
“When we think about expanding our portfolio, it’s really a buy/build partner approach,” Olesen said.
“If you build it, they will come” is a mantra that’s been repeated for more than three decades to embolden action. The line from “Field of Dreams” is a powerful saying, but I might add one word: “If you build it well, they will come.”
America’s Lifeline program, a monthly subsidy designed to help low-income families afford critical communications services, was created with the best intentions. The original goal was to achieve universal telephone service, but it has fallen far short of achieving its potential as the Federal Communications Commission has attempted to convert it to a broadband-centric program.
The FCC’s Universal Service Administrative Company estimates that only 26% of the families that are eligible for Lifeline currently participate in the program. That means that nearly three out of four low-income consumers are missing out on a benefit for which they qualify. But that doesn’t mean the program should be abandoned, as the Biden administration’s newly released infrastructure plan suggests.
Rather, now is the right opportunity to complete the transformation of Lifeline to broadband and expand its utilization by increasing the benefit to a level commensurate with the broadband marketplace and making the benefit directly available to end users. Instead, the White House fact sheet on the plan recommends price controls for internet access services with a phaseout of subsidies for low-income subscribers. That is a flawed policy prescription.
If maintaining America’s global competitiveness, building broadband infrastructure in high-cost rural areas, and maintaining the nation’s rapid deployment of 5G wireless services are national goals, the government should not set prices for internet access.
Forcing artificially low prices in the quest for broadband affordability would leave internet service providers with insufficient revenues to continue to meet the nation’s communications infrastructure needs with robust innovation and investment.
Instead, targeted changes to the Lifeline program could dramatically increase its participation rate, helping to realize the goal of connecting Americans most in need with the phone and broadband services that in today’s world have become essential to employment, education, healthcare and access to government resources.
To start, Lifeline program participation should be made much easier. Today, individuals seeking the benefit must go through a process of self-enrollment. Implementing “coordinated enrollment” — through which individuals would automatically be enrolled in Lifeline when they qualify for certain other government assistance benefits, including SNAP (the Supplemental Nutrition Assistance Program, formerly known as food stamps) and Medicaid — would help to address the severe program underutilization.
Because multiple government programs serve the same constituency, a single qualification process for enrollment in all applicable programs would generate government efficiencies and reach Americans who are missing out.
Speaking before the American Enterprise Institute back in 2014, former FCC Commissioner Mignon Clyburn said, “In most states, to enroll in federal benefit programs administered by state agencies, consumers already must gather their income-related documentation, and for some programs, go through a face-to-face interview. Allowing customers to enroll in Lifeline at the same time as they apply for other government benefits would provide a better experience for consumers and streamline our efforts.”
Second, the use of the Lifeline benefit can be made far simpler for consumers if the subsidy is provided directly to them via an electronic Lifeline benefit card account — like the SNAP program’s electronic benefit transfer (EBT) card. Not only would a Lifeline benefit card make participation in the program more convenient, but low-income Americans would then be able to shop among the various providers and select the carrier and the precise service(s) that best suit their needs. The flexibility of greater consumer choice would encourage more program sign-ups.
Third, the current Lifeline subsidy amount — $9.25 per month — isn’t enough to pay for a broadband subscription. For the subsidy to be truly meaningful, an increase in the monthly benefit is needed. Last December, Congress passed the temporary Emergency Broadband Benefit to provide low-income Americans up to a $50 per month discount ($75 per month on tribal lands) to offset the cost of broadband connectivity during the pandemic. After the emergency benefit runs out, a monthly benefit adequate to defray the cost of a broadband subscription will be needed.
In order to support more than a $9.25 monthly benefit, the funding source for the Lifeline program must also be reimagined. Currently, the program relies on the FCC’s Universal Service Fund, which is financed through a “tax” on traditional long-distance and international telephone services.
As greater use is made of the web for voice communications, coupled with less use of traditional telephones, the tax rate has increased to compensate for the shrinking revenues associated with landline phone services. A decade ago, the tax, known as the “contribution factor,” was 15.5%, but it’s now more than double that at an unsustainable 33.4%. Without changes, the problem will only worsen.
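The arithmetic behind that climb is simple: the fund’s revenue need stays roughly flat, so as the assessable long-distance revenue base shrinks, the contribution factor must rise in inverse proportion. A minimal sketch, using hypothetical dollar figures chosen only for illustration:

```python
# Illustrative only: hypothetical figures showing why the contribution
# factor rises as the assessable (long-distance/international) revenue
# base shrinks while the fund's quarterly need stays roughly flat.

def contribution_factor(fund_need: float, assessable_revenue: float) -> float:
    """Rate carriers must pay on assessable revenue to cover the fund."""
    return fund_need / assessable_revenue

need = 2.0  # hypothetical fund need, in billions of dollars per quarter
then = contribution_factor(need, assessable_revenue=12.9)  # about 15.5%
now = contribution_factor(need, assessable_revenue=6.0)    # about 33.3%
print(f"then: {then:.1%}, now: {now:.1%}")
```

With the revenue base halved and the fund’s need unchanged, the rate more than doubles, which is the dynamic the contribution factor has followed over the past decade.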
It’s easy to see that the financing of a broadband benefit should no longer be tied to a dying technology. Instead, funding for the Lifeline program could come from a “tax” shared across the entire internet ecosystem, including the edge providers that depend on broadband to reach their customers, or from direct congressional appropriations for the Lifeline program.
These reforms are realistic and straightforward. Rather than burn the program down, it’s time to rebuild Lifeline to ensure that it fulfills its original intention and reaches America’s neediest.
A court in Houston has authorized an FBI operation to “copy and remove” backdoors from hundreds of Microsoft Exchange email servers in the United States, months after hackers used four previously undiscovered vulnerabilities to attack thousands of networks.
The Justice Department announced the operation on Tuesday, which it described as “successful.”
In March, Microsoft discovered a new China state-sponsored hacking group — Hafnium — targeting Exchange servers run from company networks. The four vulnerabilities when chained together allowed the hackers to break into a vulnerable Exchange server and steal its contents. Microsoft fixed the vulnerabilities but the patches did not close the backdoors from the servers that had already been breached. Within days, other hacking groups began hitting vulnerable servers with the same flaws to deploy ransomware.
The number of infected servers dropped as patches were applied. But hundreds of Exchange servers remained vulnerable because the backdoors are difficult to find and eliminate, the Justice Department said in a statement.
“This operation removed one early hacking group’s remaining web shells which could have been used to maintain and escalate persistent, unauthorized access to U.S. networks,” the statement said. “The FBI conducted the removal by issuing a command through the web shell to the server, which was designed to cause the server to delete only the web shell (identified by its unique file path).”
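In practice, a removal like this amounts to sending the shell one last request. The sketch below is purely illustrative: the endpoint, POST field and command syntax are invented, since the Justice Department did not publish the actual commands, and real web shells vary widely.

```python
# Hypothetical sketch of the "remove via the shell itself" technique the
# DOJ describes. The URL, parameter name and command syntax are invented
# for illustration; no real shell or network call is involved here.

def build_removal_request(shell_url: str, shell_path: str) -> dict:
    """Construct a request asking a web shell to delete its own file.

    The command targets only the shell's unique file path, leaving the
    rest of the server untouched (and the underlying vulnerability
    unpatched, as the Justice Department noted).
    """
    command = f'del "{shell_path}"'      # Windows-style file delete
    return {
        "method": "POST",
        "url": shell_url,
        "data": {"cmd": command},        # hypothetical parameter name
    }

req = build_removal_request(
    "https://victim.example/aspnet_client/shell.aspx",
    r"C:\inetpub\wwwroot\aspnet_client\shell.aspx",
)
```

Sending such a request would cause a shell that evaluates its `cmd` field to remove itself, which is consistent with the DOJ’s description of a command “designed to cause the server to delete only the web shell.”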
The FBI said it’s attempting to inform, via email, the owners of the servers from which it removed the backdoors.
Assistant attorney general John C. Demers said the operation “demonstrates the Department’s commitment to disrupt hacking activity using all of our legal tools, not just prosecutions.”
The Justice Department also said the operation only removed the backdoors, but did not patch the vulnerabilities exploited by the hackers to begin with or remove any malware left behind.
It’s believed this is the first known case of the FBI effectively cleaning up private networks following a cyberattack. In 2016, the Supreme Court moved to allow U.S. judges to issue search and seizure warrants outside of their districts. Critics opposed the move at the time, fearing the FBI could ask a friendly court to authorize cyber-operations anywhere in the world.
Other countries, like France, have used similar powers before to hijack a botnet and remotely shut it down.
Neither the FBI nor the Justice Department commented by press time.
There are about to be a lot of antitrust bills taking aim at Big Tech, and here’s one more. Senator Josh Hawley (R-MO) rolled out a new bill this week that would take some severe measures to rein in Big Tech’s power, blocking mergers and acquisitions outright.
The “Trust-Busting for the Twenty-First Century Act” would ban any acquisitions by companies with a market cap of more than $100 billion, including vertical mergers. The bill also proposes changes that would dramatically heighten the financial pain for companies caught engaging in anti-competitive behavior, forcing any company that loses an antitrust suit to forfeit profits made through those business practices.
At its core, Hawley’s legislation would snip some of the red tape around antitrust enforcement by amending the Sherman Act, which made monopolies illegal, and the Clayton Act, which expanded the scope of illegal anti-competitive behavior. The idea is to make it easier for the FTC and other regulators to deem a company’s behavior anti-competitive — a key criticism of the outdated antitrust rules that haven’t kept pace with the realities of the tech industry.
The bill isn’t likely to get too far in a Democratic Senate, but it’s not insignificant. Sen. Amy Klobuchar (D-MN), who chairs the Senate’s antitrust subcommittee, proposed legislation earlier this year that would also create barriers for dominant companies with a habit of scooping up their competitors. Klobuchar’s own ideas for curtailing Big Tech’s power similarly focus on reforming the antitrust laws that have shaped U.S. business for more than a century.
The Republican bill may have some overlap with Democratic proposals, but it still hits some familiar notes from the Trump era of hyperpartisan Big Tech criticism. Hawley slams “woke mega-corporations” in Silicon Valley for exercising too much power over the information and products that Americans consume. While Democrats naturally don’t share that critique, Hawley’s bill makes it clear that antitrust reform targeting Big Tech is one policy area where both political parties could align on the ends, even if they don’t see eye to eye on the why.
Hawley’s bill is the latest, but it won’t be the last. Rep. David Cicilline (D-RI), who spearheads tech antitrust efforts in the House, previously announced his own plans to introduce a flurry of antitrust reform bills rather than one sweeping piece of legislation. Those bills, which will be more narrowly targeted to make them difficult for tech lobbyists to defeat, are due out in May.
Building, scaling and launching new tools and products is the lifeblood of the technology sector. When we consider these concepts today, many think of Big Tech and flashy startups, known for their industry dominance or new technologies that impact our everyday lives. But long before garages and dorm rooms became decentralized hubs for these innovations, local and state governments, along with many agencies within the federal government, pioneered tech products with the goal of improving the lives of millions.
As an industry, we’ve developed a notion that working in government, the place where the groundwork was laid for the digital assistants we use every day, is now far less appealing than working in the private sector. The immense salary differential is often cited as the overwhelming reason workers prefer to work in the private sphere.
But the hard truth is the private sector brings far more value than just higher compensation to employees. Look no further than the boom in the tech sector during the pandemic to understand why it’s so attractive. A company like Zoom, already established and successful in its own right for years, found itself in a situation where it had to serve an exponentially growing and diverse user base in a short period of time. It quickly confronted a slew of infrastructure and user experience pivots on its way to becoming a staple of work-from-home culture — and succeeded.
That innate ability to work fast to deliver for consumers and innovate at what feels like a moment’s notice is what really draws talent. Compare that to the government’s tech environment, where decreased funding and partisan oversight slow the pace of work, or, worse, can get in the way of exploring or implementing new ideas entirely.
One look (literally, see our graph below) at the trends around R&D spending in the private and government sectors also paints a clear picture of where future innovations will come from if we don’t change the equation.
[Graph: R&D spending trends in the private and government sectors. Image Credits: Josh Mendelsohn/Hangar]
Look no further than the U.S. government’s own (now defunct) Office of Technology Assessment. The agency aimed to provide thorough analysis of burgeoning issues in science and technology, exposing many public services to a new age of innovation and implementation. Amid a period of downsizing by a newly Republican-led Congress, the OTA was defunded in 1995, despite a peak annual budget of just $35.1 million (adjusted for 2019 dollars). The authoritative body on the importance of technology to the government was deemed duplicative and unnecessary. Despite numerous calls for its reinstatement, it has remained shuttered ever since.
Despite dwindling public sector investment and lackluster political action, the problems that technology is poised to help solve haven’t gone away or even eased up.
From the COVID pandemic to worsening natural disasters and growing societal inequities, public leaders have a responsibility to solve the pressing issues we face today. That responsibility should breed a desire to continuously iterate for the sake of constituents and quality of life, much in the same way private tech caters to the product, user and bottom line.
My own experiences in government have shaped my career and approach to building new technologies more than my time in Silicon Valley. There are plenty of tangible parallels to the private sector that can attract driven and passionate tech workers, but the responsibility of giving government work realistic consideration doesn’t just fall at the feet of talent. The governments that we depend on must invest more capital and pay closer attention to the tech community.
Tech workers want an environment where they can thrive and get to see their work in action, whoever the end user may be. They don’t want to feel hamstrung by the threat of decreased funding or the red tape that comes as a result of government partisanship. Replicating the unimpeded focus of Silicon Valley’s brightest examples is a must if we’re serious about drawing talented individuals into government or public-sector-focused work.
A great example of these ideas in action is one of the most beloved government agencies, NASA. Its continued funding has produced technologies developed for space exploration that are now commonplace in our lives, such as scratch-resistant lenses, memory foam and water filters. These use cases came much later on, only after millions of dollars were invested without knowing what would result.
NASA has continued to bolster its ability to stay nimble and evolve at a rapid pace by partnering with private companies. For talent in the tech sphere, the ability to leverage outside resources in this way, without compromising the product or work, is a boon for ideation and iteration.
One can also point to the agency when considering the importance of keeping technology research and innovation as apolitical as possible. It’s one of the few widely known public entities to prosper on the back of bipartisan support. Unfortunately, politicians typically do all of us a disservice, particularly tech workers in government, when they too closely connect themselves or their parties to a particular program or platform. It hinders innovation — and the ensuing mudslinging can detract from talented individuals jumping into government service.
There is no shortage of extremely capable tech workers who want to help solve the biggest issues facing society. Will we give them the legitimate space and opportunity to conquer those problems? There’s been some indication that we can. These ambitious and forward-looking efforts matter today more than ever and show all of us in the tech ecosystem that there’s a place in government for tech talent to grow and flourish.
Whether Facebook will face any regulatory sanction over the latest massive historical platform privacy failure to come to light remains unclear. But the timeline of the incident looks increasingly awkward for the tech giant.
While it initially sought to play down the data breach revelations published by Business Insider at the weekend by suggesting that information like people’s birth dates and phone numbers was “old”, in a blog post late yesterday the tech giant finally revealed that the data in question had in fact been scraped from its platform by malicious actors “in 2019” and “prior to September 2019”.
That new detail about the timing of this incident raises the issue of compliance with Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018.
Under the EU regulation data controllers can face fines of up to 2% of their global annual turnover for failures to notify breaches, and up to 4% of annual turnover for more serious compliance violations.
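To put those percentages in concrete terms, here is a minimal sketch of how the turnover-based caps scale. The turnover figure is an assumption for illustration (roughly the $85BN+ annual revenue cited later in this piece), and note that the GDPR actually sets each cap at the greater of a fixed euro amount or the turnover percentage; only the percentage tiers are modeled here.

```python
# Illustrative sketch of the GDPR's turnover-based fine caps (Art. 83).
# Only the percentage tiers cited in the article are modeled; the
# regulation also sets fixed floors (EUR 10M / EUR 20M, whichever is greater).

def max_turnover_fine(annual_turnover: float, serious: bool) -> float:
    """2% cap for breach-notification failures, 4% for more serious violations."""
    return annual_turnover * (0.04 if serious else 0.02)

# Assumed figure: roughly the $85BN+ annual revenue mentioned later in this piece.
turnover = 85_000_000_000

print(f"2% tier cap: ${max_turnover_fine(turnover, serious=False):,.0f}")
print(f"4% tier cap: ${max_turnover_fine(turnover, serious=True):,.0f}")
```

Even the lower tier would dwarf most regulatory penalties levied to date, which is why the timing of the breach relative to May 2018 matters so much.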
The European framework looks important because Facebook indemnified itself against historical privacy issues in the US when it settled with the FTC for $5BN back in July 2019 — although that does still mean there’s a period of several months (June to September 2019) which could fall outside that settlement.
Not only is @Facebook past the indemnification period of the FTC settlement (June 12 2019), they also may have violated the terms of the settlement requiring them to report breaches of covered information (ht @JustinBrookman ) https://t.co/182LEf4rNO pic.twitter.com/utCnQ4USHI
— ashkan soltani (@ashk4n) April 7, 2021
Yesterday, in its own statement responding to the breach revelations, Facebook’s lead data supervisor in the EU said the provenance of the newly published dataset wasn’t entirely clear. It “seems to comprise the original 2018 (pre-GDPR) dataset,” the regulator wrote, referring to an earlier breach incident Facebook disclosed in 2018, which related to a vulnerability in its phone lookup functionality that the company said was exploited between June 2017 and April 2018. But the regulator added that the newly published dataset also looked to have been “combined with additional records, which may be from a later period”.
Facebook followed up the Irish Data Protection Commission (DPC)’s statement by confirming that suspicion — admitting that the data had been extracted from its platform in 2019, up until September of that year.
Another new detail that emerged in Facebook’s blog post yesterday was the fact users’ data was scraped not via the aforementioned phone lookup vulnerability — but via another method altogether: A contact importer tool vulnerability.
This route allowed an unknown number of “malicious actors” to use software to imitate Facebook’s app and upload large sets of phone numbers to see which ones matched Facebook users.
In this way a spammer (for example), could upload a database of potential phone numbers and link them to not only names but other data like birth date, email address, location — all the better to phish you with.
In its PR response to the breach, Facebook quickly claimed it had fixed this vulnerability in August 2019. But, again, that timing places the incident squarely within the period when the GDPR was in force.
As a reminder, Europe’s data protection framework bakes in a data breach notification regime that requires data controllers to notify a relevant supervisory authority if they believe a loss of personal data is likely to constitute a risk to users’ rights and freedoms — and to do so without undue delay (ideally within 72 hours of becoming aware of it).
Yet Facebook made no disclosure at all of this incident to the DPC. Indeed, the regulator made it clear yesterday that it had to proactively seek information from Facebook in the wake of BI’s report. That’s the opposite of how EU lawmakers intended the regulation to function.
Data breaches, meanwhile, are broadly defined under the GDPR. It could mean personal data being lost or stolen and/or accessed by unauthorized third parties. It can also relate to deliberate or accidental action or inaction by a data controller which exposes personal data.
Legal risk attached to the breach likely explains why Facebook has studiously avoided describing this latest data protection failure, in which the personal information of more than half a billion users was posted for free download on an online forum, as a ‘breach’.
And, indeed, why it’s sought to downplay the significance of the leaked information — dubbing people’s personal information “old data”. (Even as few people regularly change their mobile numbers, email address, full names and biographical information and so on, and no one (legally) gets a new birth date… )
Its blog post instead refers to data being scraped; and to scraping being “a common tactic that often relies on automated software to lift public information from the internet that can end up being distributed in online forums” — tacitly implying that the personal information leaked via its contact importer tool was somehow public.
The self-serving suggestion being peddled here by Facebook is that hundreds of millions of users had both published sensitive stuff like their mobile phone numbers on their Facebook profiles and left default settings on their accounts — thereby making this personal information ‘publicly available for scraping/no longer private/uncovered by data protection legislation’.
This is an argument as obviously absurd as it is viciously hostile to people’s rights and privacy. It’s also an argument that EU data protection regulators must quickly and definitively reject or be complicit in allowing Facebook to (ab)use its market power to torch the very fundamental rights that regulators’ sole purpose is to defend and uphold.
Even if some Facebook users affected by this breach had their information exposed via the contact importer tool because they had not changed Facebook’s privacy-hostile defaults, that still raises key questions of GDPR compliance, because the regulation also requires data controllers to adequately secure personal data and apply privacy by design and default.
Facebook allowing hundreds of millions of accounts to have their info freely pillaged by spammers (or whoever) doesn’t sound like good security or default privacy.
In short, it’s the Cambridge Analytica scandal all over again.
Facebook is trying to get away with continuing to be terrible at privacy and data protection because it’s been so terrible at it in the past — and likely feels confident in keeping on with this tactic because it’s faced relatively little regulatory sanction for an endless parade of data scandals. (A one-time $5BN FTC fine for a company that turns over $85BN+ in annual revenue is just another business expense.)
We asked Facebook why it failed to notify the DPC about this 2019 breach back in 2019, when it realized people’s information was once again being maliciously extracted from its platform — or, indeed, why it hasn’t bothered to tell affected Facebook users themselves — but the company declined to comment beyond what it said yesterday.
Then it told us it would not be commenting on its communications with regulators.
Under the GDPR, if a breach poses a high risk to users’ rights and freedoms a data controller is required to notify affected individuals — the rationale being that prompt notification of a threat can help people take steps to protect themselves from the risks that follow a breach of their data, such as fraud and ID theft.
Yesterday Facebook said it has no plans to notify affected users, either.
Perhaps the company’s trademark ‘thumbs up’ symbol would be more aptly expressed as a middle finger raised at everyone else.
SpaceX has launched another batch of Starlink satellites, keeping up its rapid pace of launches for the broadband constellation it’s deploying in low Earth orbit. This now makes 300 Starlink satellites launched since March 4, with 60 on each of five flights between then and now.
The most recent launch before this one happened on March 24, with prior flights on March 14, March 11 and March 4. That pace is intentionally fast, since SpaceX has said it aims to launch a total of 1,500 Starlink satellites over the course of this calendar year. Before that especially busy month, SpaceX also flew four other Starlink missions, including a shared ride on SpaceX’s first dedicated rideshare mission that also carried satellites for other customers.
In total, SpaceX has now launched 1,443 satellites for its Starlink constellation. That doesn’t reflect the total number on orbit, however, as a handful of the earlier satellites have been deorbited as planned. Based on current FCC frequency spectrum filings, the constellation’s eventual planned size is expected to reach up to 42,000 spacecraft.
SpaceX recently signed a new agreement with NASA that outlines how the two organizations will avoid close approach or collision events between their respective spacecraft. NASA has measures it requires all launchers to follow in order to avoid these kinds of incidents, but the scale and frequency of SpaceX’s Starlink missions necessitated an additional, more extensive agreement.
This launch also included a landing of the Falcon 9 booster used, its seventh touchdown so far. The booster landed as intended on SpaceX’s floating landing pad in the Atlantic Ocean and will be refurbished for potential reuse. SpaceX will also look to recover its fairing halves at sea; these are the two protective shells that encase the satellites during launch. The company recently decommissioned two ships it had used to try to catch the fairings in mid-air as they fell under parachutes, but it still plans to retrieve them from the ocean after splashdown for reuse.
As governments scrambled to lock down their populations after the COVID-19 pandemic was declared last March, some countries had plans underway to reopen. By June, Jamaica became one of the first countries to open its borders.
Tourism represents about one-fifth of Jamaica’s economy. In 2019 alone, four million travelers visited Jamaica, bringing thousands of jobs to its three million residents. But as COVID-19 stretched into the summer, Jamaica’s economy was in free fall, and tourism was its only way back — even if that came at the expense of public health.
The Jamaican government contracted with Amber Group, a technology company headquartered in Kingston, to build a border entry system allowing residents and travelers back onto the island. The system was named JamCOVID and was rolled out as an app and a website to allow visitors to get screened before they arrive. To cross the border, travelers had to upload a negative COVID-19 test result to JamCOVID before boarding their flight from high-risk countries, including the United States.
Amber Group’s CEO Dushyant Savadia boasted that his company developed JamCOVID in “three days” and that it effectively donated the system to the Jamaican government, which in turn pays Amber Group for additional features and customizations. The rollout appeared to be a success, and Amber Group later secured contracts to roll out its border entry system to at least four other Caribbean islands.
But last month TechCrunch revealed that JamCOVID exposed immigration documents, passport numbers, and COVID-19 lab test results on close to half a million travelers — including many Americans — who visited the island over the past year. Amber Group had set the access to the JamCOVID cloud server to public, allowing anyone to access its data from their web browser.
Whether the data exposure was caused by human error or negligence, it was an embarrassing mistake for a technology company — and, by extension, the Jamaican government — to make.
And that might have been the end of it. Instead, the government’s response became the story.
By the end of the first wave of coronavirus, contact tracing apps were still in their infancy and few governments had plans in place to screen travelers as they arrived at their borders. It was a scramble for governments to build or acquire technology to understand the spread of the virus.
As part of an investigation into a broad range of these COVID-19 apps and services, TechCrunch found that JamCOVID was storing data on an exposed, passwordless server.
This wasn’t the first time TechCrunch found security flaws or exposed data through our reporting. It also was not the first pandemic-related security scare. Israeli spyware maker NSO Group left real location data on an unprotected server that it used for demonstrating its new contact tracing system. Norway was one of the first countries with a contact tracing app, but pulled it after the country’s privacy authority found the continuous tracking of citizens’ location was a privacy risk.
Just as we have with any other story, we contacted who we thought was the server’s owner. We alerted Jamaica’s Ministry of Health to the data exposure on the weekend of February 13. But after we provided specific details of the exposure to ministry spokesperson Stephen Davidson, we did not hear back. Two days later, the data was still exposed.
After we spoke to two American travelers whose data was spilling from the server, we narrowed down the owner of the server to Amber Group. We contacted its chief executive Savadia on February 16, who acknowledged the email but did not comment, and the server was secured about an hour later.
We ran our story that afternoon. After we published, the Jamaican government issued a statement claiming the lapse was “discovered on February 16” and was “immediately rectified,” neither of which was true.
Instead, the government responded by launching a criminal investigation into whether there was any “unauthorized” access to the unprotected data that led to our first story, which we perceived to be a thinly veiled threat directed at this publication. The government said it had contacted its overseas law enforcement partners.
When reached, a spokesperson for the FBI declined to say whether the Jamaican government had contacted the agency.
Things didn’t get much better for JamCOVID. In the days that followed the first story, the government engaged a cloud and cybersecurity consultant, Escala 24×7, to assess JamCOVID’s security. The results were not disclosed, but the company said it was confident there was “no current vulnerability” in JamCOVID. Amber Group also said that the lapse was a “completely isolated occurrence.”
A week went by and TechCrunch alerted Amber Group to two more security lapses. After the attention from the first report, a security researcher who saw the news of the first lapse found exposed private keys and passwords for JamCOVID’s servers and databases hidden on its website, and a third lapse that spilled quarantine orders for more than half a million travelers.
Amber Group and the government claimed it faced “cyberattacks, hacking and mischievous players.” In reality, the app was just not that secure.
The security lapses come at a politically inconvenient time for the Jamaican government, as it attempts to launch a national identification system, or NIDS, for the second time. NIDS will store biographic data on Jamaican nationals, including their biometrics, such as their fingerprints.
The repeat effort comes two years after the government’s first law was struck down by Jamaica’s High Court as unconstitutional.
Critics have cited the JamCOVID security lapses as a reason to drop the proposed national database. A coalition of privacy and rights groups cited the recent issues with JamCOVID for why a national database is “potentially dangerous for Jamaicans’ privacy and security.” A spokesperson for Jamaica’s opposition party told local media that there “wasn’t much confidence in NIDS in the first place.”
It’s been more than a month since we published the first story and there are many unanswered questions, including how Amber Group secured the contract to build and run JamCOVID, how the cloud server became exposed, and if security testing was conducted before its launch.
TechCrunch emailed both the Jamaican prime minister’s office and Jamaica’s national security minister Matthew Samuda to ask how much, if anything, the government donated or paid to Amber Group to run JamCOVID and what security requirements, if any, were agreed upon for JamCOVID. We did not get a response.
Amber Group also has not said how much it has earned from its government contracts. Amber Group’s Savadia declined to disclose the value of the contracts to one local newspaper. Savadia did not respond to our emails with questions about its contracts.
Following the second security lapse, Jamaica’s opposition party demanded that the prime minister release the contracts that govern the agreement between the government and Amber Group. Prime Minister Andrew Holness said at a press conference that the public “should know” about government contracts but warned “legal hurdles” may prevent disclosure, such as for national security reasons or when “sensitive trade and commercial information” might be disclosed.
That came days after the government denied a request by local newspaper The Jamaica Gleaner to obtain contracts revealing state officials’ salaries, citing a legal clause that prevents the disclosure of an individual’s private affairs. Critics argue that taxpayers have a right to know how much government officials are paid from public funds.
Jamaica’s opposition party also asked what was done to notify victims.
Government minister Samuda initially downplayed the security lapse, claiming just 700 people were affected. We scoured social media for proof but found nothing. To date, we’ve found no evidence that the Jamaican government ever informed travelers of the security incident: neither the hundreds of thousands of affected travelers whose information was exposed, nor the 700 people the government claims to have notified, a list it has never publicly released.
TechCrunch emailed the minister to request a copy of the notice that the government allegedly sent to victims, but we did not receive a response. We also asked Amber Group and Jamaica’s prime minister’s office for comment. We did not hear back.
Many of the victims of the security lapse are from the United States. Neither of the two Americans we spoke to in our first report were notified of the breach.
Spokespeople for the attorneys general of New York and Florida, whose residents’ information was exposed, told TechCrunch that they had not heard from either the Jamaican government or the contractor, despite state laws requiring data breaches to be disclosed.
The reopening of Jamaica’s borders came at a cost. The island saw more than a hundred new cases of COVID-19 in the month that followed, the majority arriving from the United States. From June to August, daily new coronavirus cases climbed from a handful into the dozens, then the hundreds.
To date, Jamaica has reported over 39,500 cases and 600 deaths caused by the pandemic.
Prime Minister Holness reflected on the decision to reopen the borders last month in parliament, as he announced the country’s annual budget. He said the country’s economic decline last year was “driven by a massive 70% contraction in our tourist industry.” More than 525,000 travelers — both residents and tourists — have arrived in Jamaica since the borders opened, Holness said, a figure slightly more than the number of travelers’ records found on the exposed JamCOVID server in February.
Holness defended reopening the country’s borders.
“Had we not done this, the fallout in tourism revenues would have been 100% instead of 75%, there would be no recovery in employment, our balance of payments deficit would have worsened, overall government revenues would have been threatened, and there would be no argument to be made about spending more,” he said.
Both the Jamaican government and Amber Group benefited from opening the country’s borders. The government wanted to revive its falling economy, and Amber Group enriched its business with fresh government contracts. But neither paid enough attention to cybersecurity, and victims of their negligence deserve to know why.
It seems safe to say that our honeymoon with Big Tech is officially over.
After years of questionable data-handling procedures, arbitrary content management policies and outright anti-competitive practices, it is only fair that we take a moment to rethink our relationship with the industry.
Sadly, most of the ideas that have gathered mainstream attention — such as the calls to break up Big Tech — have been knee-jerk responses that smack more of retributive fantasy than sound economic thinking.
Instead of chasing sensationalist non-starters and zero-sum solutions, we should focus on ensuring that Big Tech grows better as it grows bigger, by establishing a level playing field for startups and competitors on these proprietary digital markets.
We can find inspiration on how to do just that by taking a look at how 20th-century lawmakers reined in the railroad monopolies, which similarly turned from darlings of industry to destructive forces of stagnation.
More than a century ago, a familiar story of a nation coming to terms with the unanticipated effects of technological disruption was unfolding across a rapidly industrializing United States.
While the first full-scale steam locomotive debuted in 1804, it took until 1868 for more powerful and cargo-friendly American-style locomotives to be introduced.
The new design caught on like wildfire, and soon steel and iron pierced through mountains and leaped over gushing rivers to connect Americans from coast to coast.
Before long, railroad mileage tripled, and a whopping 77% of all intercity traffic and 98% of passenger business would be running on rails, ushering in an era of cost-efficient transcontinental travel that recast the economic fortunes of the entire country.
As is often the case with disruptive technologies, early success would come with a heavy human cost.
From the very beginning, abuse and exploitation ran rampant in the railroad industry, with up to 3% of the labor force suffering injuries or dying during the course of an average year.
Railroad trust owners soon became key constituents of the widely maligned group of businessmen colloquially known as robber barons, whose corporations devoured everything in their path and made life difficult for competitors and new entrants in particular.
The railroad proprietors achieved this by maintaining carefully constructed walled gardens, allowing them to run competitors into the ground by means of extortion, exclusion and everything in between.
While these methods proved wildly successful for railroad owners, the rest of society languished under stifled competition and an utter lack of concern for consumers’ interests.
Learning from past experiences certainly doesn’t seem to be humankind’s strong suit.
In fact, most of our concerns with the tech industry are mirror images of the objections 20th-century Americans had against the railroad trusts.
Similar to the robber barons, Alphabet, Amazon, Apple, Facebook, Twitter, et al., have come to dominate the major thoroughfares of trade in a fashion that leaves little space for competitors and startups.
By instating double-digit platform fees, establishing strict limitations on payment processing protocols, and jealously hoarding proprietary data and APIs, Big Tech has erected artificial barriers to entry that make replicating their success all but impossible.
Over the past years, tech giants have also taken to cannibalizing third-party solutions by providing private-label versions — à la AmazonBasics — to the point where Big Tech’s clients are finding themselves undercut and outplayed by the platform-holders themselves.
Given the above, it is not surprising that the pace at which tech startups are created in the US has been declining for years.
In fact, VC veterans such as Albert Wenger have called attention to the “kill zone” around Big Tech for years, and if we are to reinvigorate the competitive fringe around our large tech conglomerates, something has to be done fast.
The 20th-century playbook for taming monopolistic railroad trusts offers several helpful lessons for dealing with Big Tech.
For first steps, Congress created the Interstate Commerce Commission (ICC) in 1887 and tasked it with administering reasonable and just rates for access to proprietary railroad networks.
Due to partisan politicking, the ICC proved relatively toothless, however. It wasn’t until Congress passed the 1906 Hepburn Act, which separated the function of transportation from the ownership of the goods being shipped, that we started seeing true progress.
By disallowing self-dealing and double-dipping on proprietary platforms, Congress succeeded in opening up access on equal terms to existing competitors and startups alike, turning a once-unnavigable thicket of exploitative practices into the metallic backbone of American prosperity that we know today.
This could never have been achieved by simply breaking the railroad trusts into smaller pieces.
In fact, when it comes to platforms and networks, bigger often is better for everyone involved thanks to network effects and several other factors that conspire against smaller platforms.
Most importantly, when access and interoperability rules are done right, bigger platforms can sustain wider and wider constellations of startups and third parties, helping us grow our economic pie instead of shrinking it.
In our post-pandemic economy, our attention should be on helping tech platforms grow better as they grow bigger, instead of cutting them down to size.
Ensuring that startups and competitors can access these platforms on equitable terms and at fair prices is a necessary first step.
There are numerous other tangible actions policymakers can take today. For example, rewriting the rules on data portability, pushing for wider standardization and interoperability across platforms, and reintroducing net neutrality would go a long way in addressing what ails the industry today.
In the end, all of us would stand to benefit from a robust fringe of startups and competitors that thrive on the shoulders of giants and the platforms they have made.