Apple is tweaking the App Store policy changes it announced in June covering its Sign in with Apple service and the rules around children’s app categories. New apps must comply with the updated terms right away, but existing apps will have until early 2020 to comply with the new rules.
The changes announced at Apple’s developer conference this summer were significant, and raised concerns among developers that the rules could handicap their ability to do business in a category that, frankly, offers few viable alternatives to ad-based revenue for children’s apps.
In a short interview with TechCrunch, Apple’s Phil Schiller said the company had spent time with developers, analytics companies and advertising services to hear their feedback on the proposals, and has made some updates.
The changes are garnering some strong statements of support from advocacy groups and advertising providers for children’s apps that were pre-briefed on the tweaks. The changes will show up as of this morning in Apple’s developer guidelines.
“As we got closer to implementation we spent more time with developers, analytics companies and advertising companies,” said Schiller. “Some of them are really forward thinking and have good ideas and are trying to be leaders in this space too.”
With their feedback, Schiller said, Apple has updated the guidelines to make them applicable to a broader range of scenarios. The goal, he said, was to make the guidelines easy enough for developers to adopt while supporting sensible policies that parents could buy into. These additional guidelines, especially around the Kids app category, Schiller says, cover scenarios that may not be addressed by the Children’s Online Privacy Protection Act (COPPA) or GDPR.
There are two main updates.
The first area that is getting further tweaking is the Kids terms. Rule sections 1.3 and 5.1.4 specifically are being adjusted after Apple spoke with developers and providers of ad and analytics services about their concerns over the past few months.
Both of those rules are being updated to add more nuance to their language around third-party services like ads and analytics. In June, Apple announced a very hard-line version of these rule updates that essentially outlawed any third-party ads or analytics software and prohibited any data transmission to third parties. The new rules give developers some latitude to continue integrating these services into their apps, but also set out explicit constraints on them.
The big changes come in section 1.3 surrounding data safety in the Kids category. Apple has removed the explicit restriction on including any third-party advertising or analytics. This was the huge hammer that developers saw heading towards their business models.
Instead, Apple has laid out a much more nuanced position for app developers. Specifically, it says these apps should not include third-party analytics or ads, while implicitly acknowledging that there are ways to provide such services and still practice data safety on the App Store.
Apple says that in limited cases, third-party analytics may be permitted as long as apps in the Kids category do not send personally identifiable information or any device-fingerprinting information to third parties. That includes the IDFA (the device ID for advertisers), name, date of birth, email address, location or any other personally identifiable data.
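In practice, that restriction turns every analytics payload into an allow-list problem for developers. As a rough illustration (the field names below are hypothetical, not from any real SDK or from Apple’s guidelines), a kids-app developer might scrub an event before any third-party transmission:

```python
# Hypothetical sketch: strip fields that Kids-category rules would treat
# as personally identifiable before sending an analytics event.
# Field names are illustrative, not from any real analytics SDK.

BLOCKED_FIELDS = {"idfa", "name", "date_of_birth", "email", "location"}

def scrub_event(event: dict) -> dict:
    """Return a copy of the event with identifying fields removed."""
    return {k: v for k, v in event.items() if k not in BLOCKED_FIELDS}

event = {
    "screen": "puzzle_level_3",
    "session_length_s": 142,
    "idfa": "ABCD-1234",
    "email": "kid@example.com",
}
print(scrub_event(event))  # only "screen" and "session_length_s" survive
```

The point is that the burden of filtering falls on the app, before any data leaves the device.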
Third-party contextual ads may be allowed, but only if the companies serving them have publicly documented practices and policies and also offer human review of ad creatives. That certainly narrows the options, ruling out most programmatic offerings.
Rule 5.1.4 centers on data handling in kids apps. In addition to complying with COPPA, GDPR and other local regulations, Apple sets out some explicit guard rails.
First, the language on third-party ads and analytics has been softened from “may not” to “should not.” Apple is discouraging their use, but acknowledges that “in limited cases” third-party analytics and advertising may be permitted if they adhere to the new rules set out in guideline 1.3.
The explicit prohibition on transmitting any data to third parties from apps in the Kids category has been removed. Once again, this was the big bad bullet that every children’s app maker was paying attention to.
An additional clause reminds developers not to use terms like “for kids” and “for children” in app metadata for apps outside of the Kids category on the App Store.
SuperAwesome is a company that provides services like safe ad serving for kids apps. CEO Dylan Collins was initially critical of Apple’s proposed changes, noting that killing off all third-party ads could decimate the kids app category.
“Apple are clearly very serious about setting the standard for kids apps and digital services,” Collins said in a statement to TechCrunch after reviewing the new rules Apple is publishing. “They’ve spent a lot of time working with developers and kidtech providers to ensure that policies and tools are set to create great kids digital experiences while also ensuring their digital privacy and safety. This is the model for all other technology platforms to follow.”
All new apps must adhere to the guidelines. Existing apps have been given an additional six months to live in their current form but must comply by March 3, 2020.
“We commend Apple for taking real steps to protect children’s privacy and ensure that kids will not be targets for data-driven, personalized marketing,” said Josh Golin, Executive Director of Campaign for Commercial-Free Childhood. “Apple rightly recognizes that a child’s personal identifiable information should never be shared with marketers or other third parties. We also appreciate that Apple made these changes on its own accord, without being dragged to the table by regulators.”
The CCFC had a major win recently when the FTC announced a $170M fine against YouTube for violations of COPPA.
The second set of updates has to do with Apple’s Sign in with Apple service.
Sign in with Apple is a sign-in service that can be offered by an app developer to instantly create an account that is handled by Apple with additional privacy for the user. We’ve gone over the offering extensively here, but there are some clarifications and policy additions in the new guidelines.
Apple requires Sign in with Apple to be offered if your app exclusively offers third-party or social log-ins, like those from Twitter, Google, LinkedIn, Amazon or Facebook. It is not required if users sign in with a unique account created in the app, with, say, an email and password.
But some additional clarifications have been added for additional scenarios. Sign in with Apple will not be required in the following conditions:
Your app exclusively uses your company’s own account setup and sign-in systems.
Your app is an education, enterprise or business app that requires users to sign in with an existing education or enterprise account.
Your app uses a government or industry-backed citizen identification system or electronic ID to authenticate users.
Your app is a client for a specific third-party service, and users are required to sign in directly to their mail, social media or other third-party account to access their content.
Most of these were sort of assumed to be true but were not made explicit in June. The last one, especially, was one I was interested in seeing play out. That scenario applies to, for instance, the Gmail app for iOS, as well as apps like Tweetbot, which logs in via Twitter because all it does is display Twitter content.
Starting today, new apps submitted to the store that don’t qualify for any of the above exemptions must offer Sign in with Apple to users. Current apps and app updates have until April 2020 to comply.
Both of these tweaks come after developers and other app makers expressed concern, and after reports noted the abruptness and strictness of the changes against the backdrop of the ever-swirling antitrust debate surrounding big tech. Apple continues to walk a tightrope with the App Store, flexing its muscles to enhance data protections for users while simultaneously trying to appear as even-handed as possible in order to avoid regulatory scrutiny.
SpaceX is taking the steps necessary to begin test flying the orbital-class version of its Starship spacecraft, with new documents filed by the company (via Teslarati) with the FCC seeking necessary permissions for it to communicate with the prototype while it’s in flight.
The company filed documents with the U.S. regulatory agency this week in advance of the flight, listing a maximum altitude of 74,000 feet. That is a far cry from Earth orbit, but still much higher than the 500 or so feet achieved by the squat ‘Starhopper’ demonstration and test vehicle that SpaceX has been actively operating in preparation for Starship.
Getting ready for flight of orbit-class Starship design https://t.co/CtXtq522ia
SpaceX CEO Elon Musk confirmed that prep was underway via tweet. Musk has previously said that he hoped to follow Starhopper’s most recent and final successful test quickly with tests of the full-scale vehicle. As with that low-altitude test, SpaceX will aim to launch and land the prototype, with touchdown planned just a short distance away.
Assembly and construction of the Starship prototype looks to be well underway, and Musk recently teased a Starship update event for September 28, which is likely when we’ll see this prototype assembled and ready to go ahead of its first test flight window, planned for October.
Starship is the next generation of SpaceX spacecraft, designed for maximum reusability, and with the aim of creating one vehicle that can serve the needs of current and future customers, eventually replacing both Falcon 9 and Falcon Heavy. Starship is also a key ingredient in Musk’s ambitious plan to reach and establish a continuing human presence on Mars.
Fifty attorneys general are pushing forward with an antitrust investigation against Google, led by the Texas state attorney general, Ken Paxton.
In an announcement on the steps of the U.S. Supreme Court building, Paxton and a gathering of attorneys general said that the focus of the investigation would be on Google’s advertising practices, but that other points of inquiry could be included in the investigation.
The investigation into Google comes as big technology companies find themselves increasingly under the regulatory microscope for everything from anticompetitive business practices to violations of users’ privacy and security to accusations of political bias.
Last week, the New York state attorney general launched an investigation into Facebook.
“Google’s control over nearly every aspect of our lives has placed the company at the center of our digital economy. But it doesn’t take a search engine to understand that unchecked corporate power shouldn’t eclipse consumers’ rights,” said New York Attorney General Letitia James, in a statement. “That is why New York has joined this bipartisan investigation of Google to determine whether the company has achieved or maintained its dominance through anticompetitive conduct. As with the Facebook investigation we are leading, we will use every investigative tool at our disposal in the Google investigation to ensure the truth is exposed.”
For those trying to keep score on antitrust:
The FTC is investigating Facebook.
The Department of Justice is investigating Apple, Google and Amazon.
The DoJ is also investigating ALL of Big Tech.
State attorneys general set to announce inquiry expected to focus on Google
— Jeremy C. Owens (@jowens510) September 9, 2019
It’s perfectly natural for a red-blooded American to, once they have procured their first real drone, experiment with attaching a flame thrower to it. But it turns out that this harmless hobby is frowned upon by the biggest buzzkills in the world… the feds.
Yes, the FAA has gone and published a notice that drones and weapons are “A Dangerous Mix.” Well, that’s arguable. But they’re the authority here, so we have to hear them out.
“Perhaps you’ve seen online photos and videos of drones with attached guns, bombs, fireworks, flamethrowers, and other dangerous items. Do not consider attaching any items such as these to a drone because operating a drone with such an item may result in significant harm to a person and to your bank account.”
They’re not joking around with the fines, either. You could be hit with one as big as $25,000 for violating the FAA rules. Especially if you put your attack drone on YouTube.
That’s the ThrowFlame TF-19, by the way. TechCrunch in no way recommends or endorses this extremely awesome device.
Of course, you may consider yourself an exception — perhaps you are a defense contractor working on hunter-killers, or a filmmaker who has to simulate a nightmare drone-dominated future. Or maybe you just promise to be extra careful.
If so, you can apply to the FAA through the proper channels to receive authorization for your drone-weaponizing operation. Of course, as with all other victimless crimes, if no one sees it, did a crime really occur? The FAA would no doubt say yes, absolutely, no question. So yeah, probably you shouldn’t do that.
The Justice Department has indicted dozens of individuals accused of involvement in a massive business email scam and money laundering scheme.
Thom Mrozek, a spokesperson for the U.S. Attorney’s Office for the Central District of California, confirmed that more than a dozen individuals had been arrested during raids on Thursday, mostly in the Los Angeles area. A total of 80 defendants are allegedly involved in the scheme.
News of the early-morning raids was first reported by ABC7 in Los Angeles.
The 145-page indictment, unsealed Thursday, said the 80 named individuals are charged with conspiracy to commit mail and bank fraud, as well as aggravated identity theft and money laundering.
Most of the individuals alleged to be involved in the scheme are based in Nigeria, said the spokesperson.
It’s not immediately known whether the Nigerian nationals will be extradited to the U.S., but a treaty between the two nations makes extradition possible.
U.S. Attorney Nicola Hanna said the case was part of an ongoing effort to protect citizens and businesses from email scams.
“Today, we have taken a major step to disrupt criminal networks that use [business email scam] schemes, romance scams and other frauds to fleece victims,” he said. “This indictment sends a message that we will identify perpetrators — no matter where they reside — and we will cut off the flow of ill-gotten gains.”
These business email compromise scams rely partly on deception and in some cases hacking. Scammers send specially crafted spearphishing emails to their targets in order to trick them into turning over sensitive information about the company, such as sending employee W-2 tax documents so scammers can generate fraudulent refunds, or tricking an employee into making wire transfers to bank accounts controlled by the scammers. More often than not, the scammers use spoofing techniques to impersonate a senior executive over email to trick the unsuspecting victim, or hack into the email account of the person they are impersonating.
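The impersonation half of these scams often hinges on something as mundane as a display name that doesn’t match the actual sending address. A minimal sketch of the idea (the executive name and company domain here are hypothetical, not from the indictment) using Python’s standard email utilities:

```python
from email.utils import parseaddr

# Hypothetical sketch: flag messages whose display name claims to be a
# known executive but whose address falls outside the company domain.
EXECUTIVES = {"jane ceo"}          # known executive display names, lowercased
COMPANY_DOMAIN = "example.com"     # assumed corporate domain

def looks_spoofed(from_header: str) -> bool:
    """True if the From header impersonates an executive from outside."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    return name.lower() in EXECUTIVES and domain != COMPANY_DOMAIN

print(looks_spoofed("Jane CEO <jane@example.com>"))      # False: internal
print(looks_spoofed("Jane CEO <j.ceo@gmai1-mail.net>"))  # True: lookalike domain
```

Real mail filters layer on SPF, DKIM and DMARC checks, but a display-name mismatch like this is frequently the first tell.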
The FBI says these impersonation attacks have cost consumers and businesses more than $3 billion since 2015.
Valentine Iro, 31, and Chukwudi Christogunus Igbokwe, 38, both Nigerian nationals and residents of California, are accused of running the operation, said prosecutors.
The alleged fraudsters are accused of carrying out several hundred “overt” acts of fraud against over a dozen victims, generating millions of dollars worth of fraud over several months. In some cases the fraudsters would hack into the email accounts of the person they were trying to impersonate to try to trick a victim into wiring money from a business into the fraudster’s bank account.
Iro and Igbokwe were “essentially brokers” of fraudulent bank accounts, prosecutors allege, by fielding requests for bank account information and laundering the money obtained from victims. The two lead defendants are accused of taking a cut of the stolen money.
Several bank accounts run by the fraudsters contained over $40 million in stolen funds.
The FBI said the agency has seen a large increase in the number of business email scams in the past year, targeting small and large businesses as well as nonprofits.
Attorneys general in more than a dozen states are preparing to begin an antitrust investigation of the tech giants, the Wall Street Journal and the New York Times reported Monday, putting the spotlight on an industry that is already facing federal scrutiny.
The bipartisan group of attorneys from as many as 20 states is expected to formally launch a probe as soon as next month to assess whether tech companies are using their dominant market position to hurt competition, WSJ reported.
If true, the move follows the Department of Justice, which last month announced its own antitrust review of how online platforms scaled to their gigantic sizes and whether they are using their power to curb competition and stifle innovation. Earlier this year, the Federal Trade Commission formed a task force to monitor competition among tech platforms.
It won’t be unprecedented for a group of states to look at a technology giant. In 1998, 20 states joined the Justice Department in suing Microsoft. The states could play a key role in building evidence and garnering public support for major investigations.
Apple and Google pointed the Times to their previous official statements on the matter, in which they argued that they have been vastly innovative and have created an environment that benefits consumers. Amazon and Facebook did not comment.
Also on Monday, Joseph Simons, the chairman of the FTC, warned that Facebook’s planned effort to integrate Instagram and WhatsApp could stymie any attempt by the agency to break up the social media giant.
“If they’re maintaining separate business structures and infrastructure, it’s much easier to have a divestiture in that circumstance than in where they’re completely enmeshed and all the eggs are scrambled,” Simons told the Financial Times.
The White House is contemplating issuing an executive order that would widen its attack on the operations of social media companies.
The White House has prepared an executive order called “Protecting Americans from Online Censorship” that would give the Federal Communications Commission oversight of how Facebook, Twitter and other tech companies monitor and manage their social networks, according to a CNN report.
Under the order, which has not yet been announced and could be revised, the FCC would be tasked with developing new regulations that would determine when and how social media companies filter posts, videos or articles on their platforms.
The draft order also calls for the Federal Trade Commission to take those new policies into account when investigating or filing lawsuits against technology companies, according to the CNN report.
Social media censorship has been a perennial talking point for President Donald Trump and his administration. In May, the White House set up a tip line for people to provide evidence of social media censorship and a systemic bias against conservative media.
In the executive order, the White House says it received more than 15,000 complaints about censorship by the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.
As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.
Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest.
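The one-eighth threshold is easy to put in concrete terms. Assuming a U.S. population of roughly 328 million in 2019 (an outside estimate, not a figure from the draft order), the cutoff works out to about 41 million monthly users:

```python
# Rough arithmetic on the draft order's threshold. The population figure
# is an outside 2019 estimate, not a number from the order itself.
us_population = 328_000_000
threshold = us_population / 8
print(f"{threshold:,.0f} monthly U.S. users")
```

That comfortably captures the major social platforms named above while leaving smaller networks out of scope.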
At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield against liability for the posts, videos or articles that are uploaded from individual users or third parties.
The Trump administration isn’t the only voice in Washington focused on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.
The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.
The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.
The FTC and FCC had not responded to a request for comment at the time of publication.
Facebook is facing exposure to billions of dollars in potential damages as a federal appeals court on Thursday rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.
The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.
Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.
The most significant language from the decision from the circuit court seems to be this:
We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.
The American Civil Liberties Union came out in favor of the court’s ruling.
“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”
As April Glaser noted in Slate, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.
“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”
That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to other people, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billion-strong population of users.
As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.
Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to Reuters.
Now, the lawsuit will return, for a possible trial, to the court of U.S. District Judge James Donato in San Francisco, who approved the class action last April.
Under the privacy law in Illinois, negligent violations could be subject to damages of up to $1,000 and intentional violations of privacy are subject to up to $5,000 in penalties. For the potential 7 million Facebook users that could be included in the lawsuit, those figures could amount to real money.
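“Real money” is an understatement once you multiply out the statutory ranges. If all 7 million potential class members recovered, the damages would span roughly $7 billion to $35 billion:

```python
# Statutory damages under Illinois' BIPA applied to the potential class.
class_size = 7_000_000
negligent = class_size * 1_000      # up to $1,000 per negligent violation
intentional = class_size * 5_000    # up to $5,000 per intentional violation
print(f"${negligent:,} to ${intentional:,}")
```

Even the low end of that range would exceed the $5 billion FTC penalty mentioned below.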
“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”
These civil damages could come on top of fines that Facebook has already paid to the U.S. government for violating its agreement with the Federal Trade Commission over its handling of private user data. That resulted in one of the single largest penalties levied against a U.S. technology company. Facebook is potentially on the hook for a $5 billion payout to the U.S. government. That penalty is still subject to approval by the Justice Department.
Tesla’s claims about the safety of its Model 3 electric vehicle prompted U.S. regulators to send a cease-and-desist letter and escalate the matter by asking the Federal Trade Commission to investigate, according to documents released by the nonprofit legal transparency website PlainSite.
The documents show correspondence between lawyers at the National Highway Traffic Safety Administration and Tesla that began after the automaker’s October 7 blog post, which said the Model 3 had achieved the lowest probability of injury of any vehicle the agency had ever tested. PlainSite obtained the 79 pages of communications between NHTSA and Tesla, dating back to January 2018, through a Freedom of Information Act request. Another 450 pages of communication were withheld at Tesla’s request for confidentiality on the basis of “trade secrets.”
NHTSA took issue with the blog post, arguing that Tesla’s claims were inconsistent with its advertising guidelines regarding crash ratings. The matter might have ended with that demand. But NHTSA took the issue further and informed Tesla it would ask the Federal Trade Commission to weigh in.
“This is not the first time that Tesla has disregarded the guidelines in a matter that may lead to consumer confusion and give Tesla an unfair market advantage,” the letter dated October 17 reads. “We have therefore also referred this matter to the Federal Trade Commission’s Bureau of Consumer Protection to investigate whether these statements constitute unfair or deceptive acts or practices.”
Tesla did not respond to a request for comment.
The automaker’s lawyers did, however, push back against NHTSA’s request, according to the correspondence released by PlainSite. Tesla lawyers argue in one letter that the company’s statements were neither “untrue nor misleading.”
“To the contrary, Tesla has provided consumers with fair and objective information to compare the relative safety of vehicles having 5-star overall ratings,” reads the letter from Tesla’s deputy general counsel.
The documents posted by PlainSite also show that NHTSA requested sales data on all Tesla vehicles produced since July 2016, with or without Autopilot, the automaker’s advanced driver assistance system. The agency also issued subpoenas ordering Tesla to produce information on several crashes, including a January 25, 2019 crash in San Ramon, Calif. The subpoenas requested information about the vehicle, its owner and history, and videos and images related to the crash, to be sent to NHTSA’s Office of Defects Investigations.
The Federal Reserve announced today that it is developing a new service called FedNow that will allow all banks in the United States to offer real-time payment services 24 hours a day, every day of the week. FedNow is expected to be available by 2023 or 2024 and will initially support transfers of up to $25,000.
FedNow will make managing budgets easier for many people and small businesses, but it also puts the Fed at loggerheads with big banks since a federal real-time payments system would compete with the one being developed by the Clearing House, which is owned by some of the world’s largest banks, including Capital One, Citibank, Wells Fargo, Bank of America, JP Morgan Chase and Deutsche Bank.
The Federal Reserve’s board of governors voted 4-1 to approve the FedNow proposal on August 2, with its vice chair for supervision, Randal Quarles, casting the dissenting vote.
While Venmo, Zelle and other apps already allow users to transfer money instantly to one another, the Federal Reserve Bank described services like those as a “closed loop” because both parties need to be on the same platform in order to transfer money and they can only be linked to accounts from certain banks. On the other hand, FedNow will be a universal infrastructure, enabling all banks, including smaller ones, to provide real-time payments.
Furthermore, traditional retail payment methods for transferring funds not only create frustrating delays, but can “result in a build-up of financial obligations between banks which, as faster payment usage grows, could present risks to the financial system, especially in times of stress,” the Federal Reserve Board said.
In a FAQ, the Federal Reserve Board explained that “there is a broad consensus within the U.S. payment community and among other stakeholders” that real-time payment services can have a “significant and positive impact on individuals and businesses throughout the country and on the broader U.S. economy.”
For example, real-time payments mean people living on tight budgets can rely less on costly check-cashing services and high-interest loans, and will incur fewer overdraft and late fees. Small businesses will also benefit because they can avoid short-term loans with high interest rates.
The proposal has gained the support of Google’s head of payments, Caesar Sengupta, and Democratic lawmakers including U.S. Senators Elizabeth Warren and Chris Van Hollen and Representatives Ayanna Pressley and Jesús García.
Great to see today’s news on a real-time payments system in the US! We @Google welcome the Fed’s leadership here. This is a good step toward more economic opportunity and financial inclusion for everyone. https://t.co/Slb3jxFeTF
— Caesar Sengupta (@caesars) August 6, 2019
In a statement, Warren, who is campaigning for the Democratic presidential nomination, said “I’m glad the Fed has finally taken action to ensure that people living paycheck-to-paycheck don’t have to wait up to five days for a check to clear so that they can pay their rent, cover child care, or pick up groceries. Today’s Fed action will also help small businesses by making payments from customers available more quickly. I look forward to working with the Fed to ensure a swift and smooth implementation of this system.”
Comments about FedNow will be accepted for 90 days after the proposal is published in the Federal Register.
Cybereason, which uses machine learning to increase the number of endpoints a single analyst can manage across a network of distributed resources, has raised $200 million in new financing from SoftBank Group and its affiliates.
It’s a sign of the belief that SoftBank has in the technology, since the Japanese investment firm is basically doubling down on commitments it made to the Boston-based company four years ago.
The company first came to our attention five years ago when it raised a $25 million financing from investors including CRV, Spark Capital and Lockheed Martin.
Cybereason’s technology processes and analyzes data in real-time across an organization’s daily operations and relationships. It looks for anomalies in behavior across nodes on networks and uses those anomalies to flag suspicious activity.
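Cybereason doesn’t publish its models, but the flagging approach described above can be sketched with something as simple as a z-score over per-node event counts. This is a toy illustration of the general technique, not the company’s actual method:

```python
import statistics

# Toy sketch of behavioral anomaly flagging across network nodes: flag
# any node whose event count sits far from the fleet's mean, measured
# in standard deviations. Illustrative only; real EDR models are far
# richer and Cybereason's are unpublished.

def flag_anomalies(counts: dict, z_cutoff: float = 1.5) -> list:
    """Return node names whose counts deviate beyond the z-score cutoff."""
    values = list(counts.values())
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # perfectly uniform fleet: nothing stands out
    return [node for node, c in counts.items()
            if abs(c - mean) / stdev > z_cutoff]

counts = {"host-a": 102, "host-b": 98, "host-c": 101, "host-d": 97, "host-e": 450}
print(flag_anomalies(counts))  # ["host-e"]
```

A real system would score many behavioral features per node rather than one count, but the core move is the same: model the baseline, then surface the outliers for an analyst.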
The company also provides reporting tools to inform customers of the root cause, the timeline, the person involved in the breach or breaches, what tools they use and what information was being disseminated within and outside of the organization.
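Cybereason's actual models are proprietary, but the general idea of flagging behavioral outliers across endpoints can be sketched with a toy robust-statistics check. The hostnames, event counts, and threshold below are all illustrative, not drawn from the product:

```python
from statistics import median

def flag_anomalies(event_counts, threshold=3.5):
    """Flag hosts whose activity is a robust outlier vs. the fleet median."""
    counts = list(event_counts.values())
    med = median(counts)
    # Median absolute deviation: resistant to the very outliers we're hunting.
    mad = median(abs(n - med) for n in counts) or 1
    return [host for host, n in event_counts.items()
            if 0.6745 * abs(n - med) / mad > threshold]

# One workstation making roughly 10x the outbound connections of its peers
activity = {"ws-01": 40, "ws-02": 38, "ws-03": 42, "ws-04": 41, "ws-05": 400}
print(flag_anomalies(activity))  # ['ws-05']
```

A real system layers many such signals (process trees, network destinations, timing) rather than a single count, but the flagged anomalies feed exactly the kind of root-cause reporting described above.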
For founder Lior Div, Cybereason’s work is the continuation of the six years of training and service he spent working with the Israeli army’s 8200 Unit, the military incubator for half of the security startups pitching their wares today. After his time in the military, Div worked for the Israeli government as a private contractor reverse engineering hacking operations.
Over the last two years, Cybereason has expanded the scope of its service to a network that spans 6 million endpoints tracked by 500 employees with offices in Boston, Tel Aviv, Tokyo and London.
“Cybereason’s big data analytics approach to mitigating cyber risk has fueled explosive expansion at the leading edge of the EDR domain, disrupting the EPP market. We are leading the wave, becoming the world’s most reliable and effective endpoint prevention and detection solution because of our technology, our people and our partners,” said Div, in a statement. “We help all security teams prevent more attacks, sooner, in ways that enable understanding and taking decisive action faster.”
The company said it will use the new funding to accelerate its sales and marketing efforts across all geographies and push further ahead with research and development to make more of its security operations autonomous.
“Today, there is a shortage of more than three million level 1-3 analysts,” said Yonatan Striem-Amit, chief technology officer and co-founder of Cybereason, in a statement. “The new autonomous SOC enables SOC teams of the future to harness technology where manual work is being relied on today and it will elevate L1 analysts to spend time on higher value tasks and accelerate the advanced analysis L3 analysts do.”
That attack, which according to Cybereason was either conducted by Chinese-backed actors or made to look like it was, targeted a select group of users in an effort to acquire cell phone records.
As we wrote at the time:
… hackers have systematically broken in to more than 10 cell networks around the world to date over the past seven years to obtain massive amounts of call records — including times and dates of calls, and their cell-based locations — on at least 20 individuals.
Researchers at Boston-based Cybereason, who discovered the operation and shared their findings with TechCrunch, said the hackers could track the physical location of any customer of the hacked telcos — including spies and politicians — using the call records.
Lior Div, Cybereason’s co-founder and chief executive, told TechCrunch it’s “massive-scale” espionage.
Call detail records — or CDRs — are the crown jewels of any intelligence agency’s collection efforts. These call records are highly detailed metadata logs generated by a phone provider to connect calls and messages from one person to another. Although they don’t include the recordings of calls or the contents of messages, they can offer detailed insight into a person’s life. The National Security Agency has for years controversially collected the call records of Americans from cell providers like AT&T and Verizon (which owns TechCrunch), despite the questionable legality.
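To see why metadata alone is so revealing, consider a stripped-down illustration (the record fields and phone numbers here are invented for the sketch): every call a provider connects logs who talked to whom, when, and through which cell tower, which is enough to reconstruct a rough location timeline for any subscriber without ever touching call audio.

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    caller: str     # originating number
    callee: str     # receiving number
    timestamp: str  # ISO-8601; no audio or message content is stored
    cell_id: str    # tower that handled the subscriber's end of the call

def location_timeline(records, number):
    """Where was this subscriber, and when? Metadata only."""
    hits = sorted((r for r in records if number in (r.caller, r.callee)),
                  key=lambda r: r.timestamp)
    return [(r.timestamp, r.cell_id) for r in hits]

records = [
    CallRecord("+15550001", "+15550002", "2019-08-01T09:15:00", "tower-downtown"),
    CallRecord("+15550003", "+15550001", "2019-08-01T13:40:00", "tower-airport"),
]
print(location_timeline(records, "+15550001"))
# [('2019-08-01T09:15:00', 'tower-downtown'), ('2019-08-01T13:40:00', 'tower-airport')]
```

Scale that up to seven years of records across 10 telcos and the "crown jewels" description is no exaggeration.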
It’s not the first time that Cybereason has uncovered major security threats.
Back when it had just raised capital from CRV and Spark, Cybereason’s chief executive was touting its work with a defense contractor who’d been hacked. Again, the suspected culprit was the Chinese government.
As we reported, during one of the early product demos for a private defense contractor, Cybereason identified a full-blown attack by the Chinese — ten thousand usernames and passwords were leaked, and the attackers had access to nearly half of the organization on a daily basis.
The security breach was too sensitive to be shared with the press, but Div says that the FBI was involved and that the company had no indication that they were being hacked until Cybereason detected it.
In an era of massive data breaches, most recently the Capital One fiasco, the risk of a cyberattack and the costly consequences are the top existential threat to corporations big and small. At TechCrunch’s first-ever enterprise-focused event (p.s. early bird sales end August 9), that topic will be front and center throughout the day.
That’s why we’re delighted to announce United’s chief information security officer Emily Heath will join TC Sessions: Enterprise in San Francisco on September 5, where we will discuss and learn how one of the world’s largest airlines keeps its networks safe.
At United, Heath oversees the airline’s cybersecurity program and its IT regulatory, governance and risk management.
The U.S.-based airline has more than 90,000 employees serving 4,500 flights a day to 338 airports, including New York, San Francisco, Los Angeles and Washington D.C.
A native of Manchester, U.K., Heath is a former police detective in the U.K. Financial Crimes Unit, where she led investigations into international investment fraud, money laundering, and large-scale cases of identity theft — and ran joint investigations with the FBI, the SEC, and London’s Serious Fraud Office.
Heath and her teams have been the recipients of CSO Magazine’s CSO50 Awards for their work in cybersecurity and risk.
At TC Sessions: Enterprise, Heath will join a panel of cybersecurity experts to discuss security on enterprise networks large and small — from preventing data leaks to keeping bad actors out of the network — where we’ll learn how a modern CSO moves fast without breaking things.
Join hundreds of today’s leading enterprise experts for this single-day event when you purchase a ticket to the show. $249 Early Bird sale ends Friday, August 9. Make sure to grab your tickets today and save $100 before prices go up.
Imagine a world where drones deliver emergency medical supplies to people in need. Or shuttle commuters from place to place avoiding ground traffic and breaking down the geographical divide between cities and suburbs.
Autonomous drones could spray pesticides on crops, monitor construction sites, or film adventure seekers skiing down mountains.
Potential drone applications are limited only by our imaginations… and our ability to operate them safely.
Right now the FAA is working together with the tech industry to build new rules of engagement to ensure that these unmanned aerial vehicles avoid a collision without the eyes of human pilots.
These regulatory hurdles are the last obstacle to letting the market for autonomous drones soar.
Aircraft, from Cessna 152s to Airbus A380s, are the most common vehicles that populate and cruise the airspace. And they all require a pilot’s eyes to operate.
In the case of commercial aircraft, they also require a Mode C Transponder, which determines the aircraft’s altitude. All commercial and most private aircraft also have to communicate with Air Traffic Control (ATC), people on the ground who logistically coordinate which aircraft taxi, take off, and land at airports.
This is a system with three layers of protection and vigilance to avoid horrific and often fatal crashes: ATC, Mode C, and the pilot’s eyes. With unmanned aircraft, this universally accepted system for keeping the National Airspace System safe will have to change.
Objects that don’t have humans on board can’t use eyes to avoid collision, or communicate and listen to ATC. As a substitute for humans and eyes, these vehicles must rely on several new systems, largely using artificial intelligence. And these systems will require that the current ecosystem operate digitally in real time.
UTM (unmanned aircraft system traffic management) strives for the digital sharing of each user’s planned flight details, with the goal that each user (e.g. each “pilot”) has the same situational awareness of the airspace. That way, as drones navigate the skies, they will know whether or not they’re impeding any other traffic in the sky. Companies like Airmap and Unifly are working on such systems. While UTM is a great idea, it’s hard to standardize such a system because it will be nearly impossible to convince or require 100 percent of manned pilots to use it. There will always be non-cooperative aircraft, just like there will always be automobile drivers who do not wear their seat belts even though the benefits of doing so are obvious and compelling.
Therefore, we will need to aggregate other layers of safety on top of UTM to ensure our airspace is safe. Specifically, the Mode C transponder will be updated to an Automatic Dependent Surveillance-Broadcast (ADS-B) sensor. This enhanced sensor broadcasts position data aircraft-to-aircraft and provides significant additional real-time precision to help both pilots and controllers achieve shared situational awareness.
As the last critical layer of safety and redundancy, if for whatever reason a manned aircraft pilot doesn’t enter her flight path in the UTM system or the aircraft doesn’t have a working ADS-B sensor, both common scenarios, the drone needs to have an equivalent to the pilot’s eyes to avoid another flying object.
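Iris Automation’s onboard algorithms aren’t public, but the geometric core of any detect-and-avoid layer is a closest-point-of-approach check: given two positions and velocities (from ADS-B, UTM, or an onboard camera track), when do the flight paths come closest, and how close? A simplified 2D sketch, with made-up numbers:

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Return (time_s, separation_m) at closest approach for two aircraft
    with positions in meters and velocities in m/s, assuming straight paths."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]    # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]  # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    # Minimize |relative position + t * relative velocity|; clamp t >= 0
    # (the past doesn't matter for avoidance).
    t = 0.0 if dv2 == 0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    sep = math.hypot(dx + dvx * t, dy + dvy * t)
    return t, sep

# Drone heading east at 20 m/s; manned aircraft 2 km ahead, 50 m offset, heading west
t, sep = closest_approach((0, 0), (20, 0), (2000, 50), (-40, 0))
print(f"closest approach in {t:.1f}s at {sep:.1f}m")  # in 33.3s at 50.0m
```

A real system works in 3D, handles turning targets and sensor noise, and triggers an avoidance maneuver when the predicted separation falls below a safety threshold; the straight-line check above is only the kernel of that reasoning.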
Bessemer portfolio company Iris Automation has developed a system that provides drones with eyes in the sky. The system uses a small module with a camera that feeds into a real-time, onboard computer vision and deep learning algorithm to detect, track, classify, and if needed, avoid, other objects in the airspace to keep the drone safe throughout its flight.
At present, Iris is the only company with an FAA beyond visual line-of-sight (BVLOS) waiver and is the only certified detect and avoid (DAA) system in the market. The BVLOS system doesn’t require a visual observer on the ground and the entire system fits into the palm of your hand, weighing only 350 grams.
One day drones will ubiquitously operate in our airspace preventing disasters and making our lives safer, easier, and better. They’ll put out fires, deliver late night take out, and inspect our infrastructure, such as bridges, railways, and pipelines.
In order for all this to happen, it will be necessary for pilots, drone users, manufacturers, and the FAA to embrace and use this new digital technology and ecosystem. When they do, the virtually endless possibilities associated with drone technology can be realized in a way that will make the skies even safer.
Grab popcorn. As Internet fights go this one deserves your full attention — because the fight is over your attention. Your eyeballs and the creepy ads that trade data on you to try to swivel ’em.
In the blue corner, the Interactive Advertising Bureau’s CEO, Randall Rothenberg, who has been taking to Twitter increasingly loudly in recent days to savage Europe’s privacy framework, the GDPR, and bleat dire warnings about California’s Consumer Privacy Act (CCPA) — including amplifying studies he claims show “the negative impact” on publishers.
Exhibit A, tweeted August 1:
NB: The IAB is a mixed-membership industry organization which combines advertisers, brands, publishers, data brokers and adtech platform tech giants — including the dominant adtech duopoly, Google and Facebook, who take home ~60% of digital ad spend. The only entity capable of putting a dent in the duopoly, Amazon, is also in the club. Its membership reflects the sprawling interests attached to the online ad industry, and, well, the personal data that currently feeds it (your eyeballs again!), although some members clearly have pots more money to spend on lobbying against digital privacy regs than others.
In what now looks to have been a deleted tweet last month, Rothenberg publicly professed himself proud to have Facebook as a member of his ‘publisher defence’ club. Though, admittedly, per the above tweet, he’s also worried about brands and retailers getting “killed”. He doesn’t need to worry about Google and Facebook’s demise because that would just be ridiculous.
Now, in the — I wish I could call it ‘red top’ corner, except these newspaper guys are anything but tabloid — we find premium publishers biting back at Rothenberg’s attempts to trash-talk online privacy legislation.
Here’s the New York Times’ data governance & privacy guy, Robin Berjon, demolishing Rothenberg via the exquisite medium of quote-tweet…
One of the primary reasons we need the #GDPR and #CCPA (and more) today is because the @iab, under @r2rothenberg's leadership, has been given 20 years to self-regulate and has used the time to do [checks notes] nothing whatsoever.https://t.co/hBS9d671LU
— Robin Berjon (@robinberjon) August 1, 2019
I’m going to quote Berjon in full because every single tweet packs a beautifully articulated punch:
Next time Facebook talks about how it can self-regulate its access to data I suggest you cc that entire thread.
Also chipping in on Twitter to champion Berjon’s view about the IAB’s leadership vacuum in cleaning up the creepy online ad complex, is Aram Zucker-Scharff, aka the ad engineering director at — checks notes — The Washington Post.
His punch is more of a jab — but one that’s no less painful for the IAB’s current leadership.
“I say this rarely, but this is a must read,” he writes, in a quote tweet pointing to Berjon’s entire thread.
I say this rarely, but this is a must read, Thread: https://t.co/FxKmT9bp7r
— Aram Zucker-Scharff (@Chronotope) August 2, 2019
Another top tier publisher’s commercial chief also told us in confidence that they “totally agree with Robin” — although they didn’t want to go on the record today.
In an interesting twist to this ‘mixed member online ad industry association vs people who work with ads and data at actual publishers’ slugfest, Rothenberg replied to Berjon’s thread, literally thanking him for the absolute battering.
“Yes, thank you – that’s exactly where we’re at & why these pieces are important!” he tweeted, presumably still dazed and confused from all the body blows he’d just taken. “@iab supports the competitiveness of the hundreds of small publishers, retailers, and brands in our global membership. We appreciate the recognition and your explorations,@robinberjon.”
Yes, thank you – that’s exactly where we’re at & why these pieces are important! @iab supports the competitiveness of the hundreds of small publishers, retailers, and brands in our global membership. We appreciate the recognition and your explorations, @robinberjon & @Bershidsky https://t.co/WDxrWIyHXd
— Randall Rothenberg (@r2rothenberg) August 2, 2019
Rothenberg also took the time to thank Bloomberg columnist, Leonid Bershidsky, who’d chipped into the thread to point out that the article Rothenberg had furiously retweeted actually says the GDPR “should be enforced more rigorously against big companies, not that the GDPR itself is bad or wrong”.
Who is Bershidsky? Er, just the author of the article Rothenberg tried to nega-spin. So… uh… owned.
May I point out that the piece that's cited here (mine) says the GDPR should be enforced more rigorously against big companies, not that the GDPR itself is bad or wrong?
— Leonid Bershidsky (@Bershidsky) August 1, 2019
But there’s more! Berjon tweeted a response to Rothenberg’s thanks for what the latter tortuously referred to as “your explorations” — I mean, the mind just boggles as to what he was thinking to come up with that euphemism — thanking him for reversing his position on GDPR, and for reversing his prior leadership vacuum on supporting robustly enforced online privacy laws.
“It’s great to hear that you’re now supporting strong GDPR enforcement,” he writes. “It’s indeed what most helps the smaller players. A good next step to this conversation would be an @iab statement asking to transpose the GDPR to US federal law. Want to start drafting something?”
It's great to hear that you're now supporting strong GDPR enforcement. It's indeed what most helps the smaller players. A good next step to this conversation would be an @iab statement asking to transpose the GDPR to US federal law. Want to start drafting something?
— Robin Berjon (@robinberjon) August 2, 2019
We’ve asked the IAB if, in light of Rothenberg’s tweet, it now wishes to share a public statement in support of transposing the GDPR into US law. We’ll be sure to update this post if it says anything at all.
We’ve also screengrabbed the vinegar strokes of this epic fight — as an insurance policy against any further instances of the IAB hitting the tweet delete button. (Plus, I mean, you might want to print it out and get it framed.)
Some light related reading can be found here:
Another day, another massive data breach.
This time it’s the financial giant and credit card issuer Capital One, which revealed on Monday a credit file breach affecting 100 million Americans and 6 million Canadians. Consumers and small businesses affected are those who obtained one of the company’s credit cards dating back to 2005.
That includes names, addresses, phone numbers, dates of birth, self-reported income and more credit card application data — including over 140,000 Social Security numbers in the U.S., and more than a million in Canada.
The FBI already has a suspect in custody. Seattle resident and software developer Paige A. Thompson, 33, was arrested and detained pending trial. She’s been accused of stealing data by breaching a web application firewall, which was supposed to protect it.
Sound familiar? It should. Just last week, credit rating giant Equifax settled for more than $575 million over a data breach it had — and hid from the public for several months — two years prior.
Why should we be surprised? Equifax faced zero fallout until its eventual fine. All talk, much bluster, but otherwise little action.
Equifax’s chief executive Richard Smith “retired” before he was fired, allowing him to keep his substantial pension package. Lawmakers grilled the company, but nothing happened. An investigation launched by the former head of the Consumer Financial Protection Bureau, the governmental body responsible for protecting consumers from fraud, went nowhere; the agency declined to pursue the company. The FTC took its sweet time to issue its fine — which amounted to about 20% of the company’s annual revenue for 2018. For one of the most damaging breaches to the U.S. population since the breach of classified vetting files at the Office of Personnel Management in 2015, Equifax got off lightly.
Legislatively, nothing has changed. Equifax remains as much of a “victim” in the eyes of the law as it was before — technically true, but much to the ire of the millions affected who were forced to freeze their credit as a result.
Mark Warner, a Democratic senator serving Virginia, along with his colleague and since-turned presidential candidate Elizabeth Warren, was tough on the company, calling for it to do more to protect consumer data. With his colleagues, he called on the credit agencies to face penalties for their top brass and punishing fines to hold the companies accountable — and to send a message to others that they can’t play fast and loose with our data again.
But Congress didn’t bite. Warner told TechCrunch at the time that there was “a failure of the company, but also of lawmakers” for not taking action.
Lo and behold, it happened again. Without a congressional intervention, Capital One is likely to face largely the same rigmarole as Equifax did.
Blame the lawmakers all you want. They had their part to play in this. But fool us twice, shame on the credit companies for not properly taking action in the first place.
The Equifax incident should have sparked a fire under the credit giants. The breach was the canary in the coal mine. We watched and waited to see what would happen as the canary’s lifeless body emerged — but, much to the American public’s chagrin, no action came of it. The companies continued on with the mentality that “it could happen to us, but probably won’t.” It was always going to happen again unless there was something to force the companies to act.
Companies continue to vacuum up our data — knowingly and otherwise — and don’t do enough to protect it. As much as we can have laws to protect consumers from this happening again, these breaches will continue so long as the companies continue to collect our data and not take their data security responsibilities seriously.
We had an opportunity to stop these kinds of breaches from happening again, yet in the two years since we’ve barely grappled with the basic concepts of internet security. All we have to show for it is a meager fine.
Thompson faces five years in prison and a fine of up to $250,000.
Everyone else faces just another major intrusion into their personal lives. Not at the hands of the hacker per se, but the companies that collect our data — with our consent and often without — and take far too many liberties with it.
Google’s Pixel 4 is coming out later this year, and it’s getting the long-reveal treatment thanks to a decision this year from Google to go ahead and spill some of the beans early, rather than saving everything for one big, final unveiling closer to availability. A new video posted by Google today about the forthcoming Pixel 4 (which likely won’t actually be available until fall) shows off some features new to this generation: Motion control and face unlock.
The new “Motion Sense” feature in the Pixel 4 will detect waves of your hand and translate them into software control, including skipping songs, snoozing alarms and quieting incoming phone call alerts, with more planned features to come, according to Google. It’s based on Soli, a radar-based fine motion detection technology that Google first revealed at its annual I/O developer conference in 2015. Soli can detect very fine movements, including fingers pinched together to mimic a watch-winding motion, and it got approval from the FCC in January, hinting it would finally arrive in production devices this year.
Pixel 4 is the first shipping device to include Soli, and Google says it’ll be available in “select Pixel countries” at launch (likely due to regulatory approval requirements in each country where it rolls out).
Google also teased “Face unlock,” something it has supported in Android previously — but Google is doing it very differently with the Pixel 4 than it has been handled on Android in the past. Once again, Soli is part of its implementation, turning on the face unlock sensors in the device as it detects your hand reaching to pick up the device. Google says this should mean the phone will be unlocked by the time you’re ready to use it, since it does all of this on the fly, and it works in pretty much any orientation.
Face unlock will be supported for authorizing payments and logging into Android apps, as well, and all of the facial recognition processing done for face unlock will occur on the device — a privacy-oriented feature that’s similar to how Apple handles its own Face ID. In fact, Google also will be storing all the facial recognition data securely in its own dedicated on-device Titan M security chip, another move similar to Apple’s own approach.
Google made the Pixel 4 official and tweeted photos (or maybe photorealistic renders) of the new smartphone back in June, bucking the trend of keeping things unconfirmed until an official reveal closer to release. Based on this update, it seems likely we can expect to learn more about the new smartphone ahead of its availability, which is probably going to happen sometime around October, based on past behavior.
Hello, weekenders. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.
Last week, I talked about how services like Instagram had moved beyond letting their algorithms take over the curation process as they tested minimizing key user metrics such as “like” counts on the platform.
The big news stories this week intimately involved the government poking its head into the tech industry. What was clear between the two biggest stories, the DoJ approving the Sprint/T-Mobile merger and the FTC giving Facebook a $5 billion slap on the wrist, is that big tech has little to worry about its inertia being contained.
The argument from Sprint and T-Mobile that it was better to have three big telecom companies in the U.S. rather than two contenders and two pretenders seems to have stuck. Similarly, Facebook seems to have done a worthy job of indicating that it will handle the complicated privacy stuff but let the government orgs see what it’s up to.
Fundamentally, none of these orgs seem to want to harm the growth of these American tech companies and I have a tough time believing that perspective is going to magically get more toothy in some of these early antitrust investigations. The government might be making a more concerted effort to understand how these businesses are structured, but even focusing solely on something like the cloud businesses of Microsoft, Google and Amazon, I have little doubt that the government is going to spend an awfully long time in the observation phase.
The danger is erraticism, and on that front the worst government fear for tech isn’t a three-letter agency; it’s the Twitter ramblings of POTUS.
Onto the rest of the week’s news.
Here are a few big news items from big companies, with green links to all the sweet, sweet added context:
How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:
Our premium subscription service had another week of interesting deep dives. This week, my colleague Danny spoke with some top VCs about why fintech startups have been raising massive amounts of cash and he seemed to walk away with some interesting impressions.
“…The biggest challenge that has faced fintech companies for years — really, the industry’s consistent Achilles’ heel — is the cost of acquiring a customer. Financial customer relationships are incredibly valuable, and the cost of acquiring a user for any product is among the most expensive in every major channel.
And those costs are going up…”
Here are some of our other top reads for premium subscribers.
We’re excited to announce The Station, a new TechCrunch newsletter all about mobility. Each week, in addition to curating the biggest transportation news, Kirsten Korosec will provide analysis, original reporting and insider tips. Sign up here to get The Station in your inbox beginning in August.
The Senate Select Committee on Intelligence today released the first volume of its bipartisan investigation into Russia’s attempts to interfere with the 2016 U.S. elections.
Helmed by Select Committee Chairman Richard Burr, the Republican from North Carolina, and Virginia Democratic Senator Mark Warner, who serves as Vice Chairman, the committee’s report “Russian Efforts Against Election Infrastructure,” details the unclassified summary findings on election security.
Over two and a half years, the committee has held 15 open hearings, interviewed over 200 witnesses, and reviewed nearly 400,000 documents, according to a statement. It will publish additional volumes from its investigation over the next year.
“In 2016, the U.S. was unprepared at all levels of government for a concerted attack from a determined foreign adversary on our election infrastructure. Since then, we have learned much more about the nature of Russia’s cyber activities and better understand the real and urgent threat they pose,” Committee Chairman Burr said in a statement. “The Department of Homeland Security and state and local elections officials have dramatically changed how they approach election security, working together to bridge gaps in information sharing and shore up vulnerabilities.”
Both Sen. Burr and Sen. Warner said that additional steps still needed to be taken.
“[There’s] still much more we can and must do to protect our elections. I hope the bipartisan findings and recommendations outlined in this report will underscore to the White House and all of our colleagues, regardless of political party, that this threat remains urgent, and we have a responsibility to defend our democracy against it.”
Among the Committee’s findings was that Russian hackers exploited the seams between federal and state authorities. State election officials, the report found, were not sufficiently warned or prepared to handle an attack from a state actor.
The warnings provided by the Federal Bureau of Investigation and the Department of Homeland Security weren’t detailed enough, nor did they contain enough relevant information to encourage the states to take the threats more seriously, the report indicated.
More work still needs to be done, according to the Committee. DHS needs to coordinate its efforts with state officials much more closely. But states need to do more as well to ensure that new voting machines have a voter-verified paper trail.
So does Congress. The committee report underscores that Congress needs to evaluate the results of the $380 million in state security grants issued under the Help America Vote Act and ensure that additional funding is available to address any security gaps in voting systems and technologies around the U.S.
Finally, the U.S. needs to create more appropriate deterrence mechanisms to enable the country to respond effectively to cyber attacks on elections.
The Committee’s support for greater spending on election security, and for refining electoral policy to ensure safe and secure access to the ballot, comes as Senate Majority Leader Mitch McConnell of Kentucky has blocked two election security measures from coming before the Senate floor for a vote.
New York Democratic Senator Chuck Schumer tried to get consent to pass a House bill that requires the use of paper ballots and includes new funding for the Election Assistance Commission.
In a statement explaining his rejection of the bill, McConnell told The Hill, “Clearly this request is not a serious effort to make a law. Clearly something so partisan that it only received one single solitary Republican vote in the House is not going to travel through the Senate by unanimous consent.”
McConnell also rejected a consent motion to pass legislation that would require candidates, campaign officials, and family members to reach out to the FBI if they received offers of assistance from foreign governments.
This chimes with a court filing that emerged earlier this year — which also suggested Facebook knew of concerns about the controversial data company earlier than it had publicly said, including in repeat testimony to a U.K. parliamentary committee last year.
Facebook only finally kicked the controversial data firm off its ad platform in March 2018 when investigative journalists had blown the lid off the story.
In a section on “red flags” raised about scandal-hit Cambridge Analytica’s potential misuse of Facebook user data, the SEC complaint reveals that Facebook already knew of concerns raised by staffers in its political advertising unit — who described CA as a “sketchy (to say the least) data modeling company that has penetrated our market deeply.”
Amid a flurry of major headlines for the company yesterday, including a $5 billion FTC fine — all of which was selectively dumped on the same day media attention was focused on Mueller’s testimony before Congress — Facebook quietly disclosed it had also agreed to pay $100 million to the SEC to settle a complaint over failures to properly disclose data abuse risks to its investors.
This tidbit was slipped out toward the end of a lengthy blog post by Facebook general counsel Colin Stretch, which focused on responding to the FTC order with promises to turn over a new leaf on privacy.
As my TC colleague Devin Coldewey wrote yesterday, the FTC settlement amounts to a ‘“get out of jail” card for the company’s senior execs by granting them blanket immunity from known and unknown past data crimes.
“Historic fine” is therefore quite the spin to put on being rich enough and powerful enough to own the rule of law.
And by nesting its disclosure of the SEC settlement inside effusive privacy washing discussion of the FTC’s “historic” action, Facebook looks to be hoping to detract attention from some really awkward details in its narrative about the Cambridge Analytica scandal that highlight ongoing inconsistencies and contradictions, to put it politely.
The SEC complaint underlines that Facebook staff were aware of the dubious activity of Cambridge Analytica on its platform prior to the December 2015 Guardian story — which CEO Mark Zuckerberg has repeatedly claimed was when he personally became aware of the problem.
Asked about the details in the SEC document, a Facebook spokesman pointed us to comments it made earlier this year when court filings emerged that also suggested staff knew in September 2015. In this statement, from March, it says “employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service,” and further claims it was “not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015,” adding: “When Facebook learned about Kogan’s breach of Facebook’s data use policies, we took action.”
Facebook staffers were also aware of concerns about Cambridge Analytica’s “sketchy” business when, around November 2015, Facebook employed psychology researcher Joseph Chancellor — aka the co-founder of app developer GSR — who, as Facebook has sought to paint it, is the “rogue” developer that breached its platform policies by selling Facebook user data to Cambridge Analytica.
This means Facebook employed a man who had breached its own platform policies by selling user data to a data company which Facebook’s own staff had urged, months prior, be investigated for policy-violating scraping of Facebook data, per the SEC complaint.
Fast-forward to March 2018 and press reports revealing the scale and intent of the Cambridge Analytica data heist blew up into a global data scandal for Facebook, wiping billions off its share price.
The really awkward question that Facebook has continued not to answer — and which every lawmaker, journalist and investor should therefore be putting to the company at every available opportunity — is why it employed GSR co-founder Chancellor in the first place.
Chancellor has never been made available by Facebook to the media for questions. He also quietly left Facebook last fall — we must assume with a generous exit package in exchange for his continued silence. (Assume because neither Facebook nor Chancellor has explained how he came to be hired.)
At the time of his departure, Facebook also made no comment on the reasons for Chancellor leaving — beyond confirming he had left.
Facebook has never given a straight answer on why it hired Chancellor. See, for example, its written response to a Senate Commerce Committee’s question — which is pure, textbook misdirection, responding with irrelevant details that do not explain how Facebook came to identify him for a role at the company in the first place (“Mr. Chancellor is a quantitative researcher on the User Experience Research team at Facebook, whose work focuses on aspects of virtual reality. We are investigating Mr. Chancellor’s prior work with Kogan through counsel”).
What was the outcome of Facebook’s internal investigation of Chancellor’s prior work? We don’t know because again Facebook isn’t saying anything.
More importantly, the company has continued to stonewall on why it hired someone intimately linked to a massive political data scandal that’s now just landed it a “historic fine.”
We asked Facebook to explain why it hired Chancellor — given what the SEC complaint shows it knew of Cambridge Analytica’s “sketchy” dealings — and got the same non-answer in response: “Mr Chancellor was a quantitative researcher on the User Experience Research team at Facebook, whose work focused on aspects of virtual reality. He is no longer employed by Facebook.”
We’ve asked Facebook to clarify why Chancellor was hired despite internal staff concerns about the very company his firm was set up to sell Facebook data to; and how, of all the professionals it could have hired, Facebook identified Chancellor in the first place — and will update this post with any response. (A search for “quantitative researcher” on LinkedIn’s platform returns more than 177,000 results from professionals using the descriptor in their profiles.)
Earlier this month a U.K. parliamentary committee accused the company of contradicting itself in separate testimonies on both sides of the Atlantic over knowledge of improper data access by third-party apps.
The committee grilled multiple Facebook and Cambridge Analytica employees (and/or former employees) last year as part of a wide-ranging inquiry into online disinformation and the use of social media data for political campaigning — calling in its final report for Facebook to face privacy and antitrust probes.
A spokeswoman for the DCMS committee told us it will be writing to Facebook next week to ask for further clarification of testimonies given last year in light of the timeline contained in the SEC complaint.
Under questioning in Congress last year, Facebook founder Zuckerberg also personally told Congressman Mike Doyle that Facebook had first learned about Cambridge Analytica using Facebook data as a result of the December 2015 Guardian article.
Yet, as the SEC complaint underlines, Facebook staff had raised concerns months earlier. So, er, awkward.
There are more awkward details in the SEC complaint that Facebook seems keen to bury, too — including that as part of a signed settlement agreement, GSR’s other co-founder, Aleksandr Kogan, told it in June 2016 that he had, in addition to transferring modeled personality profile data on 30 million Facebook users to Cambridge Analytica, sold the latter “a substantial quantity of the underlying Facebook data” on the same set of individuals he’d profiled.
This U.S. Facebook user data included personal information such as names, locations, birthdays, gender and a sub-set of page likes.
Raw Facebook data being grabbed and sold does add some rather colorful shading around the standard Facebook line — i.e. that its business is nothing to do with selling user data. Colorful because while Facebook itself might not sell user data — it just rents access to your data and thereby sells your attention — the company has built a platform that others have repurposed as a marketplace for exactly that, and done so right under its nose…
The SEC complaint also reveals that more than 30 Facebook employees across different corporate groups learned of Kogan’s platform policy violations — including senior managers in its comms, legal, ops, policy and privacy divisions.
The U.K.’s data watchdog previously identified three senior managers at Facebook who it said were involved in email exchanges prior to December 2015 regarding the GSR/Cambridge Analytica breach of Facebook user data, though it has not made public the names of the staff in question.
The SEC complaint suggests a far larger number of Facebook staffers knew of concerns about Cambridge Analytica earlier than the company narrative has implied up to now, although the exact timeline of when each staffer knew is not clear from the document; the period discussed runs from September 2015 to April 2017.
Despite 30+ Facebook employees being aware of GSR’s policy violation and misuse of Facebook data — by April 2017 at the latest — the company leaders had put no reporting structures in place for them to be able to pass the information to regulators.
“Facebook had no specific policies or procedures in place to assess or analyze this information for the purposes of making accurate disclosures in Facebook’s periodic filings,” the SEC notes.
The complaint goes on to document various additional “red flags” it says were raised to Facebook throughout 2016 suggesting Cambridge Analytica was misusing user data — including various press reports on the company’s use of personality profiles to target ads; and staff in Facebook’s own political ads unit being aware that Cambridge Analytica was naming Facebook and Instagram ad audiences by personality trait to certain clients, including advocacy groups, a commercial enterprise and a political action committee.
“Despite Facebook’s suspicions about Cambridge and the red flags raised after The Guardian article, Facebook did not consider how this information should have informed the risk disclosures in its periodic filings about the possible misuse of user data,” the SEC adds.
The FTC just announced the details of its settlement agreement with Facebook over years of privacy practices in violation of a previous order. To say the settlement is favorable to Facebook, even with the record $5 billion penalty, is an understatement; the company’s lawyers are probably popping champagne right about now. Here’s why.
$5 billion may sound like a lot, but in this context it is simply not a meaningful amount. Leaving aside that Facebook at this point probably makes that in a month, it simply does not correspond to the harm done or rewards reaped.
It’s highly likely that Facebook’s “unjust enrichment,” made as a result of the forbidden user data collection in which it engaged, is more than $5 billion. As Commissioner Rohit Chopra says in his dissenting statement, “breaking the law has to be riskier than following it.” In other words, you shouldn’t be able to steal $100, then pay a fine of $50 to get off the hook.
“The fact that Facebook’s stock value increased with the disclosure of a potential $5 billion penalty may suggest that the market believes that a penalty at this level makes a violation profitable,” wrote Commissioner Rebecca Kelly Slaughter in her own dissent.
In the case of Google, which in spirit is similar to this one, the settlement with the FTC amounted to several times the company’s unjust enrichment. Why isn’t that the case with Facebook? Because the investigation didn’t look into it.
No one likes it when serious investigations of wrongdoing (not that Facebook officially admits to any) drag on for too long, since in the meantime the wrongdoing may very well continue. But this case isn’t a simple one where Facebook may have violated one or two of the FTC’s prohibitions for a short period of time in 2014. The company ignored the government-ordered restrictions systematically for years, meriting an investigation on a similar scale.
Instead of getting deep into the questions of who was responsible, how much money was made, whether public statements were misleading, the extent of public harm, etc., the investigators opted to quickly establish a pattern of violating behavior and slap the company with a nice round number. (Let’s hope the antitrust investigation announced today is a bit more thorough.)
The brevity and limitations of the investigation are evident from the fact that…
“The Commissioners supporting this outcome do not cite a single deposition of Zuckerberg or any other Facebook officer or director,” writes Chopra. Although there may have been off-record conversations or letters from execs in response to questions sent by investigators, they did not put Zuckerberg or Sandberg or any other big players in the hot seat. Seems fundamental when the investigation alleges complicity at the highest levels, right?
But not only were no executives put to the question…
“I started Facebook, and at the end of the day I’m responsible for what happens on our platform,” wrote Mark Zuckerberg last year during the fracas surrounding his questioning by Congress. Nor is that only his opinion. There is a great deal of precedent for leveling additional, complementary charges at executives alongside those aimed at the company. They might not even need testimony to do it:
“I believe there is already sufficient evidence, including through public statements, to support a charge against Mark Zuckerberg for violating the 2012 order,” writes Chopra, and Commissioner Slaughter concurred. Even if that weren’t the case, they could state with certainty that leadership, if it was not directly complicit in rulebreaking, at least failed in their responsibility to prevent it.
Going after individuals, however, may involve separate fact-finding work, expensive and time-consuming litigation, and of course the risk that, after all that, the judge rules against the FTC, officially exonerates the defendant and sets an unsavory precedent. They may have decided that risk was too great — though if revealing information comes to light tomorrow, individual charges may yet result.
It’s ordinary in settlements like this to “release” companies from claims that they violated an agreement — like a plea bargain where you get probation and no record in exchange for a fine and community service. But the Facebook settlement gives both the company and its executives blanket immunity, not just for any violations the FTC has claimed, but for any violations it hasn’t claimed.
In other words, it’s giving Facebook a blank slate not only for the violations it definitely committed, but for any it might have secretly committed between 2012 and 2018. “A release of this scope is unjustified by our investigation and unsupported by either precedent or sound public policy,” writes Slaughter. “I have not been able to find a single Commission order — certainly not one against a repeat offender — that contains a release as broad as this one,” concurs Chopra.
It’s extraordinary that a repeat offender that has shown a disdain for the FTC’s authority would get such comprehensive, top-to-bottom immunity. This isn’t just a plea bargain, it’s a plenary indulgence.
This was perhaps the FTC’s best chance to lay down strong rules as to what Facebook can and can’t do with user data going forward — especially considering the previous ones were shrugged off. Instead, apart from a few new rules like better notification of facial recognition systems, it basically just told Facebook it can do what it wants as long as it files the paperwork.
The settlement requires Facebook to document lots of things. If a new product is a potential risk, Facebook has to write a report on what data will be collected, how it will notify users, whether they can opt out, and how it is (and isn’t) planning to reduce that risk. Nowhere does the FTC spell out what constitutes unreasonable risk, minimum notification or opt-out requirements, or whether a product or strategy (like absorbing WhatsApp) is automatically suspect.
“It is akin to if federal regulators, instead of ordering automakers to install seatbelts, ordered them to document the pros and cons of installing seatbelts, and to decide for themselves whether it would be worthwhile,” writes Chopra.
As long as it files its paperwork, Facebook is free to decide what constitutes risk, damage to users, and how it should handle those things. It’s a bit like asking a bank robber to write a journal. But even if someone reads it and finds something objectionable…
Facebook must establish a Privacy Committee, Compliance Officers, and an Independent Assessor to make sure that the rules it sets for itself are sufficient and being followed sufficiently. Unfortunately, what they do is a whole lot of reviewing, certifying, and briefing, and no doing.
The Compliance Officers sign off on the privacy program, to be sure, but they have few specific goals, like prevent this or ensure that. The Assessor also lacks authority, so if they decide the privacy program is not working out, they simply register their complaint and wait for Facebook to justify itself.
The “independent” committee’s makeup will be highly affected by the powers that be at Facebook, which have enormous voting power and will be able to make it hard on any troublesome members. Even if they couldn’t, the committee has no power over management — it’s just another Facebook-issued stamp for Facebook-written paperwork.
Not pictured: revolving door at front entrance
As The Hill’s Harper Neidig points out: Sean Royall, Facebook’s head counsel in these proceedings, was deputy director at the FTC’s Competition Bureau (not the Bureau of Consumer Protection, which led this action) from 2001 to 2003. His boss at the bureau then was Joseph Simons — the current chairman of the FTC.
It’s probably just a coincidence.
Nothing in this order challenges the fundamental problem that over the last decade has increasingly caused friction between Facebook and both its users and (supposed) regulators: that its business model is predicated on mass collection of personal data on its users, which it distills then sells to advertisers.
That’s a business model that should give any consumer protection regulator pause, and yet this settlement is a tacit endorsement of it. The order really amounts to little more than additional paperwork for Facebook to fill out while it pursues its original course without any divergence.
To be fair, the FTC is a reactive agency and as such is limited in how much it can proactively require. But it doesn’t seem like it was testing those limits today. The decision not to litigate, the unimaginative penalty amount, and the eye-popping immunity grant suggest the agency was working comfortably within them and just wanted to get this thing out the door.
The requirements of the settlement were barely even considered on today’s earnings call, on which there appeared to be an understanding that it wouldn’t affect much if anything at all. Even the fear that Zuckerberg voiced earlier today that it would require hiring a thousand people who might otherwise be working on new products (a questionable claim, incidentally) went unaddressed.
This was an opportunity for the FTC to demonstrate that the U.S. is a venue where global internet companies like Facebook can still be held accountable for their actions. It was made clear today not only that a big check will change that, but that the check doesn’t even have to be that big.