Google today introduced a new mobile management and security solution, Android Enterprise Essentials, which, despite its name, is actually aimed at small to medium-sized businesses. The company explains this solution leverages Google’s experience in building Android Enterprise device management and security tools for larger organizations in order to come up with a simpler solution for those businesses with smaller budgets.
The new service includes the basics in mobile device management, with features that allow smaller businesses to require their employees to use a lock screen and encryption to protect company data. It also prevents users from installing apps outside the Google Play Store via the Google Play Protect service, and allows businesses to remotely wipe all the company data from phones that are lost or stolen.
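Under the hood, Google expresses Android Enterprise policies like these as a JSON policy object applied to enrolled devices. The fragment below is an illustrative sketch only, loosely modeled on the public Android Management API; the field names and values are assumptions, not an actual Essentials configuration.

```json
{
  "passwordRequirements": {
    "passwordQuality": "NUMERIC",
    "passwordMinimumLength": 6
  },
  "advancedSecurityOverrides": {
    "untrustedAppsPolicy": "DISALLOW_INSTALL"
  }
}
```

In this sketch, the first block would enforce a numeric lock screen of at least six digits, and the second would block installs from outside the Play Store.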
As Google explains, smaller companies often handle customer data on mobile devices, but many of today’s remote device management solutions are too complex for small business owners, and are often complicated to get up-and-running.
Android Enterprise Essentials attempts to make the overall setup process easier by eliminating the need to manually activate each device. And because the security policies are applied remotely, there’s nothing the employees themselves have to configure on their own phones. Instead, businesses that want to use the new solution will just buy Android devices from a reseller to hand out or ship to employees with policies already in place.
Though primarily aimed at smaller companies, Google notes the solution may work for select larger organizations that want to extend some basic protections to devices that don’t require more advanced management solutions. The new service can also help companies get started with securing their mobile device inventory, before they move up to more sophisticated solutions over time, including those from third-party vendors.
The company has been working to better position Android devices for use in workplace over the past several years, with programs like Android for Work, Android Enterprise Recommended, partnerships focused on ridding the Play Store of malware, advanced device protections for high-risk users, endpoint management solutions, and more.
Google says it will roll out Android Enterprise Essentials initially with distributors Synnex in the U.S. and Tech Data in the U.K. In the future, it will make the service available through additional resellers as it takes the solution global in early 2021. Google will also host an online launch event and demo in January for interested customers.
NTreatment, a technology company that manages electronic health and patient records for doctors and psychiatrists, left thousands of sensitive health records exposed to the internet because one of its cloud servers wasn’t protected with a password.
The cloud storage server was hosted on Microsoft Azure and contained 109,000 files, a large portion of which contained lab test results from third-party providers like LabCorp, medical records, doctor’s notes, insurance claims, and other sensitive health data for patients across the U.S., a class of data considered protected health information under the Health Insurance Portability and Accountability Act (HIPAA). Running afoul of HIPAA can result in steep fines.
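The root cause here, an Azure storage container left open to anonymous access, is simple to probe: a container with public access enabled will answer an unauthenticated List Blobs request. The sketch below shows how such a request URL is formed; the storage account and container names are hypothetical.

```python
# Sketch: how an unauthenticated Azure Blob Storage listing request is formed.
# Containers configured with "Blob" or "Container" public access answer this
# request with no credentials at all, which is the misconfiguration behind
# many cloud data leaks. Account and container names below are hypothetical.

def anonymous_list_url(account: str, container: str) -> str:
    """Build the unauthenticated List Blobs URL for a storage container."""
    return (
        f"https://{account}.blob.core.windows.net/{container}"
        "?restype=container&comp=list"
    )

url = anonymous_list_url("examplestorage", "records")
print(url)
```

If a plain GET of that URL returns an XML blob listing rather than an authorization error, the container is publicly readable.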
None of the data was encrypted, and nearly all of the sensitive files were viewable in the browser. Some of the medical records belonged to children.
TechCrunch found the exposed data as part of a separate investigation. It wasn’t initially clear who owned the storage server, but many of the electronic health records that TechCrunch reviewed in an effort to trace the source of the data spillage were tied to doctors and psychiatrists and healthcare workers working at hospitals or networks known to use nTreatment. The storage server also contained some internal company documents, including a non-disclosure agreement with a major prescriptions provider.
The data was secured on Monday after TechCrunch contacted the company. In an email, NTreatment co-founder Gregory Katz said the server was “used as a general purpose storage,” but did not say how long the server was exposed.
Katz said the company would notify affected providers and regulators of the incident.
It’s the latest in a series of incidents involving the exposure of medical data. Earlier this year we found a bug in LabCorp’s website that exposed thousands of lab results, and reported on the vast amounts of medical imaging floating around the web.
The UK government has squeezed the timetable for domestic telcos to stop installing 5G kit from Chinese suppliers, per the BBC, which reports that the deadline for installation of kit from so-called ‘high risk’ vendors is now September.
It had already announced a ban on telcos buying kit from Huawei et al by the end of this year — acting on national security concerns attached to companies that fall under the jurisdiction of Chinese state surveillance laws. But, according to the BBC, ministers are concerned carriers could stockpile kit now for near-term installation, creating a buffer for themselves, since the government has allowed them until 2027 to remove such kit from existing 5G networks. Maintaining already-installed equipment will also still be allowed until then.
A Telecommunications Security Bill which will allow the government to identify kit as a national security risk and ban its use in domestic networks is slated to be introduced to parliament tomorrow.
Digital secretary Oliver Dowden told the BBC he’s pushing for the “complete removal of high-risk vendors”.
In July the government said changes to the US sanction regime meant it could no longer manage the security risk attached to Chinese kit makers.
The move represented a major U-turn from the policy position announced in January — when the UK said it would allow Chinese vendors to play a limited role in supplying domestic networks. However, the plan faced vocal opposition from the government’s own back benches, as well as high-profile pressure from the US — which has pushed allies to expel Huawei entirely.
Alongside policies to restrict the use of high-risk 5G vendors, the UK has said it will take steps to encourage newcomers to enter the market, to tackle concerns that the resulting lack of suppliers introduces another security risk.
Publishing a supply chain diversification strategy for 5G today, Dowden warns that barring “high risk” vendors leaves the country “overly reliant on too few suppliers”.
“This 5G Diversification Strategy is a clear and ambitious plan to grow our telecoms supply chain while ensuring it is resilient to future trends and threats,” he writes. “It has three core strands: supporting incumbent suppliers; attracting new suppliers into the UK market; and accelerating the development and deployment of open-interface solutions.”
The government is putting an initial £250 million behind the 5G diversification plan to try to build momentum behind increasing competition and interoperability.
“Achieving this long term vision depends on removing the barriers that prevent new market entrants from joining the supply chain, investing in R&D to support the accelerated development and deployment of interoperable deployment models, and international collaboration and policy coordination between national governments and industry,” it writes.
In the short to medium term, the government says it will prioritize support for existing suppliers — so the likely near-term beneficiary of the strategy is Finland’s Nokia.
Though the government also says it will “seek to attract new suppliers to the UK market in order to start the process of diversification as soon as possible”.
“As part of our approach we will prioritise opportunities to build UK capability in key areas of the supply chain,” it writes, adding: “As we progress this activity we look forward to working with network operators in the UK, telecoms suppliers and international governments to achieve our shared goals of a more competitive and vibrant telecoms supply market.”
We’ve reached out to Huawei for comment on the new deadline for UK carriers to stop installing its 5G kit.
The Supreme Court will hear arguments on Monday in a case that could lead to sweeping changes to America’s controversial computer hacking laws — and affect how millions use their computers and access online services.
The Computer Fraud and Abuse Act was signed into federal law in 1986 and predates the modern internet as we know it, but it governs to this day what constitutes hacking — or “unauthorized” access to a computer or network. The controversial law was designed to prosecute hackers, but critics have dubbed it the “worst law” in the technology law books, saying its outdated and vague language fails to protect good-faith hackers who find and disclose security vulnerabilities.
At the center of the case is Nathan Van Buren, a former police sergeant in Georgia. Van Buren used his access to a police license plate database to search for an acquaintance in exchange for cash. Van Buren was caught, and prosecuted on two counts: accepting a kickback for accessing the police database, and violating the CFAA. The first conviction was overturned, but the CFAA conviction was upheld.
Van Buren may have been allowed to access the database by way of his police work, but whether he exceeded his access remains the key legal question.
Orin Kerr, a law professor at the University of California, Berkeley, said Van Buren v. United States was an “ideal case” for the Supreme Court to take up. “The question couldn’t be presented more cleanly,” he argued in a blog post in April.
The Supreme Court will try to clarify the decades-old law by deciding what the law means by “unauthorized” access. But that’s not a simple answer in itself.
How the Supreme Court will determine what “unauthorized” means is anybody’s guess. The court could define unauthorized access anywhere from violating a site’s terms of service to logging into a system that a person has no user account for.
Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford University’s Center for Internet and Society, said a broad reading of the CFAA could criminalize anything from lying on a dating profile to sharing the password to a streaming service or using a work computer for personal use in violation of an employer’s policies.
But the Supreme Court’s eventual ruling could also have broad ramifications on good-faith hackers and security researchers, who purposefully break systems in order to make them more secure. Hackers and security researchers have for decades operated in a legal grey area because the law as written exposes their work to prosecution, even if the goal is to improve cybersecurity.
Tech companies have for years encouraged hackers to privately reach out with security bugs. In return, the companies fix their systems and pay the hackers for their work. Mozilla, Dropbox, and Tesla are among the few companies that have gone a step further by promising not to sue good-faith hackers under the CFAA. Not all companies welcome the scrutiny, though; some have bucked the trend by threatening to sue researchers over their findings, and in some cases by actively launching legal action to prevent unflattering headlines.
Security researchers are no stranger to legal threats, but a decision by the Supreme Court that rules against Van Buren could have a chilling effect on their work, and drive vulnerability disclosure underground.
“If there are potential criminal (and civil) consequences for violating a computerized system’s usage policy, that would empower the owners of such systems to prohibit bona fide security research and to silence researchers from disclosing any vulnerabilities they find in those systems,” said Pfefferkorn. “Even inadvertently coloring outside the lines of a set of bug bounty rules could expose a researcher to liability.”
“The Court now has the chance to resolve the ambiguity over the law’s scope and make it safer for security researchers to do their badly-needed work by narrowly construing the CFAA,” said Pfefferkorn. “We can ill afford to scare off people who want to improve cybersecurity.”
The Supreme Court will likely rule on the case later this year, or early next.
U.S. Fertility, one of the largest networks of fertility clinics in the United States, has confirmed it was hit by a ransomware attack and that data was taken.
The company was formed in May as a partnership between Shady Grove Fertility, a fertility clinic with dozens of locations across the U.S. East Coast, and Amulet Capital Partners, a private equity firm that invests largely in the healthcare space. As a joint venture, U.S. Fertility now claims 55 locations across the U.S., including California.
In a statement, U.S. Fertility said that the hackers “acquired a limited number of files” during the month that they were in its systems, until the ransomware was triggered on September 14. That’s a common technique of data-stealing ransomware, which steals data before encrypting the victim’s network for ransom. Some ransomware groups publish the stolen files on their websites if their ransom demand isn’t paid.
U.S. Fertility said some personal information, like names and addresses, was taken in the attack. Some patients also had their Social Security numbers taken. But the company warned that the attack may have involved protected health information. Under U.S. law, that can include information about a person’s health or medical conditions, like test results and medical records.
When reached, Amulet spokesperson Melissa Sheer declined to comment further or answer any of our questions.
U.S. Fertility didn’t say why it took more than two months to publicly disclose the attack, but said in the notice that its disclosure was not delayed at the request of law enforcement.
This is the latest attack targeting the healthcare sector. In September, one of the largest hospital systems in the U.S., Universal Health Services, was hit by the Ryuk ransomware, forcing some affected emergency rooms to close and to turn patients away. Several other fertility clinics have been attacked by ransomware in recent months.
The Federal Communications Commission has rejected ZTE’s petition to remove its designation as a “national security threat.” This means that American companies will continue to be barred from using the FCC’s $8.3 billion Universal Service Fund to buy equipment and services from ZTE.
The Universal Service Fund includes subsidies to build telecommunication infrastructure across the United States, especially for low-income or high-cost areas, rural telehealth services, and schools and libraries. The FCC issued an order on June 30 banning U.S. companies from using the fund to buy technology from Huawei and ZTE, claiming that both companies have close ties with the Chinese Communist Party and military.
Many smaller carriers rely on Huawei and ZTE, two of the world’s biggest telecom equipment providers, for cost-efficient technology. After surveying carriers, the FCC estimated in September that replacing Huawei and ZTE equipment would cost more than $1.8 billion.
Under the Secure and Trusted Communications Networks Act, passed by Congress this year, most of that amount would be eligible for reimbursements under a program referred to as “rip and replace.” But the program has not been funded by Congress yet, despite bipartisan support.
In today’s announcement about ZTE, chairman Ajit Pai also said the FCC will vote on rules to implement the reimbursement program at its next Open Meeting, scheduled to take place on December 10.
The FCC passed its order barring companies deemed national security threats from receiving money from the Universal Service Fund in November 2019. Huawei fought back by suing the FCC over the ban, claiming it exceeded the agency’s authority and violated the Constitution.
TechCrunch has contacted ZTE for comment.
Data platform Splunk continues to make acquisitions as it works to build out its recently launched observability platform. After acquiring Plumbr and Rigor last month, the company today announced that it has acquired Flowmill, a Palo Alto-based network observability startup. Flowmill focuses on helping its users find network performance issues in their cloud infrastructure in real time and measure their traffic by service to help them control cost.
Like so many other companies in this space now, Flowmill utilizes eBPF, a relatively new Linux kernel capability for running sandboxed programs inside the kernel without changing kernel code or loading kernel modules. That makes it ideal for low-overhead monitoring of applications.
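For a flavor of what that looks like, here is a minimal eBPF program sketch in the libbpf style, counting process executions from inside the kernel. It is illustrative only: building it requires clang plus kernel and libbpf headers, and a userspace loader to attach it; network-observability tools like Flowmill hook lower-level networking events in a similar way.

```c
// Illustrative eBPF sketch (libbpf style): count execve() calls in-kernel.
// The BPF verifier checks this code before it runs, which is what makes
// eBPF safe to load without kernel changes or kernel modules.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} exec_count SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int count_execve(void *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&exec_count, &key);
    if (val)
        __sync_fetch_and_add(val, 1);  // shared counter read from userspace
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

A userspace agent would periodically read the `exec_count` map to export the metric, with the kernel-side cost limited to one map update per event.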
“Observability technology is rapidly increasing in both sophistication and ability to help organizations revolutionize how they monitor their infrastructure and applications. Flowmill’s innovative NPM solution provides real-time observability into network behavior and performance of distributed cloud applications, leveraging extended Berkeley Packet Filter (eBPF) technologies,” said Tim Tully, Splunk’s chief technology officer. “We’re excited to bring Flowmill’s visionary NPM technology into our Observability Suite as Splunk continues to deliver best-in-class observability capabilities to our customers.”
While Splunk has made some larger acquisitions, including its $1.05 billion purchase of SignalFx, it’s building out its observability platform by picking up small startups that offer very specific capabilities. It could probably build all of these features in-house, but the company clearly believes that it has to move fast to get a foothold in this growing market as enterprises look for new observability tools as they modernize their tech stacks.
“Flowmill’s approach to building systems that support full-fidelity, real-time, high-cardinality ingestions and analysis aligns well with Splunk’s vision for observability,” said Flowmill CEO Jonathan Perry. “We’re thrilled to join Splunk and bring eBPF, next-generation NPM to the Splunk Observability Suite.”
The companies didn’t disclose the purchase price, but Flowmill previously raised funding from Amplify, Felicis Ventures, WestWave Capital and UpWest.
Trump’s election denialism saw him retaliate in a way that isn’t just putting the remainder of his presidency in jeopardy; it’s already putting the next administration in harm’s way.
In a stunning display of retaliation, Trump fired CISA director Chris Krebs last week after declaring that there was “no evidence that any voting system deleted or lost votes, changed votes or was in any way compromised,” a direct contradiction to the conspiracy-fueled fever dreams of the president who repeatedly claimed, without evidence, that the election had been hijacked by the Democrats. CISA is left distracted by disarray, with multiple senior leaders leaving their posts — some walked, some were pushed — only for the next likely chief to stumble before he even starts because of concerns with his security clearance.
Until yesterday, Biden’s presidential transition team was stuck in cybersecurity purgatory because the incumbent administration refused to trigger the law that grants the incoming team access to government resources, including cybersecurity protections. That’s left the incoming president exposed to ongoing cyber threats, all while being shut out from classified briefings that describe those threats in detail.
As Biden builds his team, Silicon Valley is also gearing up for a change in government — and temperament. But don’t expect too much of the backlash to change. Antitrust allegations, privacy violations and net neutrality remain hot-button issues, and the tech titans resorting to cheap “charm offensives” are likely to face the music under the Biden administration — whether they like it or not.
Here’s more from the week.
Apple and Facebook are back in the ring, fighting over which company is a bigger existential threat to privacy. In a letter to a privacy rights group, Apple said its new anti-tracking feature will launch next year, which will give users the choice of blocking in-app tracking, a move that’s largely expected to cause havoc to the online advertising industry and data brokers.
Given an explicit option between being tracked and not, as the feature will do, most are expected to decline.
Apple’s letter specifically called out Facebook for showing a “disregard for user privacy.” Facebook, which made more than 98% of its global revenue last year from advertising, took its own potshot back at Apple, claiming the iPhone maker was “using their dominant market position to self-preference their own data collection, while making it nearly impossible for their competitors to use the same data.”
Australia’s intelligence agencies have been caught “incidentally” collecting data from the country’s COVIDSafe contact-tracing app during the first six months of its launch, a government watchdog has found.
The report, published Monday by the Australian government’s inspector general for the intelligence community, which oversees the government’s spy and eavesdropping agencies, said the app data was scooped up “in the course of the lawful collection of other data.”
But the watchdog said that there was “no evidence” that any agency “decrypted, accessed or used any COVID app data.”
Incidental collection is a common term used by spies to describe data that was not deliberately targeted but was swept up as part of a wider collection effort. This kind of collection isn’t accidental; it’s a consequence of, for example, spy agencies tapping into fiber optic cables, which carry an enormous firehose of data. An Australian government spokesperson told one outlet, which first reported the news, that incidental collection can also happen as a result of the “execution of warrants.”
The report did not say when the incidental collection stopped, but noted that the agencies were “taking active steps to ensure compliance” with the law, and that the data would be “deleted as soon as practicable,” without setting a firm date.
For some, fears that a government spy agency could access COVID-19 contact-tracing data represented the worst possible outcome.
Since the start of the COVID-19 pandemic, countries — and states in places like the U.S. — have rushed to build contact-tracing apps to help prevent the spread of the virus. But these apps vary wildly in terms of functionality and privacy.
Most have adopted the more privacy-friendly approach of using Bluetooth to detect when users have come into contact with someone who has the virus. Many have chosen to implement the Apple-Google system, which hundreds of academics have backed. But others, like Israel and Pakistan, use more privacy-invasive techniques, like tracking location data, which governments can also use to monitor a person’s whereabouts. In Israel’s case, the tracking was so controversial that the courts shut it down.
Australia’s intelligence watchdog did not say specifically what data was collected by the spy agencies. The app uses Bluetooth and not location data, but the app requires the user to upload some personal information — like their name, age, postal code and phone number — to allow the government’s health department to contact those who may have come into contact with an infected person.
Australia has seen more than 27,800 confirmed coronavirus cases and more than 900 deaths since the start of the pandemic.
Finally. It only took almost three weeks, but the Biden-Harris transition has officially begun.
On Monday, the General Services Administration gave the green light for the Biden-Harris team to transition from political campaign to government administration, allowing the team to receive government resources like office space, but also classified briefings and secure computers. And with it comes a shiny new web domain.
Transitioning is an obscure part of the law that’s rarely discussed, in large part because outgoing governments and incoming administrations largely get on and try to maintain continuity of government through a peaceful transition of power. The process is formally triggered by the General Services Administration, the lesser-known federal agency tasked with the basic functioning of government, and allows the incoming administration to receive funds, tools, and resources to prepare for entering government.
But this time around, the agency’s head Emily Murphy had been reluctant to trigger the formal transition period after the Trump campaign filed a number of lawsuits challenging the election.
Murphy finally approved the transition on Monday after Michigan certified its election results.
Up until now, the Biden-Harris team had used buildbackbetter.com to host its transition website. Now it’s hosted at buildbackbetter.gov, a departure from the ptt.gov domain used by the incoming Obama-Biden administration in 2008.
The Wall Street Journal reported last week that until now the Biden-Harris team was using a Google Workspace for email and collaboration, secured with hardware security keys that staff need to log into their accounts. That setup might suffice for an enterprise, but had security experts worried that the lack of government cybersecurity support could make the camp more vulnerable to attacks.
As for the domain, which you might not think much about: the shift to a .gov domain marks a significant step forward in the camp’s cybersecurity efforts. Government domains, hosted on .gov, are hardened against domain hijacking and spoofing. In simple terms, they’re far more resilient than regular web hosting services.
Biden tweeted out the domain marking the change.
Twitter is the latest social media site to let users experiment with posting disappearing content. Fleets, as Twitter calls them, allow its mobile users to post short stories, like photos or videos with overlaid text, that are set to vanish after 24 hours.
But a bug meant that fleets weren’t deleting properly and could still be accessed long after 24 hours had expired. Details of the bug were posted in a series of tweets on Saturday, less than a week after the feature launched.
full disclosure: scraping fleets from public accounts without triggering the read notification
the endpoint is: https://t.co/332FH7TEmN
— cathode gay tube (@donk_enby) November 20, 2020
The bug effectively allowed anyone to access and download a user’s fleets without triggering a notification that the user’s fleet had been read and by whom. The implication is that this bug could be abused to archive a user’s fleets after they expire.
The researcher used an app designed to interact with Twitter’s back-end systems via its developer API. The server returned a list of fleets, each with its own direct URL, which, when opened in a browser, would load the fleet as an image or a video. But even after the 24 hours elapsed, the server would still return links to fleets that had already disappeared from view in the Twitter app.
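To make the behavior concrete, the sketch below simulates the mismatch the researcher described: the app hides fleets older than 24 hours client-side, while the server (here a canned response) keeps returning media URLs for expired ones. The response shape and URLs are hypothetical; this does not call Twitter.

```python
# Sketch of why the bug mattered: expiry was enforced in the app's view,
# not by the server, so direct media URLs kept working past 24 hours.
from datetime import datetime, timedelta, timezone

def visible_in_app(fleets, now):
    """The app's view: only fleets posted within the last 24 hours."""
    return [f for f in fleets if now - f["posted"] < timedelta(hours=24)]

now = datetime(2020, 11, 21, tzinfo=timezone.utc)
server_response = [  # hypothetical API payload; the server returns both
    {"url": "https://video.example/fleet1.mp4", "posted": now - timedelta(hours=2)},
    {"url": "https://video.example/fleet2.mp4", "posted": now - timedelta(hours=30)},
]

app_view = visible_in_app(server_response, now)
print(len(server_response), len(app_view))  # server serves 2, app shows 1
```

Anyone talking to the API directly, rather than through the app, would see both URLs, including the one for the "expired" fleet.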
When reached, a Twitter spokesperson said a fix was on the way. “We’re aware of a bug accessible through a technical workaround where some Fleets media URLs may be accessible after 24 hours. We are working on a fix that should be rolled out shortly.”
Twitter acknowledged that the fix means fleets should now expire properly, but it said it won’t delete a fleet from its servers for up to 30 days — and that it may hold onto fleets for longer if they violate its rules. We checked that we could still load fleets from their direct URLs even after they expire.
Fleet with caution.
The security sector is ever frothy and acquisitive. Just last week Palo Alto Networks grabbed Expanse for $800 million. Today it was FireEye’s turn, snagging Respond Software, a company that helps customers investigate and understand security incidents while reducing the need for highly trained and scarce security analysts. The deal has closed, according to the company.
FireEye had its eye on Respond’s Analyst product, which it plans to fold into its Mandiant Solutions platform. Like many companies today, FireEye is focused on using machine learning to bolster its solutions and bring a level of automation to sorting through the data, finding real issues and weeding out false positives. The acquisition gives it a quick influx of machine learning-fueled software.
FireEye sees a product that can help add speed to its existing tooling. “With Mandiant’s position on the front lines, we know what to look for in an attack, and Respond’s cloud-based machine learning productizes our expertise to deliver faster outcomes and protect more customers,” FireEye CEO Kevin Mandia said in a statement announcing the deal.
Mike Armistead, CEO at Respond, wrote in a company blog post that today’s acquisition marks the end of a 4-year journey for the startup, but it believes it has landed in a good home with FireEye. “We are proud to announce that after many months of discussion, we are becoming part of the Mandiant Solutions portfolio, a solution organization inside FireEye,” Armistead wrote.
While FireEye was at it, it also announced a $400 million investment from Blackstone Tactical Opportunities fund and ClearSky (an investor in Respond), giving the public company a new influx of cash to make additional moves like the acquisition it made today.
It didn’t come cheap. “Under the terms of its investment, Blackstone and ClearSky will purchase $400 million in shares of a newly designated 4.5% Series A Convertible Preferred Stock of FireEye (the “Series A Preferred”), with a purchase price of $1,000 per share. The Series A Preferred will be convertible into shares of FireEye’s common stock at a conversion price of $18.00 per share,” the company explained in a statement. The stock closed at $14.24 today.
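The dilution math behind those terms is straightforward to check, assuming conversion at the stated prices:

```python
# Sanity-checking the announced terms: $400M of preferred at $1,000/share,
# convertible into common stock at $18.00/share, vs. a $14.24 close.
investment = 400_000_000
purchase_price = 1_000
conversion_price = 18.00
close = 14.24

preferred_shares = investment // purchase_price           # preferred shares sold
common_per_preferred = purchase_price / conversion_price  # common per preferred share
common_if_converted = investment / conversion_price       # total common on conversion
premium = conversion_price / close - 1                    # conversion premium vs close

print(preferred_shares, round(common_per_preferred, 2),
      round(common_if_converted), f"{premium:.1%}")
```

So Blackstone and ClearSky bought 400,000 preferred shares, each convertible into roughly 55.6 common shares, about 22.2 million common shares in all, at a price roughly 26% above where the stock closed.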
Respond, which was founded in 2016, raised $32 million including a $12 million Series A in 2017 led by CRV and Foundation Capital and a $20 million Series B led by ClearSky last year, according to Crunchbase data.
Facebook has today filed another lawsuit over violations of its terms of service. In this case, the company has sued Ensar Sahinturk, a Turkish national who, according to court filings, operated a network of Instagram clone sites. Facebook says Sahinturk used automation software to scrape Instagram users’ public profiles, photos, and videos from over 100,000 accounts without permission; this data was then published on his network of websites.
In the filing, Facebook says it became aware of the clone website network a year ago, in November 2019. It learned that the defendant controlled a number of domains, many with names similar to Instagram’s, including jolygram.com, imggram.com, imggram.net, finalgram.com, pikdo.net, and ingram.ws. The first in that list, jolygram.com, had been in use since August 2017; the others were registered in later years as the network expanded. Finalgram.com was the most recent to be put to use, and has been in operation since October 2019.
Facebook doesn’t say how large these sites were, in terms of visitors, but described the clone network to TechCrunch as having “voluminous traffic.”
In addition to what Facebook claims are trademark violations associated with these domains, the sites were populated with data pulled from Instagram’s website through automated scraping — that is, via specialized software that poses as a human user, rather than identifying itself as a bot, in order to access data.
The defendant was able to evade Instagram’s security measures against automated tools of this nature by making it look like the requests to Facebook’s servers were coming from a person using the official Instagram app, the complaint states.
The defendant programmed his scraping software to create and use thousands of fake Instagram accounts that would mimic actions real, legitimate users of the Instagram app could have taken. Facebook said the number of fake accounts used daily could be very high: on April 17, 2020, for example, the defendant used over 7,700 accounts to make automated requests to Facebook servers; on April 22, 2020, he used over 9,000.
On the clone websites, users were able to enter any Instagram username and then view that user’s public profile, photos, videos, Stories, hashtags, and location. The clone sites also allowed visitors to download the pictures and videos that had been posted on Instagram, a feature Instagram doesn’t directly offer. (Its official website and app don’t offer a “save” button.)
Facebook attempted to protect against these various terms-of-service violations in 2019, when it disabled approximately 30,000 fake Instagram accounts operated by the defendant. It also sent a series of cease-and-desist letters and shut down further Instagram and Facebook accounts, including one Facebook Page belonging to the defendant. The defendant, however, claimed that he didn’t operate jolygram.com and that it was merely registered under his name, yet he also said he had shut it down.
Facebook says the resources it has spent investigating and attempting to resolve the issues with the defendant’s operations have topped $25,000, and it is asking for damages to be determined at trial.
The lawsuit is now one of many Facebook has filed in the years following the Cambridge Analytica scandal, in which millions of Facebook users’ data was harvested without their permission. Facebook has since sued analytics firms for misusing its data, developers who violated its terms to sell fake “Likes,” and other marketing intelligence operations. However, the company tells TechCrunch this is its first lawsuit against Instagram clone websites.
For the past year and a half, Google has been rolling out its next-generation messaging service to Android users to replace the old, clunky, and insecure SMS text messaging standard. Now the company says that rollout is complete, and that it plans to bring end-to-end encryption to Android messages next year.
Google’s Rich Communication Services (RCS) is Android’s answer to Apple’s iMessage, bringing typing indicators, read receipts, and the other features you’d expect from most messaging apps these days.
In a blog post Thursday, Google said it plans to roll out end-to-end encryption — starting with one-on-one conversations — leaving open the possibility of end-to-end encrypted group chats. It’ll become available to beta testers, who can sign up here, beginning later in November and continue into the new year.
End-to-end encryption prevents anyone — even Google — from reading messages as they travel between the sender and the recipient.
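As a toy model of that trust boundary, the sketch below encrypts a message so that a relay in the middle sees only ciphertext. The XOR keystream cipher here is deliberately simple and not secure, and real deployments such as the Signal protocol also negotiate keys between devices; this only illustrates who can read what.

```python
# Toy end-to-end model: only the endpoints hold `key`, so the relay
# (standing in for Google's servers) stores and forwards bytes it
# cannot read. NOT a real cipher; illustration of the trust model only.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

key = secrets.token_bytes(32)   # known only to sender and recipient
msg = b"meet at noon"
ciphertext = encrypt(key, msg)  # all the relay ever sees
print(decrypt(key, ciphertext).decode())
```

The relay can deliver, delay, or drop the ciphertext, but without the endpoints' key it learns nothing about the content, which is exactly the property end-to-end encryption is meant to guarantee.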
Google dipped its toes into the end-to-end encrypted messaging space in 2016 with the launch of Allo, an app that immediately drew criticism from security experts for not enabling the security feature by default. Two years later, Google killed off the project altogether.
This time around, Google has learned its lesson. Android messages will default to end-to-end encryption once the feature becomes available, and won’t revert to SMS unless a user in the conversation loses or disables RCS.