Today — April 22nd, 2021

Window Snyder’s new startup Thistle Technologies raises $2.5M seed to secure IoT devices

By Zack Whittaker

The Internet of Things has a security problem. The past decade has seen wave after wave of new internet-connected devices, from sensors through to webcams and smart home tech, often manufactured in bulk but with little — if any — consideration to security. Worse, many device manufacturers make no effort to fix security flaws, while others simply leave out the software update mechanisms needed to deliver patches altogether.

That leaves an entire swath of devices insecure and unpatchable, destined to be thrown out when they break down or are inevitably hacked.

Security veteran Window Snyder thinks there is a better way. Her new startup, Thistle Technologies, is backed with $2.5 million in seed funding from True Ventures with the goal of helping IoT manufacturers reliably and securely deliver software updates to their devices.

Snyder founded Thistle last year and named it after the flowering plant whose sharp prickles deter animals from eating it. “It’s a defense mechanism,” Snyder told TechCrunch, a name that’s fitting for a defensive technology company. The startup aims to help device manufacturers that lack the personnel or resources to integrate update mechanisms into their devices’ software, so those devices can receive security updates and better defend against security threats.
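
Thistle hasn’t published the technical details of its system, but the heart of any such update mechanism is a device-side check that accepts only firmware signed by the vendor. A minimal sketch of that check, assuming Ed25519 signing and Python’s cryptography package (both assumptions on our part, not Thistle’s published design):

```python
# Hypothetical sketch of a device-side update check; Thistle's actual
# design has not been published. Requires the `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_update_authentic(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
    """Accept a firmware image only if it verifies against the vendor's key."""
    try:
        Ed25519PublicKey.from_public_bytes(vendor_pubkey).verify(signature, image)
        return True
    except InvalidSignature:
        return False
```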

“We’re building the means so that they don’t have to do it themselves. They want to spend the time building customer-facing features anyway,” said Snyder. Prior to founding Thistle, Snyder worked in senior cybersecurity positions at Apple, Intel, and Microsoft, and also served as chief security officer at Mozilla, Square, and Fastly.

Thistle lands on the security scene at a time when IoT needs it most. Botnet operators are known to scan the internet for devices with weak default passwords and hijack their internet connections to pummel victims with floods of internet traffic, knocking entire websites and networks offline. In 2016, a record-breaking distributed denial-of-service attack launched by the Mirai botnet on internet infrastructure giant Dyn knocked some of the biggest websites — Shopify, SoundCloud, Spotify, Twitter — offline for hours. Mirai had ensnared thousands of IoT devices into its network at the time of the attack.

Other malicious hackers target IoT devices as a way to get a foot into a victim’s network, allowing them to launch attacks or plant malware from the inside.

Since device manufacturers have done little to solve their security problems among themselves, lawmakers are looking at legislating to curb some of the more egregious security mistakes made by device manufacturers, like using default — and often unchangeable — passwords and selling devices with no way to deliver security updates.

California paved the way after passing an IoT security law in 2018, with the U.K. following shortly after in 2019. The U.S. has no federal law governing basic IoT security standards.

Snyder said the push to introduce IoT cybersecurity laws could be “an easy way for folks to get into compliance” without having to hire fleets of security engineers. Having an update mechanism in place also helps to keep IoT devices around for longer — potentially for years longer — simply by being able to push fixes and new features.

“To build the infrastructure that’s going to allow you to continue to make those devices resilient and deliver new functionality through software, that’s an incredible opportunity for these device manufacturers. And so I’m building a security infrastructure company to support that security needs,” she said.

With the seed round in the bank, Snyder said the company is focused on hiring device and back-end engineers, product managers, and building new partnerships with device manufacturers.

Phil Black, co-founder of True Ventures — Thistle’s seed round investor — described the company as “an astute and natural next step in security technologies.” He added: “Window has so many of the qualities we look for in founders. She has deep domain expertise, is highly respected within the security community, and she’s driven by a deep passion to evolve her industry.”

Apple and Google pressed in antitrust hearing on whether app stores share data with product development teams

By Sarah Perez

In today’s antitrust hearing in the U.S. Senate, Apple and Google representatives were questioned on whether they have a “strict firewall” or other internal policies in place that prevent them from leveraging the data of third-party businesses operating on their app stores to inform the development of their own competitive products. Apple, in particular, was called out by Senator Richard Blumenthal (D-CT) for the practice of copying other apps; he said the practice had become so common that it earned a nickname within Apple’s developer community: “sherlocking.”

The nickname comes from Sherlock, Apple’s search tool in the early 2000s (the practice has its own Wikipedia entry under “Sherlock (software)”). A third-party developer, Karelia Software, created an alternative tool called Watson. Following the success of Karelia’s product, Apple built Watson’s functionality into its own search tool, and Watson was effectively put out of business. “Sherlocking” later became shorthand for any time Apple copies an idea from a third-party developer in a way that threatens, or even destroys, that developer’s business.

Over the years, developers have claimed Apple “sherlocked” a number of apps, including Konfabulator (desktop widgets), iPodderX (podcast manager), Sandvox (app for building websites) and Growl (a notification system for Mac OS X), and, in more recent years, F.lux (a blue-light reduction tool for screens), Duet and Luna (apps that make the iPad a secondary display), as well as various screen-time-management tools. Now Tile claims Apple has also unfairly entered its market with AirTag.

During his questioning, Blumenthal asked Apple and Google’s representatives at the hearing — Kyle Andeer, Apple’s chief compliance officer, and Wilson White, Google’s senior director of Public Policy & Government Relations, respectively — whether they employed any sort of “firewall” between their app stores and their business strategy.

Andeer somewhat dodged the question, saying, “Senator, if I understand the question correctly, we have separate teams that manage the App Store and that are engaged in product development strategy here at Apple.”

Blumenthal then clarified what he meant by “firewall.” He explained that it doesn’t mean whether or not there are separate teams in place, but whether there’s an internal prohibition on sharing data between the App Store and the people who run Apple’s other businesses.

Andeer then answered, “Senator, we have controls in place.”

He went on to note that over the past 12 years, Apple has only introduced “a handful of applications and services,” and in every instance, there are “dozens of alternatives” on the App Store. And, sometimes, the alternatives are more popular than Apple’s own product, he noted.

“We don’t copy. We don’t kill. What we do is offer up a new choice and a new innovation,” Andeer stated.

His argument may hold true when there are strong rivalries, like Spotify versus Apple Music, or Netflix versus Apple TV+, or Kindle versus Apple Books. But it’s harder to stretch it to areas where Apple makes smaller enhancements — like when Apple introduced Sidecar, a feature that allowed users to make their iPad a secondary display. Sidecar ended the need for a third-party app, after apps like Duet and Luna first proved the market.

Another example was when Apple built screen-time controls into its iOS software, but didn’t provide the makers of third-party screen-time apps with an API so consumers could use their preferred apps to configure Apple’s Screen Time settings via the third-party’s specialized interface or take advantage of other unique features.

Blumenthal said he interpreted Andeer’s answer to the “data firewall” question as a “no.”

Posed the same question, Google’s representative, White, said his understanding was that Google had “data access controls in place that govern how data from our third-party services are used.”

Blumenthal pressed him to clarify if this was a “firewall,” meaning, he clarified again, “do you have a prohibition against access?”

“We have a prohibition against using our third-party services to compete directly with our first-party services,” White said, adding that Google has “internal policies that govern that.”

The senator said he would follow up on this matter with written questions, as his time expired.

Before yesterday

Data scientists: Bring the narrative to the forefront

By Ram Iyer
Peter Wang Contributor
Peter Wang is CEO and co-founder of data science platform Anaconda. He’s also a co-creator of the PyData community and conferences, and a member of the board at the Center for Humane Technology.

By 2025, 463 exabytes of data will be created each day, according to some estimates. (For perspective, one exabyte of storage could hold 50,000 years of DVD-quality video.) It’s now easier than ever to translate physical and digital actions into data, and businesses of all types have raced to amass as much data as possible in order to gain a competitive edge.
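
The DVD comparison is easy to sanity-check. Assuming DVD-quality video runs at roughly 5 Mbit/s (our assumption; the article doesn’t state a bitrate):

```python
# Back-of-the-envelope check of "one exabyte ~ 50,000 years of DVD video".
EXABYTE = 10**18                       # bytes
bytes_per_second = 5_000_000 / 8       # ~5 Mbit/s of DVD-quality video
seconds = EXABYTE / bytes_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")           # ~50,700 years
```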

However, in our collective infatuation with data (and obtaining more of it), what’s often overlooked is the role that storytelling plays in extracting real value from data.

The reality is that data by itself is insufficient to really influence human behavior. Whether the goal is to improve a business’ bottom line or convince people to stay home amid a pandemic, it’s the narrative that compels action, rather than the numbers alone. As more data is collected and analyzed, communication and storytelling will become even more integral in the data science discipline because of their role in separating the signal from the noise.

Yet this can be an area where data scientists struggle. In Anaconda’s 2020 State of Data Science survey of more than 2,300 data scientists, nearly a quarter of respondents said that their data science or machine learning (ML) teams lacked communication skills. This may be one reason why roughly 40% of respondents said they were able to effectively demonstrate business impact “only sometimes” or “almost never.”

The best data practitioners must be as skilled in storytelling as they are in coding and deploying models — and yes, this extends beyond creating visualizations to accompany reports. Here are some recommendations for how data scientists can situate their results within larger contextual narratives.

Make the abstract more tangible

Ever-growing datasets help machine learning models better understand the scope of a problem space, but more data does not necessarily help with human comprehension. Even for the most left-brain of thinkers, it’s not in our nature to understand large abstract numbers or things like marginal improvements in accuracy. This is why it’s important to include points of reference in your storytelling that make data tangible.

For example, throughout the pandemic, we’ve been bombarded with countless statistics around case counts, death rates, positivity rates, and more. While all of this data is important, tools like interactive maps and conversations around reproduction numbers are more effective than massive data dumps in terms of providing context, conveying risk, and, consequently, helping change behaviors as needed. In working with numbers, data practitioners have a responsibility to provide the necessary structure so that the data can be understood by the intended audience.

Facebook brings software subscriptions to the Oculus Quest

By Lucas Matney

Subscription pricing is landing on Facebook’s Oculus Store, giving VR developers another way to monetize content on Facebook’s Oculus Quest headset.

Developers will be allowed to add premium subscriptions to paid or free apps, with Facebook presumably taking its standard percentage fee at the same time. Oculus and the developers on its platform have been riding the success of the company’s recent Quest 2 headset; Facebook hasn’t detailed sales numbers, but it has noted that the months-old $299 headset has already outsold every other Oculus headset to date.

Subscription pricing is an unsurprising development, but it signals that some developers believe they have a loyal enough user base to bring in sizable recurring revenue. Facebook shipped the first Oculus Rift just over five years ago, and it has been a zig-zagging path to early consumer success since then. A big challenge for the company has been building a dynamic developer ecosystem that offers something engaging to users while ensuring that VR devs can operate sustainably.

At launch, there are already a few developers debuting subscriptions for a number of different app types, spanning exercise, meditation, social, productivity and DJing. In addition to subscriptions, the new monetization path also allows developers to let users try out paid apps on a free trial basis.

The central question is how many Quest users engage with their devices enough to justify a number of monthly subscriptions, but for developers looking to monetize their hardcore users, this is another capability they likely felt was missing from the Oculus Store.

Grocery startup Mercato spilled years of data, but didn’t tell its customers

By Zack Whittaker

A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.

A person with knowledge of the incident told TechCrunch that it happened in January, after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.

The company fixed the data spill, but has not yet alerted its customers.

Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.

TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names, email addresses, home addresses, and order details. Each record also included the IP address of the device the customer used to place the order.

The data set also included the personal data and order details of company executives.

Since storage buckets on Amazon’s cloud are private by default, it’s not clear how the security lapse happened, or when the company learned of the exposure.
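
Because the default is private, exposures like this usually trace back to an explicit policy or ACL change, which is also straightforward to audit. A hedged boto3 sketch for checking your own account (illustrative only; not how Mercato’s systems were configured):

```python
# Sketch: flag S3 buckets that lack a full public-access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_public_access_block(Bucket=name)
        blocked = all(conf["PublicAccessBlockConfiguration"].values())
    except ClientError:
        blocked = False  # no public-access block configured at all
    print(name, "ok" if blocked else "REVIEW: may be publicly accessible")
```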

Companies are required to disclose data breaches or security lapses to state attorneys general, but no notices have been published where they are required by law, such as in California. The data set contained records on more than 1,800 California residents, more than three times the number needed to trigger mandatory disclosure under the state’s data breach notification laws.

It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.

In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.

“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.



Google’s FeedBurner moves to a new infrastructure but loses its email subscription service

By Frederic Lardinois

Google today announced that it is moving FeedBurner to a new infrastructure but also deprecating its email subscription service.

If you’re an internet user of a certain age, chances are you used Google’s FeedBurner to manage the RSS feeds of your personal blogs and early podcasts at some point. During the Web 2.0 era, it was the de facto standard for feed management and analytics, after all. Founded in 2004, with Dick Costolo as one of its co-founders (before he became Twitter’s CEO in 2010), it was acquired by Google in 2007.

Ever since, FeedBurner has lingered in an odd kind of limbo. While Google had no qualms about shutting down popular services like Google Reader in favor of its ill-fated social experiments like Google+, FeedBurner just kept burning feeds day in and day out, even as Google slowly deprecated some parts of the service, most notably its advertising integrations.

I don’t know that anybody spent a lot of time thinking about the service, and RSS has slowly (and sadly) fallen into obscurity, yet FeedBurner was probably easy enough to maintain that Google kept it going. And despite everything, shutting it down would probably break enough tools for publishers to create quite an uproar. The TechCrunch RSS feed, to which you are surely subscribed in your desktop RSS reader, is http://feeds.feedburner.com/TechCrunch/, after all.

So here we are, 14 years later, and Google today announced that it is “making several upcoming changes to support the product’s next chapter.” It’s moving the service to a new, more stable infrastructure.

But in July, it is also shutting down some non-core features that don’t directly involve feed management, most importantly the FeedBurner email subscription service that allowed you to get emailed alerts when a feed updates. Feed owners will be able to download their email subscriber lists (and will be able to do so after July, too). With that, Blogger’s FollowByEmail widget will also be deprecated (and hey, did you start this day thinking you’d read about FeedBurner AND Blogger on TechCrunch without having to travel back to 2007?).

Google stresses that other core FeedBurner features will remain in place, but given the popularity of email newsletters, that’s a bit of an odd move.

Gay dating site Manhunt hacked, thousands of accounts stolen

By Zack Whittaker

Manhunt, a gay dating app that claims to have 6 million male members, has confirmed it was hit by a data breach in February after a hacker gained access to the company’s accounts database.

In a notice filed with the Washington attorney general’s office, Manhunt said the hacker “gained access to a database that stored account credentials for Manhunt users,” and “downloaded the usernames, email addresses and passwords for a subset of our users in early February 2021.”

The notice did not say how the passwords were scrambled, if at all, to prevent them from being read by humans. Passwords scrambled using weak algorithms can sometimes be recovered as plain text, allowing malicious hackers to break into users’ accounts.
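
The difference matters in practice: a fast, unsalted hash like MD5 can be guessed against at enormous speed, while a salted, deliberately slow key-derivation function makes each guess expensive. An illustration (the notice doesn’t say which approach Manhunt used):

```python
# Weak vs. stronger password storage, for illustration only.
import hashlib
import os

password = b"example-password"

# Weak: fast and unsalted, so identical passwords hash identically and
# commodity hardware can test billions of guesses per second.
weak = hashlib.md5(password).hexdigest()

# Stronger: a unique per-user salt plus a slow, memory-hard KDF.
salt = os.urandom(16)
strong = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```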

Following the breach, Manhunt force-reset account passwords and began alerting users in mid-March. Manhunt did not say what percentage of its users had their data stolen or how the data breach happened, but said that more than 7,700 Washington state residents were affected.

The company’s attorneys did not reply to an email requesting comment.

But questions remain about how Manhunt handled the breach. In March, the company tweeted that, “At this time, all Manhunt users are required to update their password to ensure it meets the updated password requirements.” The tweet did not say that user accounts had been stolen.

Manhunt was launched in 2001 by Online-Buddies Inc., which also offered gay dating app Jack’d before it was sold to Perry Street in 2019 for an undisclosed sum. Just months before the sale, Jack’d had a security lapse that exposed users’ private photos and location data.

Dating sites store some of the most sensitive information on their users, and are frequently a target of malicious hackers. In 2015, Ashley Madison, a dating site that encouraged users to have an affair, was hacked, exposing names and postal and email addresses. Several people died by suicide after the stolen data was posted online. A year later, dating site AdultFriendFinder was hacked, exposing more than 400 million user accounts.

In 2018, same-sex dating app Grindr made headlines for sharing users’ HIV status with data analytics firms.

In other cases, poor security — in some cases none at all — led to data spills involving some of the most sensitive data. In 2019, Rela, a popular dating app for gay and queer women in China, left a server unsecured with no password, allowing anyone to access sensitive data — including sexual orientation and geolocation — on more than 5 million app users. Months later, Jewish dating app JCrush exposed around 200,000 user records.


PlexTrac raises $10M Series A round for its collaboration-centric security platform

By Frederic Lardinois

PlexTrac, a Boise, ID-based security service that aims to provide a unified workflow automation platform for red and blue teams, today announced that it has raised a $10 million Series A funding round led by Noro-Moseley Partners and Madrona Venture Group. StageDot0 ventures also participated in this round, which the company plans to use to build out its team and grow its platform.

With this new round, the company, which was founded in 2018, has now raised a total of $11 million, with StageDot0 leading its 2019 seed round.

PlexTrac CEO and President Dan DeCloss

“I have been on both sides of the fence, the specialist who comes in and does the assessment, produces that 300-page report and then comes back a year later to find that some of the critical issues had not been addressed at all. And not because the organization didn’t want to but because it was lost in that report,” PlexTrac CEO and President Dan DeCloss said. “These are some of the most critical findings for an entity from a risk perspective. By making it collaborative, both red and blue teams are united on the same goal we all share, to protect the network and assets.”

With an extensive career in security that included time as a penetration tester for Veracode and the Mayo Clinic, as well as senior information security advisor for Anthem, among other roles, DeCloss has quite a bit of first-hand experience that led him to found PlexTrac. Specifically, he believes that it’s important to break down the wall between offense-focused red teams and defense-centric blue teams.


“Historically there has been more of the cloak and dagger relationship, but those walls are breaking down – and rightfully so, there isn’t that much of that mentality today – people recognize they are on the same mission whether they are an internal security team or an external team,” he said. “With the PlexTrac platform the red and blue teams have a better view into the other teams’ tactics and techniques – and it makes the whole process into an educational exercise for everyone.”

At its core, PlexTrac makes it easier for security teams to produce their reports — and hence frees them up to actually focus on ‘real’ security work. To do so, the service integrates with most of the popular scanners, like Qualys and Veracode, but also with tools like ServiceNow and Jira, to help teams coordinate their workflows. All the data flows into real-time reports that help teams monitor their security posture. The service also features a dedicated tool, WriteupsDB, for managing reusable write-ups to help teams deliver consistent reports for a variety of audiences.
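
PlexTrac hasn’t published its data model, but conceptually this amounts to normalizing scanner findings and pushing them into trackers the team already uses. A rough sketch of that step against Jira’s REST API (the instance URL, project key, and field layout are all placeholders, not PlexTrac’s integration code):

```python
# Hypothetical sketch: a normalized security finding becoming a Jira issue.
import requests

finding = {
    "title": "Outdated OpenSSL on web tier",
    "severity": "high",
    "source": "qualys",    # scanner that produced the finding
    "writeup": "WU-114",   # reusable write-up, in the spirit of WriteupsDB
}

resp = requests.post(
    "https://example.atlassian.net/rest/api/2/issue",
    auth=("user@example.com", "API_TOKEN"),
    json={"fields": {
        "project": {"key": "SEC"},
        "summary": f"[{finding['severity'].upper()}] {finding['title']}",
        "description": f"Imported from {finding['source']} ({finding['writeup']})",
        "issuetype": {"name": "Bug"},
    }},
    timeout=30,
)
resp.raise_for_status()
```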

“Current tools for planning, executing, and reporting on security testing workflows are either nonexistent (manual reporting, spreadsheets, documents, etc…) or exist as largely incomplete features of legacy platforms,” Madrona’s S. Somasegar and Chris Picardo write in today’s announcement. “The pain point for security teams is real and PlexTrac is able to streamline their workflows, save time, and greatly improve output quality. These teams are on the leading edge of attempting to find and exploit vulnerabilities (red teams) and defend and/or eliminate threats (blue teams).”


China’s Xpeng in the race to automate EVs with lidar

By Rita Liao

Elon Musk famously said any company relying on lidar is “doomed.” Tesla instead believes automated driving functions are built on visual recognition and is even working to remove the radar. China’s Xpeng begs to differ.

Founded in 2014, Xpeng is one of China’s most celebrated electric vehicle startups and went public when it was just six years old. Like Tesla, Xpeng sees automation as an integral part of its strategy; unlike the American giant, Xpeng uses a combination of radar, cameras, high-precision maps powered by Alibaba, localization systems developed in-house, and most recently, lidar to detect and predict road conditions.

“Lidar will provide the 3D drivable space and precise depth estimation to small moving obstacles even like kids and pets, and obviously, other pedestrians and the motorbikes which are a nightmare for anybody who’s working on driving,” Xinzhou Wu, who oversees Xpeng’s autonomous driving R&D center, said in an interview with TechCrunch.

“On top of that, we have the usual radar which gives you location and speed. Then you have the camera which has very rich, basic semantic information.”

Xpeng is adding lidar to its mass-produced EV model P5, deliveries of which will begin in the second half of this year. The car, a family sedan, will later be able to drive from point A to B based on a navigation route set by the driver on highways and certain urban roads in China that are covered by Alibaba’s maps. An older model without lidar already enables assisted driving on highways.

The system, called Navigation Guided Pilot, is benchmarked against Tesla’s Navigate On Autopilot, said Wu. It can, for example, automatically change lanes, enter or exit ramps, overtake other vehicles, and handle another car’s sudden cut-in, a common sight in China’s complex road conditions.

“The city is super hard compared to the highway but with lidar and precise perception capability, we will have essentially three layers of redundancy for sensing,” said Wu.
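
Wu didn’t detail how the layers are combined, but the intuition behind redundant sensing reduces to a simple voting rule: no single sensor failure should create or suppress a detection on its own. A toy illustration (not Xpeng’s algorithm):

```python
# Toy redundancy check: require agreement from at least two of the
# three independent sensing layers before confirming an obstacle.
def obstacle_confirmed(lidar: bool, radar: bool, camera: bool) -> bool:
    return lidar + radar + camera >= 2

print(obstacle_confirmed(lidar=True, radar=True, camera=False))  # True
```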

By definition, NGP is an advanced driver-assistance system (ADAS) as drivers still need to keep their hands on the wheel and take control at any time (Chinese laws don’t allow drivers to be hands-off on the road). The carmaker’s ambition is to remove the driver, that is, reach Level 4 autonomy two to four years from now, but real-life implementation will hinge on regulations, said Wu.

“But I’m not worried about that too much. I understand the Chinese government is actually the most flexible in terms of technology regulation.”

The lidar camp

Musk’s disdain for lidar stems from the high costs of the remote sensing method that uses lasers. In the early days, a lidar unit spinning on top of a robotaxi could cost as much as $100,000, said Wu.

“Right now, [the cost] is at least two orders [of magnitude] lower,” said Wu. After 13 years with Qualcomm in the U.S., Wu joined Xpeng in late 2018 to work on automating the company’s electric cars. He currently leads a core autonomous driving R&D team of 500 and said the force will double in headcount by the end of this year.

“Our next vehicle is targeting the economy class. I would say it’s mid-range in terms of price,” he said, referring to the firm’s new lidar-powered sedan.

The lidar sensors powering Xpeng come from Livox, an affiliate of Shenzhen-based drone giant DJI that touts more affordable lidar. Xpeng’s headquarters is in the adjacent city of Guangzhou, about a 1.5-hour drive away.

Xpeng isn’t the only one embracing lidar. Nio, a Chinese rival to Xpeng targeting a more premium market, unveiled a lidar-powered car in January but the model won’t start production until 2022. Arcfox, a new EV brand of Chinese state-owned carmaker BAIC, recently said it would be launching an electric car equipped with Huawei’s lidar.

Musk recently hinted that Tesla may remove radar from production outright as it inches closer to pure vision based on camera and machine learning. The billionaire founder isn’t particularly a fan of Xpeng, which he alleged owned a copy of Tesla’s old source code.

In 2019, Tesla filed a lawsuit against Cao Guangzhi, alleging that the former Tesla engineer stole trade secrets and brought them to Xpeng. Xpeng has repeatedly denied any wrongdoing. Cao no longer works at Xpeng.

Supply challenges

While Livox claims to be an independent entity “incubated” by DJI, a source previously told TechCrunch that it is just a “team within DJI” positioned as a separate company. The intention to distance itself from DJI comes as no surprise, as the drone maker is on the U.S. government’s Entity List, which has cut a multitude of Chinese tech firms, including Huawei, off from key suppliers.

Other critical parts that Xpeng uses include NVIDIA’s Xavier system-on-a-chip computing platform and Bosch’s iBooster brake system. Globally, the ongoing semiconductor shortage is pushing auto executives to ponder future scenarios in which self-driving cars become even more dependent on chips.

Xpeng is well aware of supply chain risks. “Basically, safety is very important,” said Wu. “It’s more than the tension between countries around the world right now. Covid-19 is also creating a lot of issues for some of the suppliers, so having redundancy in the suppliers is some strategy we are looking very closely at.”

Taking on robotaxis

Xpeng could have easily tapped the flurry of autonomous driving solution providers in China, including Pony.ai and WeRide in its backyard of Guangzhou. Instead, Xpeng became their competitor, working on automation in-house and pledging to outrival the artificial intelligence startups.

“The availability of massive computing for cars at affordable costs and the fast dropping price of lidar is making the two camps really the same,” Wu said of the dynamics between EV makers and robotaxi startups.

“[The robotaxi companies] have to work very hard to find a path to a mass-production vehicle. If they don’t do that, two years from now, they will find the technology is already available in mass production and their value will become much less than today’s,” he added.

“We know how to mass-produce a technology up to the safety requirement and the quarantine required of the auto industry. This is a super high bar for anybody wanting to survive.”

Xpeng has no plans of going visual-only. Options of automotive technologies like lidar are becoming cheaper and more abundant, so “why do we have to bind our hands right now and say camera only?” Wu asked.

“We have a lot of respect for Elon and his company. We wish them all the best. But we will, as Xiaopeng [founder of Xpeng] said in one of his famous speeches, compete in China and hopefully in the rest of the world as well with different technologies.”

5G, coupled with cloud computing and cabin intelligence, will accelerate Xpeng’s path to achieve full automation, though Wu couldn’t share much detail on how 5G is used. When unmanned driving is viable, Xpeng will explore “a lot of exciting features” that go into a car when the driver’s hands are freed. Xpeng’s electric SUV is already available in Norway, and the company is looking to further expand globally.

FBI launches operation to remove backdoors from hacked Microsoft Exchange servers

By Zack Whittaker

A court in Houston has authorized an FBI operation to “copy and remove” backdoors from hundreds of Microsoft Exchange email servers in the United States, months after hackers used four previously undiscovered vulnerabilities to attack thousands of networks.

The Justice Department announced the operation, which it described as “successful,” on Tuesday.

In March, Microsoft discovered a new China state-sponsored hacking group — Hafnium — targeting Exchange servers run from company networks. The four vulnerabilities, when chained together, allowed the hackers to break into a vulnerable Exchange server and steal its contents. Microsoft fixed the vulnerabilities, but the patches did not remove the backdoors from servers that had already been breached. Within days, other hacking groups began hitting vulnerable servers with the same flaws to deploy ransomware.

The number of infected servers dropped as patches were applied. But hundreds of Exchange servers remained vulnerable because the backdoors are difficult to find and eliminate, the Justice Department said in a statement.

“This operation removed one early hacking group’s remaining web shells which could have been used to maintain and escalate persistent, unauthorized access to U.S. networks,” the statement said. “The FBI conducted the removal by issuing a command through the web shell to the server, which was designed to cause the server to delete only the web shell (identified by its unique file path).”

The FBI said it’s attempting to inform, via email, the owners of servers from which it removed the backdoors.

Assistant Attorney General John C. Demers said the operation “demonstrates the Department’s commitment to disrupt hacking activity using all of our legal tools, not just prosecutions.”

The Justice Department also said the operation only removed the backdoors; it did not patch the vulnerabilities the hackers exploited in the first place or remove any malware left behind.

It’s believed this is the first known case of the FBI effectively cleaning up private networks following a cyberattack. In 2016, the Supreme Court moved to allow U.S. judges to issue search and seizure warrants outside of their districts. Critics opposed the move at the time, fearing the FBI could ask a friendly court to authorize cyber-operations anywhere in the world.

Other countries, like France, have used similar powers before to hijack a botnet and remotely shut it down.

Neither the FBI nor the Justice Department commented by press time.

Risk startup LogicGate confirms data breach

By Zack Whittaker

Risk and compliance startup LogicGate has confirmed a data breach. But unless you’re a customer, you probably didn’t hear about it.

An email sent by LogicGate to customers earlier this month said that on February 23 an unauthorized third party obtained credentials to its Amazon Web Services-hosted cloud storage servers, which store customer backup files for its flagship platform Risk Cloud. Risk Cloud helps companies identify and manage their risk and compliance with data protection and security standards; LogicGate says it can also help find security vulnerabilities before they are exploited by malicious hackers.

The credentials “appear to have been used by an unauthorized third party to decrypt particular files stored in AWS S3 buckets in the LogicGate Risk Cloud backup environment,” the email read.

“Only data uploaded to your Risk Cloud environment on or prior to February 23, 2021, would have been included in that backup file. Further, to the extent you have stored attachments in the Risk Cloud, we did not identify decrypt events associated with such attachments,” it added.
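
The “decrypt events” wording suggests LogicGate reviewed API-level audit logs for its encrypted backups. In AWS, that kind of review typically runs through CloudTrail; a hedged sketch of what such a lookup looks like (this is our illustration, not LogicGate’s tooling):

```python
# Sketch: list recent KMS "Decrypt" calls recorded by CloudTrail.
import boto3

cloudtrail = boto3.client("cloudtrail")
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "Decrypt"}],
    MaxResults=50,
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username", "unknown principal"))
```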

LogicGate did not say how the AWS credentials were compromised. An email update sent by LogicGate last Friday said the company anticipates finding the root cause of the incident by this week.

But LogicGate has not made any public statement about the breach. It’s also not clear if LogicGate contacted all of its customers or only those whose data was accessed. LogicGate counts Capco, SoFi, and Blue Cross Blue Shield of Kansas City as customers.

We sent a list of questions, including how many customers were affected and whether the company has alerted U.S. state authorities as required by state data breach notification laws. When reached, LogicGate chief executive Matt Kunkel confirmed the breach but declined to comment, citing an ongoing investigation. “We believe it’s best to communicate developments directly to our customers,” he said.

Kunkel would not say, when asked, if the attacker also exfiltrated the decrypted customer data from its servers.

Data breach notification laws vary by state, but companies that fail to report security incidents can face heavy fines. Under Europe’s GDPR rules, companies can face fines of up to 4% of their annual turnover for violations.

In December, LogicGate secured $8.75 million in fresh funding, bringing its total raised to more than $40 million since it launched in 2015.



Deeplite raises $6M seed to deploy ML on edge with fewer compute resources

By Ron Miller

One of the issues with deploying a machine learning application is that it tends to be expensive and highly compute-intensive. Deeplite, a startup based in Montreal, wants to change that by providing a way to reduce the overall size of the model, allowing it to run on hardware with far fewer resources.

Today, the company announced a $6 million seed investment. Boston-based venture capital firm PJC led the round with help from Innospark Ventures, Differential Ventures and Smart Global Holdings. Somel Investments, BDC Capital and Desjardins Capital also participated.

Nick Romano, CEO and co-founder at Deeplite, says that the company aims to take complex deep neural networks (which require a lot of compute power to run, tend to use up a lot of memory, and can consume batteries at a rapid pace) and help them run more efficiently with fewer resources.

“Our platform can be used to transform those models into a new form factor to be able to deploy it into constrained hardware at the edge,” Romano explained. Those devices could be as small as a cell phone, a drone or even a Raspberry Pi, meaning that developers could deploy AI in ways that just wouldn’t be possible in most cases right now.

The company has created a product called Neutrino that lets you specify how you want to deploy your model and how much you can compress it to reduce the overall size and the resources required to run it in production. The idea is to run a machine learning application on an extremely small footprint.

Davis Sawyer, chief product officer and co-founder, says that the company’s solution comes into play after the model has been built, trained and is ready for production. Users supply the model and the data set and then they can decide how to build a smaller model. That could involve reducing the accuracy a bit if there is a tolerance for that, but chiefly it involves selecting a level of compression — how much smaller you can make the model.

“Compression reduces the size of the model so that you can deploy it on a much cheaper processor. We’re talking in some cases going from 200 megabytes down to 11 megabytes, or from 50 megabytes to 100 kilobytes,” Sawyer explained.
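
Deeplite hasn’t published how Neutrino works internally. One widely used technique with the same goal is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats for a roughly 4x size reduction; a minimal PyTorch sketch (our example, not Neutrino’s method):

```python
# Post-training dynamic quantization: one common way to shrink a model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # float32 weights -> int8
)
print(quantized)
```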

Rob May, who is leading the investment for PJC, says that he was impressed with the team and the technology the startup is trying to build.

“Deploying AI, particularly deep learning, on resource-constrained devices, is a broad challenge in the industry with scarce AI talent and know-how available. Deeplite’s automated software solution will create significant economic benefit as Edge AI continues to grow as a major computing paradigm,” May said in a statement.

The idea for the company has roots in the TandemLaunch incubator in Montreal. It launched officially as a company in mid-2019 and today has 15 employees with plans to double that by the end of this year. As it builds the company, Romano says the founders are focused on building a diverse and inclusive organization.

“We’ve got a strategy that’s going to find us the right people, but do it in a way that is absolutely diverse and inclusive. That’s all part of the DNA of the organization,” he said.

When it’s possible to return to work, the plan is to have offices in Montreal and Toronto that act as hubs for employees, but there won’t be any requirement to come into the office.

“We’ve already discussed that the general approach is going to be that people can come and go as they please, and we don’t think we will need as large an office footprint as we may have had in the past. People will have the option to work remotely and virtually as they see fit,” Romano said.

Meroxa raises $15M Series A for its real-time data platform

By Frederic Lardinois

Meroxa, a startup that makes it easier for businesses to build the data pipelines to power both their analytics and operational workflows, today announced that it has raised a $15 million Series A funding round led by Drive Capital. Existing investors Root, Amplify and Hustle Fund also participated in this round, which together with the company’s previously undisclosed $4.2 million seed round now brings total funding in the company to $19.2 million.

The promise of Meroxa is that businesses can use a single platform for their various data needs and won’t need a team of experts to build their infrastructure and then manage it. At its core, Meroxa provides a single Software-as-a-Service solution that connects relational databases to data warehouses and then helps businesses operationalize that data.


“The interesting thing is that we are focusing squarely on relational and NoSQL databases into data warehouse,” Meroxa co-founder and CEO DeVaris Brown told me. “Honestly, people come to us as a real-time FiveTran or real-time data warehouse sink. Because, you know, the industry has moved to this [extract, load, transform] format. But the beautiful part about us is, because we do change data capture, we get that granular data as it happens.” And businesses want this very granular data to be reflected inside of their data warehouses, Brown noted, but he also stressed that Meroxa can expose this stream of data as an API endpoint or point it to a Webhook.

The company is able to do this because its core architecture is somewhat different from other data pipeline and integration services that, at first glance, seem to offer a similar solution. Because of this, users can use the service to connect different tools to their data warehouse but also build real-time tools on top of these data streams.


“We aren’t a point-to-point solution,” Meroxa co-founder and CTO Ali Hamidi explained. “When you set up the connection, you aren’t taking data from Postgres and only putting it into Snowflake. What’s really happening is that it’s going into our intermediate stream. Once it’s in that stream, you can then start hanging off connectors and say, ‘Okay, well, I also want to peek into the stream, I want to transfer my data, I want to filter out some things, I want to put it into S3.’”
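
In other words, the stream, not any single destination, is the unit of integration. A conceptual sketch of that fan-out in plain Python (illustrative only, not Meroxa’s API):

```python
# Conceptual only: one change-data-capture stream, several independent
# consumers "hanging off" it, as Hamidi describes.
changes = [
    {"op": "insert", "table": "orders", "row": {"id": 1, "total": 42.0}},
    {"op": "update", "table": "orders", "row": {"id": 1, "total": 48.5}},
]

def warehouse_sink(event):   # e.g. load into a data warehouse
    print("warehouse <-", event)

def webhook_sink(event):     # e.g. notify a downstream service
    if event["op"] == "update":
        print("webhook   <-", event)

for event in changes:        # every consumer sees the same stream
    warehouse_sink(event)
    webhook_sink(event)
```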

With this flexibility, Hamidi noted, a lot of the company’s customers start with a pretty standard use case and then quickly expand into other areas as well.

Brown and Hamidi met during their time at Heroku, where Brown was a director of product management and Hamidi a lead software engineer. But while Heroku made it very easy for developers to publish their web apps, there wasn’t anything comparable in the highly fragmented database space. The team acknowledges that there are a lot of tools that aim to solve these data problems, but few of them focus on the user experience.


“When we talk to customers now, it’s still very much an unsolved problem,” Hamidi said. “It seems kind of insane to me that this is such a common thing and there is no ‘oh, of course you use this tool because it addresses all my problems.’ And so the angle that we’re taking is that we see user experience not as a nice-to-have, it’s really an enabler, it is something that enables a software engineer or someone who isn’t a data engineer with 10 years of experience in wrangling Kafka and Postgres and all these things. […] That’s a transformative kind of change.”

It’s worth noting that Meroxa uses a lot of open-source tools, and the company has also committed to open-sourcing everything in its data plane. “This has multiple wins for us, but one of the biggest incentives is in terms of the customer, we’re really committed to having our agenda aligned. Because if we don’t do well, we don’t serve the customer. If we do a crappy job, they can just keep all of those components and run it themselves,” Hamidi explained.

Today, Meroxa, which the team founded in early 2020, has over 24 employees (and is 100% remote). “I really think we’re building one of the most talented and most inclusive teams possible,” Brown told me. “Inclusion and diversity are very, very high on our radar. Our team is 50% black and brown. Over 40% are women. Our management team is 90% underrepresented. So not only are we building a great product, we’re building a great company, we’re building a great business.”  

IonQ now supports IBM’s Qiskit quantum development kit

By Frederic Lardinois

IonQ, the trapped ion quantum computing company that recently went public via a SPAC, today announced that it is integrating its quantum computing platform with the open-source Qiskit software development kit. This means Qiskit users can now bring their programs to IonQ’s platform without any major modifications to their code.

At first glance, that seems relatively unremarkable, but it’s worth noting that Qiskit was created by IBM Research and is IBM’s default tool for working with its quantum computers. There is a healthy bit of competition between IBM and IonQ (and, to be fair, many others in this space), in part because the two are betting on very different technologies at the core of their platforms. IonQ’s bet on trapped ions allows its machines to run at room temperature, while IBM’s technique requires its machines to be supercooled.

IonQ has now released a new provider library for Qiskit that is available as part of the Qiskit Partner repository on GitHub and via the Python Package Index.
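
Per the package’s public documentation, adopting it amounts to swapping in IonQ’s provider; treat the snippet below as a sketch, with the API key as a placeholder and the backend names as documented by IonQ at the time of writing:

```python
# Sketch: a Bell-state circuit through the qiskit-ionq provider
# (pip install qiskit-ionq).
from qiskit import QuantumCircuit
from qiskit_ionq import IonQProvider

provider = IonQProvider("YOUR_IONQ_API_KEY")       # placeholder key
backend = provider.get_backend("ionq_simulator")   # or "ionq_qpu"

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

job = backend.run(qc)
print(job.result().get_counts())
```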

“IonQ is excited to make our quantum computers and APIs easily accessible to the Qiskit community,” said IonQ CEO & President Peter Chapman. “Open source has already revolutionized traditional software development. With this integration, we’re bringing the world one step closer to the first generation of widely-applicable quantum applications.”

On the one hand, it’s hard not to look at this as IonQ needling IBM a bit, but it’s also an acknowledgment that Qiskit has become somewhat of a standard for developers who want to work with quantum computers. But putting these rivalries aside, we’re also in the early days of quantum computing and with no clear leader yet, anything that makes these various platforms more interoperable is a win for developers who want to dip their feet into writing for them.

Apple said to be developing Apple TV/HomePod combo and iPad-like smart speaker display

By Darrell Etherington

Apple is reportedly working on a couple of new options for a renewed entry into the smart home, including a mash-up of the Apple TV with a HomePod speaker, and an integrated camera for video chat, according to Bloomberg. It’s also said to be working on a smart speaker that basically combines a HomePod with an iPad, providing something similar to Amazon’s Echo Show or Google’s Nest Hub in functionality.

The Apple TV/HomePod hybrid would still connect to a television for outputting video, and would offer similar access to all the video and gaming services that the current Apple TV does, while the speaker component would provide sound output, music playback and Siri integration. It would also include a built-in camera for using video conferencing apps on the TV itself, the report says.

That second device would be much more like existing smart assistant display devices on the market today, with an iPad-like screen providing integrated visuals. The project could involve attaching the iPad via a “robotic arm,” according to Bloomberg, that would allow it to move to accommodate a user moving around, with the ability to keep them in frame during video chat sessions.

Bloomberg doesn’t provide any specific timelines for release of any of these potential products, and it sounds like they’re still very much in the development phase, which means Apple could easily abandon these plans depending on its evaluation of their potential. Apple just recently discontinued its original HomePod, the $300 smart speaker it debuted in 2018.

Rumors abound about a refreshed Apple TV arriving sometime this year, which should boast a faster processor and also an updated remote control. It could bring other hardware improvements, like support for a faster 120Hz refresh rate available on more modern TVs.

APKPure app contained malicious adware, say researchers

By Zack Whittaker

Security researchers say APKPure, a widely popular app for installing older or discontinued Android apps from outside of Google’s app store, contained malicious adware that flooded the victim’s device with unwanted ads.

Kaspersky Lab said that it alerted APKPure on Thursday that its most recent app version, 3.17.18, contained malicious code that siphoned off data from a victim’s device without their knowledge, and pushed ads to the device’s lock screen and in the background to generate fraudulent revenue for the adware operators.

But the researchers said that the malicious code had the capacity to download other malware, potentially putting affected victims at further risk.

The researchers said the APKPure developers likely introduced the malicious code, known as a software development kit or SDK, from an unverified source. APKPure removed the malicious code and pushed out a new version, 3.17.19; the malicious version is no longer listed on its site.

APKPure was set up in 2014 to allow Android users access to a vast bank of Android apps and games, including old versions, as well as app versions from other regions that are no longer on Android’s official app store Google Play. It later launched an Android app, which also has to be installed outside Google Play, serving as its own app store to allow users to download older apps directly to their Android devices.

APKPure is ranked as one of the most popular sites on the internet.

But security experts have long warned against installing apps from outside the official app stores, since quality and security vary wildly and much Android malware requires victims to install malicious apps from outside the app store. Google scans all Android apps that make it into Google Play, but some have slipped through the cracks before.

TechCrunch contacted APKPure for comment but did not hear back.

NLPCloud.io helps devs add language processing smarts to their apps

By Natasha Lomas

While visual ‘no code‘ tools are helping businesses get more out of computing without the need for armies of in-house techies to configure software on behalf of other staff, access to the most powerful tech tools — at the ‘deep tech’ AI coal face — still requires some expert help (and/or costly in-house expertise).

This is where bootstrapping French startup NLPCloud.io is plying a trade in MLOps/AIOps — or ‘compute platform as a service’ (as it runs the queries on its own servers) — with a focus on natural language processing (NLP), as its name suggests.

Developments in artificial intelligence have, in recent years, led to impressive advances in the field of NLP — a technology that can help businesses scale their capacity to intelligently grapple with all sorts of communications by automating tasks like Named Entity Recognition, sentiment-analysis, text classification, summarization, question answering, and Part-Of-Speech tagging, freeing up (human) staff to focus on more complex/nuanced work. (Although it’s worth emphasizing that the bulk of NLP research has focused on the English language — meaning that’s where this tech is most mature; so associated AI advances are not universally distributed.)

Production-ready (pre-trained) NLP models for English are readily available ‘out of the box’. There are also dedicated open-source frameworks offering help with training models. But businesses wanting to tap into NLP still need the DevOps resources and chops to implement NLP models.
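
For a sense of what ‘out of the box’ means here, spaCy’s pre-trained English pipeline handles named entity recognition in a few lines (model name per spaCy’s documentation):

```python
# Named entity recognition with a pre-trained spaCy model.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Thistle Technologies raised $2.5 million from True Ventures.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Thistle Technologies" ORG
```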

NLPCloud.io is catering to businesses that don’t feel up to the implementation challenge themselves — offering “production-ready NLP API” with the promise of “no DevOps required”.

Its API is based on Hugging Face and spaCy open-source models. Customers can either choose to use ready-to-use pre-trained models (it selects the “best” open source models; it does not build its own); or they can upload custom models developed internally by their own data scientists — which it says is a point of differentiation vs SaaS services such as Google Natural Language (which uses Google’s ML models) or Amazon Comprehend and Monkey Learn.

NLPCloud.io says it wants to democratize NLP by helping developers and data scientists deliver these projects “in no time and at a fair price”. (It has a tiered pricing model based on requests per minute, which starts at $39 per month and ranges up to $1,199 per month, at the enterprise end, for one custom model running on a GPU. It also offers a free tier so users can test models at low request velocity without incurring a charge.)

“The idea came from the fact that, as a software engineer, I saw many AI projects fail because of the deployment to production phase,” says sole founder and CTO Julien Salinas. “Companies often focus on building accurate and fast AI models but today more and more excellent open-source models are available and are doing an excellent job… so the toughest challenge now is being able to efficiently use these models in production. It takes AI skills, DevOps skills, programming skill… which is why it’s a challenge for so many companies, and which is why I decided to launch NLPCloud.io.”

The platform launched in January 2021 and now has around 500 users, including 30 who are paying for the service. The startup, which is based in Grenoble, in the French Alps, is a team of three for now, plus a couple of independent contractors. (Salinas says he plans to hire five people by the end of the year.)

“Most of our users are tech startups but we also start having a couple of bigger companies,” he tells TechCrunch. “The biggest demand I’m seeing is both from software engineers and data scientists. Sometimes it’s from teams who have data science skills but don’t have DevOps skills (or don’t want to spend time on this). Sometimes it’s from tech teams who want to leverage NLP out-of-the-box without hiring a whole data science team.”

“We have very diverse customers, from solo startup founders to bigger companies like BBVA, Mintel, Senuto… in all sorts of sectors (banking, public relations, market research),” he adds.

Its customers’ use cases include lead generation from unstructured text (such as web pages) via named entity extraction, and sorting support tickets by urgency via sentiment analysis.

Content marketers are also using its platform for headline generation (via summarization). While text classification capabilities are being used for economic intelligence and financial data extraction, per Salinas.

He says his own experience as a CTO and software engineer working on NLP projects at a number of tech companies led him to spot an opportunity in the challenge of AI implementation.

“I realized that it was quite easy to build acceptable NLP models thanks to great open-source frameworks like spaCy and Hugging Face Transformers but then I found it quite hard to use these models in production,” he explains. “It takes programming skills in order to develop an API, strong DevOps skills in order to build a robust and fast infrastructure to serve NLP models (AI models in general consume a lot of resources), and also data science skills of course.

“I tried to look for ready-to-use cloud solutions in order to save weeks of work but I couldn’t find anything satisfactory. My intuition was that such a platform would help tech teams save a lot of time, sometimes months of work for the teams who don’t have strong DevOps profiles.”

“NLP has been around for decades but until recently it took whole teams of data scientists to build acceptable NLP models. For a couple of years, we’ve made amazing progress in terms of accuracy and speed of the NLP models. More and more experts who have been working in the NLP field for decades agree that NLP is becoming a ‘commodity’,” he goes on. “Frameworks like spaCy make it extremely simple for developers to leverage NLP models without having advanced data science knowledge. And Hugging Face’s open-source repository for NLP models is also a great step in this direction.

“But having these models run in production is still hard, and maybe even harder than before as these brand new models are very demanding in terms of resources.”
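To make that production gap concrete, here is a minimal sketch of the kind of serving layer Salinas is describing, wrapping a spaCy model behind an HTTP endpoint. FastAPI and the endpoint shape are assumptions for illustration; NLPCloud.io has not disclosed its own stack:

    # Serving a spaCy model over HTTP; a sketch of the glue code teams would
    # otherwise write themselves. FastAPI is assumed, not NLPCloud.io's stack.
    import spacy
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    nlp = spacy.load("en_core_web_sm")  # load once at startup, reuse per request

    class TextIn(BaseModel):
        text: str

    @app.post("/entities")
    def extract_entities(payload: TextIn):
        doc = nlp(payload.text)
        return {"entities": [{"text": e.text, "type": e.label_} for e in doc.ents]}

    # Run with: uvicorn main:app --host 0.0.0.0 --port 8000

The hard part Salinas points to is everything around a sketch like this: provisioning enough compute for resource-hungry models, scaling, monitoring, and keeping latency acceptable.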

The models NLPCloud.io offers are picked for performance, where “best” means “the best compromise between accuracy and speed”. Salinas also says they pay attention to context, given NLP can be used for diverse use cases, hence proposing a number of models so as to be able to adapt to a given use.

“Initially we started with models dedicated to entities extraction only but most of our first customers also asked for other use cases too, so we started adding other models,” he notes, adding that they will continue to add more models from the two chosen frameworks — “in order to cover more use cases, and more languages”.

SpaCy and Hugging Face, meanwhile, were chosen as the sources for the models offered via its API based on their track record as companies, the NLP libraries they offer, and their focus on production-ready frameworks. The combination allows NLPCloud.io to offer a selection of models that are fast and accurate, working within the bounds of their respective trade-offs, according to Salinas.

“SpaCy is developed by a solid company in Germany called Explosion.ai. This library has become one of the most used NLP libraries among companies who want to leverage NLP in production ‘for real’ (as opposed to academic research only). The reason is that it is very fast, has great accuracy in most scenarios, and is an ‘opinionated’ framework which makes it very simple to use by non-data scientists (the tradeoff is that it gives less customization possibilities),” he says.

“Hugging Face is an even more solid company that recently raised $40M for a good reason: They created a disruptive NLP library called ‘transformers’ that improves a lot the accuracy of NLP models (the tradeoff is that it is very resource intensive though). It gives the opportunity to cover more use cases like sentiment analysis, classification, summarization… In addition to that, they created an open-source repository where it is easy to select the best model you need for your use case.”
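That model hub workflow is close to a one-liner in practice. A minimal sentiment analysis sketch, where the model name is a common English default assumed here for illustration:

    # Picking a task-specific model from the Hugging Face hub; the model name
    # is a common default for English sentiment analysis, assumed here.
    from transformers import pipeline

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )
    print(classifier("The support team resolved my issue immediately."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]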

While AI is advancing at a clip within certain tracks, such as NLP for English, there are still caveats and potential pitfalls attached to automating language processing and analysis, with the risk of getting stuff wrong or worse. AI models trained on human-generated data have, for example, been shown to reflect the embedded biases and prejudices of the people who produced the underlying data.

Salinas agrees NLP can sometimes face “concerning bias issues”, such as racism and misogyny. But he expresses confidence in the models they’ve selected.

“Most of the time it seems [bias in NLP] is due to the underlying data used to train the models. It shows we should be more careful about the origin of this data,” he says. “In my opinion the best solution in order to mitigate this is that the community of NLP users should actively report something inappropriate when using a specific model so that this model can be paused and fixed.”

“Even if we doubt that such a bias exists in the models we’re proposing, we do encourage our users to report such problems to us so we can take measures,” he adds.

 

Facebook ran ads for a fake ‘Clubhouse for PC’ app planted with malware

By Zack Whittaker

Cybercriminals have taken out a number of Facebook ads masquerading as a Clubhouse app for PC users in order to target unsuspecting victims with malware, TechCrunch has learned.

TechCrunch was alerted Wednesday to Facebook ads tied to several Facebook pages impersonating Clubhouse, the drop-in audio chat app only available on iPhones. Clicking on the ad would open a fake Clubhouse website, including a mocked-up screenshot of what the non-existent PC app looks like, with a download link to the malicious app.

When opened, the malicious app tries to communicate with a command and control server to obtain instructions on what to do next. One sandbox analysis of the malware showed the malicious app tried to infect the isolated machine with ransomware.

But overnight, the fake Clubhouse websites, which were hosted in Russia, went offline, and with them the malware stopped working. Guardicore’s Amit Serper, who tested the malware in a sandbox on Thursday, said the malware received an error from the server and did nothing more.

The fake website was set up to look like Clubhouse’s real website, but featuring a malicious PC app. (Image: TechCrunch)

It’s not uncommon for cybercriminals to tailor their malware campaigns to piggyback off the successes of wildly popular apps. Clubhouse has reportedly topped 8 million global downloads to date despite an invite-only launch. That high demand prompted a scramble to reverse-engineer the app and build bootleg versions that skirt not only Clubhouse’s gated walls but also government censors in countries where the app is blocked.

Each of the Facebook pages impersonating Clubhouse had only a handful of likes, but all were still active at the time of publication. When reached, Facebook wouldn’t say how many account owners had clicked on the ads pointing to the fake Clubhouse websites.

At least nine ads were placed this week between Tuesday and Thursday. Several of the ads said Clubhouse “is now available for PC,” while another featured a photo of co-founders Paul Davidson and Rohan Seth. Clubhouse did not return a request for comment.

The ads have been removed from Facebook’s Ad Library, but we have published a copy. It’s also not clear how the ads made it through Facebook’s processes in the first place.

Immersion cooling to offset data centers’ massive power demands gains a big booster in Microsoft

By Jonathan Shieber

LiquidStack does it. So does Submer. They’re both dropping servers carrying sensitive data into goop in an effort to save the planet. Now they’re joined by one of the biggest tech companies in the world in their efforts to improve the energy efficiency of data centers, because Microsoft is getting into the liquid-immersion cooling market.

Microsoft is using a liquid it developed in-house that’s engineered to boil at 122 degrees Fahrenheit (50 degrees Celsius, well below the boiling point of water) to act as a heat sink, reducing the temperature inside the servers so they can operate at full power without any risk of overheating.

The vapor from the boiling fluid is converted back into a liquid through contact with a cooled condenser in the lid of the tank that stores the servers.

“We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development in Redmond, Washington, in a statement on the company’s internal blog. 

While that claim may be true, liquid cooling is a well-known approach to moving heat around to keep systems working. Cars use liquid cooling to keep their motors humming as they head out on the highway.

As technology companies confront the physical limits of Moore’s Law, the demand for faster, higher-performance processors means designing new architectures that can handle more power, the company wrote in a blog post. Power flowing through central processing units has increased from 150 watts to more than 300 watts per chip, and the GPUs responsible for much of the bitcoin mining, artificial intelligence applications, and high-end graphics each consume more than 700 watts per chip.
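Those wattage figures are what make boiling attractive: a fluid’s latent heat of vaporization soaks up large amounts of energy at a constant temperature. A back-of-the-envelope sketch, assuming a latent heat on the order of 100 kJ/kg (typical of engineered dielectric coolants; Microsoft hasn’t published figures for its fluid):

    # Rough estimate: fluid boiled off per second to absorb one GPU's heat.
    # The latent heat value is an assumption (~100 kJ/kg is typical of
    # engineered dielectric coolants); Microsoft hasn't published its figure.
    LATENT_HEAT_J_PER_KG = 100_000   # J/kg, assumed
    CHIP_POWER_W = 700               # per-chip GPU power cited above

    boil_rate_kg_per_s = CHIP_POWER_W / LATENT_HEAT_J_PER_KG
    print(f"{boil_rate_kg_per_s * 1000:.0f} g of fluid boils off per second")  # ~7 g/s

Because the vapor condenses on the cooled lid and drips back into the tank, the fluid cycles in a closed loop rather than being consumed.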

It’s worth noting that Microsoft isn’t the first tech company to apply liquid cooling to data centers and the distinction that the company uses of being the first “cloud provider” is doing a lot of work. That’s because bitcoin mining operations have been using the tech for years. Indeed, LiquidStack was spun out from a bitcoin miner to commercialize its liquid immersion cooling tech and bring it to the masses.

“Air cooling is not enough”

More power flowing through the processors means hotter chips, which means the need for better cooling or the chips will malfunction.

“Air cooling is not enough,” said Christian Belady, vice president of Microsoft’s datacenter advanced development group in Redmond, in an interview for the company’s internal blog. “That’s what’s driving us to immersion cooling, where we can directly boil off the surfaces of the chip.”

For Belady, the use of liquid cooling technology brings the density and compression of Moore’s Law up to the datacenter level.

The results, from an energy consumption perspective, are impressive. Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI, and found that two-phase immersion cooling reduced power consumption for any given server by 5% to 15% (every little bit helps).

Meanwhile, companies like Submer claim their technology reduces energy consumption by 50%, cuts water use by 99%, and takes up 85% less space.

For cloud computing companies, the ability to keep servers up and running during spikes in demand, when they consume even more power, adds flexibility and ensures uptime even when machines are overtaxed, according to Microsoft.

“[We] know that with Teams when you get to 1 o’clock or 2 o’clock, there is a huge spike because people are joining meetings at the same time,” Marcus Fontoura, a vice president on Microsoft’s Azure team, said on the company’s internal blog. “Immersion cooling gives us more flexibility to deal with these burst-y workloads.”

At this point, data centers are a critical component of the internet infrastructure that much of the world relies on for… well… pretty much every tech-enabled service. That reliance, however, has come at a significant environmental cost.

“Data centers power human advancement. Their role as a core infrastructure has become more apparent than ever and emerging technologies such as AI and IoT will continue to drive computing needs. However, the environmental footprint of the industry is growing at an alarming rate,” Alexander Danielsson, an investment manager at Norrsken VC noted last year when discussing that firm’s investment in Submer.

Solutions under the sea

If submerging servers in experimental liquids is one potential solution to the problem, then sinking them in the ocean is another way companies are trying to cool data centers without expending too much power.

Microsoft has already been operating an undersea data center for the past two years. The company trotted out the tech last year as part of a push to aid in the search for a COVID-19 vaccine.

These pre-packed, shipping container-sized data centers can be spun up on demand and run deep under the ocean’s surface for sustainable, high-efficiency and powerful compute operations, the company said.

The liquid cooling project is most similar to Microsoft’s Project Natick, which is exploring the potential of underwater datacenters that are quick to deploy and can operate for years on the seabed, sealed inside submarine-like tubes, without any onsite maintenance by people.

In those data centers, a nitrogen atmosphere replaces the engineered fluid, and the servers are cooled with fans and a heat exchanger that pumps seawater through a sealed tube.

Startups are also staking claims to cool data centers out on the ocean (the seaweed is always greener in somebody else’s lake).

Nautilus Data Technologies, for instance, has raised over $100 million (according to Crunchbase) to develop data centers dotting the surface of Davy Jones’ locker. The company is currently developing a data center project co-located with a sustainable energy project off the coast of Stockton, Calif.

With its two-phase immersion cooling tech, Microsoft is hoping to bring the benefits of ocean cooling onto the shore. “We brought the sea to the servers rather than put the datacenter under the sea,” Microsoft’s Alissa said in a company statement.

Ioannis Manousakis, a principal software engineer with Azure (left), and Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development (right), walk past a container at a Microsoft datacenter where computer servers in a two-phase immersion cooling tank are processing workloads. Photo by Gene Twedt for Microsoft.

Education nonprofit Edraak ignored a student data leak for two months

By Zack Whittaker

Edraak, an online education nonprofit, exposed the private information of thousands of students after uploading student data to an unprotected cloud storage server, apparently by mistake.

The nonprofit, founded by Jordan’s Queen Rania and headquartered in the kingdom’s capital, was set up in 2013 to promote education across the Arab region. The organization works with several partners, including the British Council and edX, a consortium set up by Harvard, Stanford, and MIT.

In February, researchers at U.K. cybersecurity firm TurgenSec found one of Edraak’s cloud storage servers exposing data on at least tens of thousands of students, including spreadsheets with students’ names, email addresses, gender, birth year, country of nationality, and some class grades.

TurgenSec, which runs Breaches.UK, a site for disclosing security incidents, alerted Edraak to the security lapse. The organization acknowledged the researchers’ email a week later, but the data continued to spill. Emails seen by TechCrunch show the researchers also tried to alert others who worked at the organization via LinkedIn requests, as well as its partners, including the British Council.

Two months passed and the server remained open. At TurgenSec’s request, TechCrunch contacted Edraak, which closed the server a few hours later.

In an email this week, Edraak chief executive Sherif Halawa told TechCrunch that the storage server was “meant to be publicly accessible, and to host public course content assets, such as course images, videos, and educational files,” but that “student data is never intentionally placed in this bucket.”

“Due to an unfortunate configuration bug, however, some academic data and student information exports were accidentally placed in the bucket,” Halawa confirmed.

“Unfortunately our initial scan did not locate the misplaced data that made it there accidentally. We attributed the elements in the Breaches.UK email to regular student uploads. We have now located these misplaced reports today and addressed the issue,” Halawa said.

The server is now closed off to public access.
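Since Edraak says the bucket was intentionally public for course assets, the real fix is keeping private exports in a separate, locked-down bucket. As a rough illustration of that lock-down, here is a minimal sketch assuming AWS S3 and boto3; Edraak has not said which cloud provider it uses, and the bucket name is hypothetical:

    # Locking a storage bucket against any public access; AWS S3 and boto3
    # are assumptions here, and "example-student-exports" is hypothetical.
    import boto3

    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="example-student-exports",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject public bucket policies
            "RestrictPublicBuckets": True,  # block public cross-account access
        },
    )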

It’s not clear why Edraak ignored the researchers’ initial email, which disclosed the location of the unprotected server, or why the organization’s response was not to ask for more details. When reached, British Council spokesperson Catherine Bowden said the organization received an email from TurgenSec but mistook it for a phishing email.

Edraak’s CEO Halawa said that the organization had already begun notifying affected students about the incident, and put out a blog post on Thursday.

Last year, TurgenSec found an unencrypted customer database belonging to U.K. internet provider Virgin Media that was left online by mistake, containing records linking some customers to adult and explicit websites.
