FreshRSS


Computer vision inches toward ‘common sense’ with Facebook’s latest research

By Devin Coldewey

Machine learning is capable of doing all sorts of things as long as you have the data to teach it how. That’s not always easy, and researchers are always looking for a way to add a bit of “common sense” to AI so you don’t have to show it 500 pictures of a cat before it gets it. Facebook’s newest research takes a big step toward reducing the data bottleneck.

The company’s formidable AI research division has been working for years now on how to advance and scale things like advanced computer vision algorithms, and has made steady progress, generally shared with the rest of the research community. One interesting development Facebook has pursued in particular is what’s called “semi-supervised learning.”

Generally when you think of training an AI, you think of something like the aforementioned 500 pictures of cats — images that have been selected and labeled (which can mean outlining the cat, putting a box around the cat or just saying there’s a cat in there somewhere) so that the machine learning system can put together an algorithm to automate the process of cat recognition. Naturally if you want to do dogs or horses, you need 500 dog pictures, 500 horse pictures, etc. — it scales linearly, which is a word you never want to see in tech.

Semi-supervised learning, related to “unsupervised” learning, involves figuring out important parts of a data set without any labeled data at all. It doesn’t just go wild; there’s still structure. For instance, imagine you give the system a thousand sentences to study, then show it 10 more that have several of the words missing. The system could probably do a decent job filling in the blanks just based on what it’s seen in the previous thousand. But that’s not so easy to do with images and video — they aren’t as straightforward or predictable.
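To make that fill-in-the-blank intuition concrete, here is a deliberately tiny sketch (a toy, not Facebook’s method): it tallies which word tends to follow another across a handful of “training” sentences, then guesses a missing word in a new one.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "thousand sentences to study."
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat slept on the mat",
]

# Count which word follows which.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def fill_blank(prev_word):
    """Guess a missing word from what most often followed prev_word in training."""
    candidates = following.get(prev_word)
    return candidates.most_common(1)[0][0] if candidates else None

print(fill_blank("the"))  # "mat" -- learned purely from unlabeled text
```

Real systems use neural networks over far larger corpora, but the principle of learning structure from unlabeled data is the same.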

But Facebook researchers have shown that while it may not be easy, it’s possible and in fact very effective. The DINO system (which stands rather unconvincingly for “DIstillation of knowledge with NO labels”) is capable of learning to find objects of interest in videos of people, animals and objects quite well without any labeled data whatsoever.

Animation showing four videos and the AI interpretation of the objects in them.

Image Credits: Facebook

It does this by considering the video not as a sequence of images to be analyzed one by one in order, but as a complex, interrelated set, like the difference between “a series of words” and “a sentence.” By attending to the middle and the end of the video as well as the beginning, the agent can get a sense of things like “an object with this general shape goes from left to right.” That information feeds into other knowledge, like when an object on the right overlaps with the first one, the system knows they’re not the same thing, just touching in those frames. And that knowledge in turn can be applied to other situations. In other words, it develops a basic sense of visual meaning, and does so with remarkably little training on new objects.

This results in a computer vision system that’s not only effective — it performs well compared with traditionally trained systems — but also more relatable and explainable. For instance, while an AI that has been trained with 500 dog pictures and 500 cat pictures will recognize both, it won’t really have any idea that they’re similar in any way. But DINO — although it couldn’t be specific — gets that they’re similar visually to one another, more so anyway than they are to cars, and that metadata and context are visible in its memory. Dogs and cats are “closer” in its sort of digital cognitive space than dogs and mountains. You can see those concepts as little blobs here — see how those of a type stick together:

Animated diagram showing how concepts in the machine learning model stay close together.

Image Credits: Facebook
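To give a sense of what “closer” means here, below is a minimal sketch using made-up three-dimensional vectors in place of the high-dimensional embeddings a model like DINO actually learns; cosine similarity is one standard way of measuring that distance.

```python
import numpy as np

# Hand-picked, illustrative "embeddings" -- real ones have hundreds of dimensions
# and are learned by the model, not chosen by hand.
embeddings = {
    "dog":      np.array([0.90, 0.80, 0.10]),
    "cat":      np.array([0.85, 0.75, 0.15]),
    "mountain": np.array([0.10, 0.20, 0.95]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means nearby in concept space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["dog"], embeddings["cat"]))       # high: concepts cluster together
print(cosine(embeddings["dog"], embeddings["mountain"]))  # much lower: concepts far apart
```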

This has its own benefits, of a technical sort we won’t get into here. If you’re curious, there’s more detail in the papers linked in Facebook’s blog post.

There’s also an adjacent research project, a training method called PAWS, which further reduces the need for labeled data. PAWS combines some of the ideas of semi-supervised learning with the more traditional supervised method, essentially giving the training a boost by letting it learn from both the labeled and unlabeled data.

Facebook of course needs good and fast image analysis for its many user-facing (and secret) image-related products, but these general advances to the computer vision world will no doubt be welcomed by the developer community for other purposes.

 

Click Studios asks customers to stop tweeting about its Passwordstate data breach

By Zack Whittaker

Australian security software house Click Studios has told customers not to post emails sent by the company about its data breach, in which hackers pushed a malicious update to its flagship enterprise password manager, Passwordstate, in order to steal customer passwords.

Last week, the company told customers to “commence resetting all passwords” stored in its flagship password manager after the hackers pushed the malicious update to customers over a 28-hour window between April 20-22. The malicious update was designed to contact the attacker’s servers to retrieve malware designed to steal and send the password manager’s contents back to the attackers.

In an email to customers, Click Studios did not say how the attackers compromised the password manager’s update feature, but included a link to a security fix.

But news of the breach only became public after Danish cybersecurity firm CSIS Group published a blog post with details of the attack hours after Click Studios emailed its customers.

Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.

In a Wednesday advisory posted to its website, Click Studios said customers are “requested not to post Click Studios correspondence on Social Media.” The advisory adds: “It is expected that the bad actor is actively monitoring Social Media, looking for information they can use to their advantage, for related attacks.”

“It is expected the bad actor is actively monitoring social media for information on the compromise and exploit. It is important customers do not post information on Social Media that can be used by the bad actor. This has happened with phishing emails being sent that replicate Click Studios email content,” the company said.

Besides a handful of advisories published since the breach was discovered, the company has refused to comment or respond to questions.

It’s also not clear if the company has disclosed the breach to U.S. and EU authorities where the company has customers, but where data breach notification rules obligate companies to disclose incidents. Companies can be fined up to 4% of their annual global revenue for falling foul of Europe’s GDPR rules.

Click Studios chief executive Mark Sandford has not responded to repeated requests (from TechCrunch) for comment. Instead, TechCrunch received the same canned autoresponse from the company’s support email saying that the company’s staff are “focused only on assisting customers technically.”

TechCrunch emailed Sandford again on Thursday for comment on the latest advisory, but did not hear back.

SLAIT’s real-time sign language translation promises more accessible online communication

By Devin Coldewey

Sign language is used by millions of people around the world, but unlike Spanish, Mandarin or even Latin, there’s no automatic translation available for those who can’t use it. SLAIT claims to offer the first such tool available for general use, which can translate around 200 words and simple sentences to start — using nothing but an ordinary computer and webcam.

People with hearing impairments, or other conditions that make vocal speech difficult, number in the hundreds of millions and rely on the same common tech tools as the hearing population. But while emails and text chat are useful and of course very common now, they aren’t a replacement for face-to-face communication. Unfortunately, there’s no easy way for signing to be turned into written or spoken words, so this remains a significant barrier.

We’ve seen attempts at automatic sign language (usually American/ASL) translation for years and years. In 2012 Microsoft awarded its Imagine Cup to a student team that tracked hand movements with gloves; in 2018 I wrote about SignAll, which has been working on a sign language translation booth using multiple cameras to give 3D positioning; and in 2019 I noted that a new hand-tracking algorithm called MediaPipe, from Google’s AI labs, could lead to advances in sign detection. Turns out that’s more or less exactly what happened.

SLAIT is a startup built out of research done at the Aachen University of Applied Sciences in Germany, where co-founder Antonio Domènech built a small ASL recognition engine using MediaPipe and custom neural networks. Having proved the basic notion, Domènech was joined by co-founders Evgeny Fomin and William Vicars to start the company; they then moved on to building a system that could recognize first 100, and now 200 individual ASL gestures and some simple sentences. The translation occurs offline, and in near real time on any relatively recent phone or computer.

Animation showing ASL signs being translated to text, and spoken words being transcribed to text back.

Image Credits: SLAIT
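SLAIT’s own pipeline and classifier aren’t public, but the general recipe the article describes — MediaPipe extracting hand landmarks from each frame, and a separate model mapping landmark sequences to signs — can be sketched roughly as follows (the classify_gesture function is hypothetical):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def landmark_frames(video_path):
    """Yield per-frame hand landmarks (21 x/y/z points per detected hand)."""
    cap = cv2.VideoCapture(video_path)
    with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                yield [(lm.x, lm.y, lm.z)
                       for hand in result.multi_hand_landmarks
                       for lm in hand.landmark]
    cap.release()

# frames = list(landmark_frames("sign_clip.mp4"))   # hypothetical input clip
# sign = classify_gesture(frames)                   # hypothetical trained sequence model
```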

They plan to make it available for educational and development work, expanding their dataset so they can improve the model before attempting any more significant consumer applications.

Of course, the development of the current model was not at all simple, though it was achieved in remarkably little time by a small team. MediaPipe offered an effective, open-source method for tracking hand and finger positions, sure, but the crucial component for any strong machine learning model is data, in this case video data (since it would be interpreting video) of ASL in use — and there simply isn’t a lot of that available.

As they recently explained in a presentation for the DeafIT conference, the team first evaluated an older Microsoft database, but found that a newer Australian academic database had more and better-quality data, allowing for the creation of a model that is 92 percent accurate at identifying any of 200 signs in real time. They have augmented this with sign language videos from social media (with permission, of course) and government speeches that have sign language interpreters — but they still need more.

Animated image of a woman saying "deaf understand hearing" in ASL.

A GIF showing one of the prototypes in action — the consumer product won’t have a wireframe, obviously. Image Credits: SLAIT

Their intention is to make the platform available to the deaf and ASL learner communities, who hopefully won’t mind their use of the system being turned to its improvement.

And naturally it could prove an invaluable tool even in its present state, since the company’s translation model, even as a work in progress, is still potentially transformative for many people. With the number of video calls going on these days and likely for the rest of eternity, accessibility is being left behind — only some platforms offer automatic captioning, transcription or summaries, and certainly none recognize sign language. But with SLAIT’s tool, people could sign normally and participate in a video call naturally rather than using the neglected chat function.

“In the short term, we’ve proven that 200 word models are accessible and our results are getting better every day,” said SLAIT’s Evgeny Fomin. “In the medium term, we plan to release a consumer facing app to track sign language. However, there is a lot of work to do to reach a comprehensive library of all sign language gestures. We are committed to making this future state a reality. Our mission is to radically improve accessibility for the Deaf and hard of hearing communities.”

From left, Evgeny Fomin, Antonio Domènech and Bill Vicars. Image Credits: SLAIT

He cautioned that it will not be totally complete — just as translation and transcription in or to any language is only an approximation, the point is to provide practical results for millions of people, and a few hundred words go a long way toward doing so. As data pours in, new words can be added to the vocabulary, as can new multigesture phrases, and performance for the core set will improve.

Right now the company is seeking initial funding to get its prototype out and grow the team beyond the founding crew. Fomin said they have received some interest but want to make sure they connect with an investor who really understands the plan and vision.

When the engine itself has been built up to be more reliable by the addition of more data and the refining of the machine learning models, the team will look into further development and integration of the app with other products and services. For now the product is more of a proof of concept, but what a proof it is — with a bit more work SLAIT will have leapfrogged the industry and provided something that deaf and hearing people both have been wanting for decades.

There is no cybersecurity skills gap, but CISOs must think creatively

By Annie Siebert
Lamont Orange Contributor
Lamont Orange is Netskope’s chief information security officer. He has more than 20 years of experience in the information security industry, having previously served as vice president of enterprise security for Charter Communications (now Spectrum) and as senior manager for the security and technology services practice at Ernst & Young.

Those of us who read a lot of tech and business publications have heard for years about the cybersecurity skills gap. Studies often claim that millions of jobs are going unfilled because there aren’t enough qualified candidates available for hire.

I don’t buy it.

The basic laws of supply and demand mean there will always be people in the workforce willing to move into well-paid security jobs. The problem is not that these folks don’t exist. It’s that CIOs or CISOs typically look right past them if their resumes don’t have a very specific list of qualifications.

In many cases, hiring managers expect applicants to be fully trained on all the technologies their organization currently uses. That not only makes it harder to find qualified candidates, but it also reduces the diversity of experience within security teams — which, ultimately, may weaken the company’s security capabilities and its talent pool.

At Netskope, we take a different approach to staffing for security roles. We know we can teach the cybersecurity skills needed to do the job, so instead, there are two traits we consider more important than specific technical expertise: One is a hunger to learn more about security, which suggests the individual will take the initiative to continuously improve their skills. The other is possession of a skill set that no one else on our security team has.

Overemphasis on technical skills creates an artificial talent shortage

To understand why I believe our approach has helped us build a stronger security team, think about the long-term benefits of hiring someone with a specific security skill set: How valuable will that exact knowledge be in several years? Probably not very.


Even the most basic security technologies are incredibly dynamic. In most companies, the IT infrastructure is currently in the midst of a massive transition from on-premises to cloud-based systems. Security teams are having to learn new technologies. More than that, they are having to adopt an entirely new mindset, shifting from a focus on protecting specific pieces of hardware to a focus on protecting individuals and applications as their workloads increasingly move outside the corporate network.

Interview: Apple executives on the 2021 iPad Pro, stunting with the M1 and creating headroom

By Matthew Panzarino

When the third minute of Apple’s first product event of 2021 ticked over and the company had already made three announcements, we knew it was going to be a packed one. In a tight single hour this week, Apple launched a ton of new products, including AirTags, new Apple Card family sharing, a new Apple TV, a new set of colorful iMacs and a purple iPhone 12 shade.

Of the new devices announced, though, Apple’s new 12.9” iPad Pro is the most interesting from a market positioning perspective. 

This week I got a chance to speak to Apple Senior Vice President of Worldwide Marketing Greg Joswiak and Senior Vice President of Hardware Engineering John Ternus about this latest version of the iPad Pro and its place in the working universe of computing professionals. 

In many ways, this new iPad Pro is the equivalent of a sprinter being three lengths ahead going into the last lap and turning on the afterburners to put an undebatable distance between themselves and the rest of the pack. Last year’s model is still one of the best computers you can buy, with a densely packed offering of powerful computing tools, battery performance and portability. And this year’s gets upgrades in the M1 processor, RAM, storage speed, Thunderbolt connectivity, a 5G radio, a new ultra-wide front camera and the Liquid Retina XDR display.

This is a major bump even while the 2020 iPad Pro still dominates the field. And at the center of that is the display.

Apple has essentially ported its enormously good $5,000 Pro Display XDR down to a 12.9” touch version, with some slight improvements. The specs are flat-out incredible: 1,000 nits of brightness, peaking at 1,600 nits in HDR, with 2,500 full-array local dimming zones — compared to the Pro Display XDR’s 576 zones at a much larger panel size.

Given that this year’s first product launch from Apple was virtual, the media again got no immediate hands-on with the new devices introduced, including the iPad Pro. This means that I have not yet seen the XDR display in action. Unfortunately, these specs are so good that evaluating them without having seen the screen is akin to trying to visualize “a trillion” in your head. It’s intellectually possible but not really practical.

It’s brighter than any Mac or iOS device on the market and could be a game-changing device for professionals working in HDR video and photography. But even still, this is a major investment to ship a mini-LED display in the millions or tens of millions of units with more density and brightness than any other display on the market.

I ask both of them why there’s a need to do this doubling down on what is already one of the best portable displays ever made — if not one of the best displays period. 

“We’ve always tried to have the best display,” says Ternus. “We’re going from the best display on any device like this and making it even better, because that’s what we do and that’s why we, we love coming to work every day is to take that next big step.

“[With the] Pro Display XDR if you remember one thing we talked about was being able to have this display and this capability in more places in the work stream. Because traditionally there was just this one super expensive reference monitor at the end of the line. This is like the next extreme of that now you don’t even have to be in the studio anymore you can take it with you on the go and you can have that capability so from a, from a creative pro standpoint we think this is going to be huge.”

In my use of the Pro Display and my conversations with professionals about it, one of the common themes I’ve heard is the reduction in overall workload due to the multiple points in the flow where color and image can now be managed accurately to spec. The general system in place puts a reference monitor very late in the production stage, which can often lead to expensive and time-consuming re-rendering or new color passes. Adding the Liquid Retina XDR display into the mix at an extremely low price point means that a lot more plot points on the production line suddenly get a lot closer to the right curve.

One of the stronger answers on the ‘why the aggressive spec bump’ question comes later in our discussion but is worth mentioning in this context. The point, Joswiak says, is to offer headroom. Headroom for users and headroom for developers. 

“One of the things that iPad Pro has done as John [Ternus] has talked about is push the envelope. And by pushing the envelope that has created this space for developers to come in and fill it. When we created the very first iPad Pro, there was no Photoshop,” Joswiak notes. “There was no creative apps that could immediately use it. But now there’s so many you can’t count. Because we created that capability, we created that performance — and, by the way sold a fairly massive number of them — which is a pretty good combination for developers to then come in and say, I can take advantage of that. There’s enough customers here and there’s enough performance. I know how to use that. And that’s the same thing we do with each generation. We create more headroom to performance that developers will figure out how to use.

“The customer is in a great spot because they know they’re buying something that’s got some headroom and developers love it.”

The iPad Pro is now powered by the M1 chip — a move away from the A-series naming. And that processor is identical (given similar memory configurations) to the one found in the iMac announced this week and the M1 MacBooks launched late last year.

“It’s the same part, it’s M1,” says Ternus. “iPad Pro has always had the best Apple silicon we make.”

“How crazy is it that you can take a chip that’s in a desktop, and drop it into an iPad,” says Joswiak. “I mean it’s just incredible to have that kind of performance at such amazing power efficiency. And then have all the technologies that come with it. To have the neural engine and ISP and Thunderbolt and all these amazing things that come with it, it’s just miles beyond what anybody else is doing.”

As the M1 was rolling out and I began running my testing, the power per watt aspects really became the story. That really is the big differentiator for M1. For decades, laptop users have been accustomed to saving any heavy or intense workloads for the times when their machines were plugged in due to power consumption. M1 is in the process of resetting those expectations for desktop class processors. In fact, Apple is offering not only the most powerful CPUs but also the most power-efficient CPUs on the market. And it’s doing it in a $700 Mac Mini, a $1,700 iMac and a $1,100 iPad Pro at the same time. It’s a pretty ridiculous display of stunting, but it’s also the product of more than a decade of work building its own architecture and silicon.

“Your battery life is defined by the capacity of your battery and the efficiency of your system, right? So we’re always pushing really, really hard on the system efficiency, and obviously with M1 the team’s done a tremendous job with that. But the display as well. We designed a new mini-LED for this display, focusing on efficiency and on package size, obviously, to really be able to make sure that it could fit into the iPad experience with the iPad experience’s good battery life. We weren’t going to compromise on that,” says Ternus.

One of the marquee features of the new iPad Pro is its 12MP ultra-wide camera with Center Stage, an auto-centering and cropping video feature designed to make FaceTime calling more human-centric, literally. It finds humans in the frame and centers their faces, keeping them in the frame even if they move, standing and stretching or leaning to the side. It also includes additional people in the frame automatically if they enter the range of the new ultra-wide 12MP front-facing camera. And yes, it also works with other apps like Zoom and Webex, and there will be an API for it.

I’ve gotten to see it in action a bit more and I can say with surety that this will become an industry standard implementation of this kind of subject focusing. The crop mechanic is handled with taste, taking on the characteristics of a smooth zoom pulled by a steady hand rather than an abrupt cut to a smaller, closer framing. It really is like watching a TV show directed by an invisible machine learning engine. 

“This is one of the examples of some of our favorite stuff to do because of the way it marries the hardware and software, right,” Ternus says. “So, sure, it’s the camera, but it’s also the SOC and the algorithms associated with detecting the person and panning and zooming. There’s the kind of the taste aspect, right, which is how do we make something that feels good, it doesn’t move too fast and doesn’t move too slow. That’s a lot of talented, creative people coming together and trying to find the thing that makes it Apple-like.”

It also goes a long way toward making up for the awkward horizontal camera placement when using the iPad Pro with the Magic Keyboard. This has been a big drawback for using the iPad Pro as a portable video conferencing tool, something we’ve all been doing a lot of lately. I ask Ternus whether Center Stage was designed to mitigate this placement.

“Well, you can use iPad in any orientation right? So you’re going to have different experiences based on how you’re using it. But what’s amazing about this is that we can keep correcting the frame. What’s been really cool is that we’ve all been sitting around in these meetings all day long on video conferencing and it’s just nice to get up. This experience of just being able to stand up and kind of stretch and move around the room without walking away from the camera has been just absolutely game changing, it’s really cool.”

It’s worth noting that several other video sharing devices like the Portal, and some video software like Teams, already offer cropping-type follow features, but the user experience is everything when you’re shipping software like this to millions of people at once. It will be interesting to see how Center Stage stacks up against the competition when we see it live.

With the ongoing chatter about how the iPad Pro and Mac are converging from a feature-set perspective, I ask how they would characterize an iPad Pro buyer versus a MacBook buyer. Joswiak is quick to respond to this one.

“This is my favorite question because you know, you have one camp of people who believe that the iPad and the Mac are at war with one another right it’s one or the other to the death. And then you have others who are like, no, they’re bringing them together — they’re forcing them into one single platform and there’s a grand conspiracy here,” he says.

“They are at opposite ends of a thought spectrum and reality is neither is correct, right? We pride ourselves in the fact that we work really, really, really hard to have the best products in the respective categories. The Mac is the best personal computer, it just is. Customer satisfaction would indicate that is the case, by a longshot.”

Joswiak points out that the whole PC category is growing, which he says is nice to see, but that Macs are far outgrowing PCs and doing ‘quite well.’ He also notes that the iPad business is still outgrowing the tablet category (while he still refuses to label the iPad a tablet).

“And it’s also the case that it’s not an ‘either or’. The majority of our Mac customers have an iPad. That’s an awesome thing. They don’t have it because they’re replacing their Mac, it’s because they use the right tool at the right time.

“What’s very cool about what [Ternus] and his team have done with iPad Pro is that they’ve created something where that’s still the case for creative professionals too — the hardest to please audience. They’ve given them a tool where they can be equally at home using the Mac for their professional, making-money-with-it kind of work, and now they can pick up an iPad Pro — and they have been for multiple generations now — and do things that, again, are part of how they make money, part of their creative workflow,” says Joswiak. “And that test is exciting. It isn’t one or the other; both of them have a role for these people.”

Since converting over to an iPad Pro as my only portable computer, I’ve been thinking a lot about the multimodal aspects of professional work. And, clearly, Apple has as well given its launch of a Pro Workflows team back in 2018. Workflows have changed massively over the last decade, and obviously the iPhone and an iPad, with their popularization of the direct manipulation paradigm, have had everything to do with that. In the current world we’re in, we’re way past ‘what is this new thing’, and we’re even way past ‘oh cool, this feels normal’ and we’re well into ‘this feels vital, it feels necessary.’ 

“Contrary to some people’s beliefs, we’re never thinking about what we should not do on an iPad because we don’t want to encroach on Mac, or vice versa,” says Ternus. “Our focus is, what is the best way? What is the best iPad we can make, what are the best Macs we can make? Some people are going to work across both of them, some people will kind of lean towards one because it better suits their needs, and that’s, that’s all good.”

If you follow along, you’ll know that Apple studiously refuses to enter into the iPad vs. Mac debate — and in fact likes to place the iPad in a special place in the market that exists unchallenged. Joswiak often says that he doesn’t even like to say the word tablet.

“There’s iPads and tablets, and tablets aren’t very good. iPads are great,” Joswiak says. “We’re always pushing the boundaries with iPad Pro, and that’s what you want leaders to do. Leaders are the ones that push the boundaries leaders are the ones that take this further than has ever been taken before and the XDR display is a great example of that. Who else would you expect to do that other than us. And then once you see it, and once you use it, you won’t wonder, you’ll be glad we did.”

Image Credits: Apple

11 Laptops We’ve Tested—and Love

By Scott Gilbertson
These are our favorite Windows notebooks, MacBooks, and Chromebooks.

Solving the security challenges of public cloud

By Ram Iyer
Nick Lippis Contributor
Nick Lippis is an authority on advanced IP networks and their benefits to business objectives. He is the co-founder and co-chair of ONUG, which sponsors biannual meetings of nearly 1,000 IT business leaders of large enterprises.

Experts believe the data-lake market will hit a massive $31.5 billion in the next six years, a prediction that has led to much concern among large enterprises. Why? Well, an increase in data lakes equals an increase in public cloud consumption — which leads to a soaring amount of notifications, alerts and security events.

Around 56% of enterprise organizations handle more than 1,000 security alerts every day, and 70% of IT professionals have seen the volume of alerts double in the past five years, according to a 2020 Dark Reading report that cited research by Sumo Logic. In fact, many in the ONUG community are seeing on the order of 1 million events per second. Yes, per second, which works out to tens of trillions of events per year.

Now that we are operating in a digitally transformed world, that number only continues to rise, leaving many enterprise IT leaders scrambling to handle these events and asking themselves if there’s a better way.


Compounding matters is the lack of a unified framework for dealing with public cloud security. End users and cloud consumers are forced to deal with increased spend on security infrastructure such as SIEMs, SOAR, security data lakes, tools, maintenance and staff — if they can find them — to operate with an “adequate” security posture.

Public cloud isn’t going away, and neither is the increase in data and security concerns. But enterprise leaders shouldn’t have to continue scrambling to solve these problems. We live in a highly standardized world. Standard operating processes exist for the simplest of tasks, such as elementary school student drop-offs and checking out a company car. But why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?

The ONUG Collaborative had the same question. Security leaders from organizations such as FedEx, Raytheon Technologies, Fidelity, Cigna, Goldman Sachs and others came together to establish the Cloud Security Notification Framework. The goal is to create consistency in how cloud providers report security events, alerts and alarms, so end users receive improved visibility and governance of their data.

Here’s a closer look at the security challenges with public cloud and how CSNF aims to address the issues through a unified framework.

The root of the problem

A few key challenges are sparking the increased number of security alerts in the public cloud:

  1. Rapid digital transformation sparked by COVID-19.
  2. An expanded network edge created by the modern, work-from-home environment.
  3. An increase in the type of security attacks.

The first two challenges go hand in hand. In March of last year, when companies were forced to shut down their offices and shift operations and employees to a remote environment, the wall between cyber threats and safety came crashing down. This wasn’t a huge issue for organizations already operating remotely, but for major enterprises the pain points quickly boiled to the surface.

Numerous leaders have shared with me how security was outweighed by speed. Keeping everything up and running was prioritized over governance. Each employee effectively held a piece of the company’s network edge in their home office. Without basic governance controls in place or training to teach employees how to spot phishing or other threats, the door was left wide open for attacks.

In 2020, the FBI reported its cyber division was receiving nearly 4,000 complaints per day about security incidents, a 400% increase from pre-pandemic figures.

Another security issue is the growing intelligence of cybercriminals. The Dark Reading report said 67% of IT leaders claim a core challenge is a constant change in the type of security threats that must be managed. Cybercriminals are smarter than ever. Phishing emails, entrance through IoT devices and various other avenues have been exploited to tap into an organization’s network. IT teams are constantly forced to adapt and spend valuable hours focused on deciphering what is a concern and what’s not.

Without a unified framework in place, the volume of incidents will spiral out of control.

Where CSNF comes into play

CSNF will prove beneficial for cloud providers and IT consumers alike. Security platforms often require lengthy integration timelines to wrap in all data from siloed sources, including asset inventory, vulnerability assessments, IDS products and past security notifications. These timelines can be expensive and inefficient.

But with a standardized framework like CSNF, the integration process for past notifications is pared down and contextual processes are improved for the entire ecosystem, efficiently reducing spend and saving SecOps and DevSecOps teams time to focus on more strategic tasks like security posture assessment, developing new products and improving existing solutions.

Here’s a closer look at the benefits a standardized approach can create for all parties:

  • End users: CSNF can streamline operations for enterprise cloud consumers, like IT teams, and allows improved visibility and greater control over the security posture of their data. This enhanced sense of protection from improved cloud governance benefits all individuals.
  • Cloud providers: CSNF can eliminate the barrier to entry currently prohibiting an enterprise consumer from using additional services from a specific cloud provider by freeing up added security resources. Also, improved end-user cloud governance encourages more cloud consumption from businesses, increasing provider revenue and providing confidence that their data will be secure.
  • Cloud vendors: Cloud vendors that provide SaaS solutions are spending more on engineering resources to deal with increased security notifications. But with a standardized framework in place, these additional resources would no longer be necessary. Instead of spending money on such specific needs along with labor, vendors could refocus core staff on improving operations and products such as user dashboards and applications.

Working together, all groups can effectively reduce friction from security alerts and create a controlled cloud environment for years to come.

What’s next?

CSNF is in the building phase. Cloud consumers have banded together to compile requirements, and consumers continue to provide guidance as a prototype is established. The cloud providers are now in the process of building the key component of CSNF, its Decorator, which provides an open-source multicloud security reporting translation service.
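The CSNF schema itself is still being defined by the working group, but the idea of a translation layer can be illustrated with a rough sketch: each provider’s native alert fields are mapped onto one common shape before they reach a SIEM or data lake. The field names below are illustrative, not the actual CSNF or provider schemas.

```python
from dataclasses import dataclass

@dataclass
class NormalizedAlert:
    provider: str
    severity: str
    resource: str
    description: str

def normalize(provider: str, raw: dict) -> NormalizedAlert:
    """Map a provider-specific alert payload onto a single common shape."""
    if provider == "aws":      # illustrative field names only
        return NormalizedAlert("aws", str(raw["severity"]), raw["resource"], raw["title"])
    if provider == "azure":    # illustrative field names only
        return NormalizedAlert("azure", raw["properties"]["severity"],
                               raw["properties"]["resourceId"],
                               raw["properties"]["alertDisplayName"])
    raise ValueError(f"unsupported provider: {provider}")
```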

The pandemic created many changes in our world, including new security challenges in the public cloud. Reducing IT noise must be a priority to continue operating with solid governance and efficiency, as it enhances a sense of security, eliminates the need for increased resources and allows for more cloud consumption. ONUG is working to ensure that the industry stays a step ahead of security events in an era of rapid digital transformation.

Passwordstate users warned to ‘reset all passwords’ after attackers plant malicious update

By Zack Whittaker

Click Studios, the Australian software house that develops the enterprise password manager Passwordstate, has warned customers to reset passwords across their organizations after a cyberattack on the password manager.

An email sent by Click Studios to customers said the company had confirmed that attackers had “compromised” the password manager’s software update feature in order to steal customer passwords.

The email, posted on Twitter by Polish news site Niebezpiecznik early on Friday, said the malicious update exposed Passwordstate customers over a 28-hour window between April 20-22. Once installed, the malicious update contacts the attacker’s servers to retrieve malware designed to steal and send the password manager’s contents back to the attackers. The email also told customers to “commence resetting all passwords contained within Passwordstate.”

🚨 The PasswordState password manager has been hacked and customers’ computers infected.

The vendor is notifying victims by email.

This password manager is “enterprise-grade,” so the problem will mostly affect companies… Ouch!

(Tip from Tajemniczy Pedro) pic.twitter.com/PGHhmEKpje

— Niebezpiecznik (@niebezpiecznik) April 23, 2021

Click Studios did not say how the attackers compromised the password manager’s update feature, but emailed customers with a security fix.

The company also said the attacker’s servers were taken down on April 22. But Passwordstate users could still be at risk if the attackers are able to get their infrastructure online again.

Enterprise password managers let employees at companies share passwords and other sensitive secrets across their organization — such as those for network devices, including firewalls and VPNs, shared email accounts, internal databases and social media accounts. Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.

Although affected customers were notified this morning, news of the breach only became widely known several hours later after Danish cybersecurity firm CSIS Group published a blog post with details of the attack.

Click Studios chief executive Mark Sanford did not respond to a request for comment outside Australian business hours.


Window Snyder’s new startup Thistle Technologies raises $2.5M seed to secure IoT devices

By Zack Whittaker

The Internet of Things has a security problem. The past decade has seen wave after wave of new internet-connected devices, from sensors through to webcams and smart home tech, often manufactured in bulk but with little — if any — consideration to security. Worse, many device manufacturers make no effort to fix security flaws, while others simply leave out the software update mechanisms needed to deliver patches altogether.

That sets up an entire swath of insecure and unpatchable devices to fail, destined to be thrown out when they break down or are invariably hacked.

Security veteran Window Snyder thinks there is a better way. Her new startup, Thistle Technologies, is backed with $2.5 million in seed funding from True Ventures with the goal of helping IoT manufacturers reliably and securely deliver software updates to their devices.

Snyder founded Thistle last year, and named it after the flowering plant whose sharp prickles deter animals from eating it. “It’s a defense mechanism,” Snyder told TechCrunch, a name that’s fitting for a defensive technology company. The startup aims to help device manufacturers without the personnel or resources to integrate update mechanisms into their devices’ software so they can receive security updates and better defend against security threats.

“We’re building the means so that they don’t have to do it themselves. They want to spend the time building customer-facing features anyway,” said Snyder. Prior to founding Thistle, Snyder worked in senior cybersecurity positions at Apple, Intel, and Microsoft, and also served as chief security officer at Mozilla, Square, and Fastly.

Thistle lands on the security scene at a time when IoT needs it most. Botnet operators are known to scan the internet for devices with weak default passwords and hijack their internet connections to pummel victims with floods of internet traffic, knocking entire websites and networks offline. In 2016, a record-breaking distributed denial-of-service attack launched by the Mirai botnet on internet infrastructure giant Dyn knocked some of the biggest websites — Shopify, SoundCloud, Spotify, Twitter — offline for hours. Mirai had ensnared thousands of IoT devices into its network at the time of the attack.

Other malicious hackers target IoT devices as a way to get a foot into a victim’s network, allowing them to launch attacks or plant malware from the inside.

Since device manufacturers have done little to solve their security problems among themselves, lawmakers are looking at legislation to curb some of the more egregious security mistakes made by manufacturers, like using default — and often unchangeable — passwords and selling devices with no way to deliver security updates.

California paved the way after passing an IoT security law in 2018, with the U.K. following shortly after in 2019. The U.S. has no federal law governing basic IoT security standards.

Snyder said the push to introduce IoT cybersecurity laws could be “an easy way for folks to get into compliance” without having to hire fleets of security engineers. Having an update mechanism in place also helps to keep IoT devices around for longer — potentially for years longer — simply by being able to push fixes and new features.
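Thistle hasn’t published its mechanism, but one common building block for the kind of update capability Snyder describes is on-device signature verification: the update is applied only if it verifies against a public key baked into the firmware. A minimal sketch using the Python cryptography package follows (write_to_flash is a hypothetical device-specific install step):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def apply_update(update_blob: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Verify an update against the vendor's public key before installing it."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, update_blob)  # raises if the blob was tampered with
    except InvalidSignature:
        return False                # reject: keep running the current firmware
    write_to_flash(update_blob)     # hypothetical device-specific install step
    return True
```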

“To build the infrastructure that’s going to allow you to continue to make those devices resilient and deliver new functionality through software, that’s an incredible opportunity for these device manufacturers. And so I’m building a security infrastructure company to support that security needs,” she said.

With the seed round in the bank, Snyder said the company is focused on hiring device and back-end engineers, product managers, and building new partnerships with device manufacturers.

Phil Black, co-founder of True Ventures — Thistle’s seed round investor — described the company as “an astute and natural next step in security technologies.” He added: “Window has so many of the qualities we look for in founders. She has deep domain expertise, is highly respected within the security community, and she’s driven by a deep passion to evolve her industry.”

AI-driven audio cloning startup gives voice to Einstein chatbot

By Natasha Lomas

You’ll need to prick up your ears for this slice of deepfakery emerging from the wacky world of synthesized media: A digital version of Albert Einstein — with a synthesized voice that’s been (re)created using AI voice cloning technology drawing on audio recordings of the famous scientist’s actual voice.

The startup behind the ‘uncanny valley’ audio deepfake of Einstein is Aflorithmic (whose seed round we covered back in February).

The video engine powering the 3D character rendering components of this ‘digital human’ version of Einstein is the work of another synthesized media company — UneeQ — which is hosting the interactive chatbot version on its website.

Aflorithmic says the ‘digital Einstein’ is intended as a showcase for what will soon be possible with conversational social commerce. Which is a fancy way of saying deepfakes that make like historical figures will probably be trying to sell you pizza soon enough, as industry watchers have presciently warned.

The startup also says it sees educational potential in bringing famous, long deceased figures to interactive ‘life’.

Or, well, an artificial approximation of it — the ‘life’ being purely virtual and Digital Einstein’s voice not being a pure tech-powered clone either; Aflorithmic says it also worked with an actor to do voice modelling for the chatbot (because how else was it going to get Digital Einstein to be able to say words the real deal would never even have dreamt of saying — like, er, ‘blockchain’?). So there’s a bit more than AI artifice going on here too.

“This is the next milestone in showcasing the technology to make conversational social commerce possible,” Aflorithmic’s COO Matt Lehmann told us. “There are still more than one flaws to iron out as well as tech challenges to overcome but overall we think this is a good way to show where this is moving to.”

In a blog post discussing how it recreated Einstein’s voice the startup writes about progress it made on one challenging element associated with the chatbot version — saying it was able to shrink the response time between turning around input text from the computational knowledge engine to its API being able to render a voiced response, down from an initial 12 seconds to less than three (which it dubs “near-real-time”). But it’s still enough of a lag to ensure the bot can’t escape from being a bit tedious.

Laws that protect people’s data and/or image, meanwhile, present a legal and/or ethical challenge to creating such ‘digital clones’ of living humans — at least without asking (and most likely paying) first.

Of course historical figures aren’t around to ask awkward questions about the ethics of their likeness being appropriated for selling stuff (if only the cloning technology itself, at this nascent stage). Though licensing rights may still apply — and do in fact in the case of Einstein.

“His rights lie with the Hebrew University of Jerusalem who is a partner in this project,” says Lehmann, before ‘fessing up to the artist licence element of the Einstein ‘voice cloning’ performance. “In fact, we actually didn’t clone Einstein’s voice as such but found inspiration in original recordings as well as in movies. The voice actor who helped us modelling his voice is a huge admirer himself and his performance captivated the character Einstein very well, we thought.”

Turns out the truth about high-tech ‘lies’ is itself a bit of a layer cake. But with deepfakes it’s not the sophistication of the technology that matters so much as the impact the content has — and that’s always going to depend upon context. And however well (or badly) the faking is done, how people respond to what they see and hear can shift the whole narrative — from a positive story (creative/educational synthesized media) to something deeply negative (alarming, misleading deepfakes).

Concern about the potential for deepfakes to become a tool for disinformation is rising, too, as the tech gets more sophisticated — helping to drive moves toward regulating AI in Europe, where the two main entities responsible for ‘Digital Einstein’ are based.

Earlier this week a leaked draft of an incoming legislative proposal on pan-EU rules for ‘high risk’ applications of artificial intelligence included some sections specifically targeted at deepfakes.

Under the plan, lawmakers look set to propose “harmonised transparency rules” for AI systems that are designed to interact with humans and those used to generate or manipulate image, audio or video content. So a future Digital Einstein chatbot (or sales pitch) is likely to need to unequivocally declare itself artificial before it starts faking it — to avoid the need for Internet users to have to apply a virtual Voight-Kampff test.

For now, though, the erudite-sounding interactive Digital Einstein chatbot still has enough of a lag to give the game away. Its makers are also clearly labelling their creation in the hopes of selling their vision of AI-driven social commerce to other businesses.

Enterprise security attackers are one password away from your worst day

By Ram Iyer
Ralph Pisani Contributor
Ralph Pisani is president at Exabeam and has 20 years of experience in sales and channel and business development at organizations like Imperva and SecureComputing (acquired by McAfee).

If the definition of insanity is doing the same thing over and over and expecting a different outcome, then one might say the cybersecurity industry is insane.

Criminals continue to innovate with highly sophisticated attack methods, but many security organizations still use the same technological approaches they did 10 years ago. The world has changed, but cybersecurity hasn’t kept pace.

Distributed systems, with people and data everywhere, mean the perimeter has disappeared. And the hackers couldn’t be more excited. The same technology approaches, like correlation rules, manual processes, and reviewing alerts in isolation, do little more than remedy symptoms while hardly addressing the underlying problem.

Credentials are supposed to be the front gates of the castle, but as the SOC is failing to change, it is failing to detect. The cybersecurity industry must rethink its strategy to analyze how credentials are used and stop breaches before they become bigger problems.

It’s all about the credentials

Compromised credentials have long been a primary attack vector, but the problem has only grown worse in the mid-pandemic world. The acceleration of remote work has increased the attack footprint as organizations struggle to secure their network while employees work from unsecured connections. In April 2020, the FBI said that cybersecurity attacks reported to the organization grew by 400% compared to before the pandemic. Just imagine where that number is now in early 2021.

It only takes one compromised account for an attacker to enter the active directory and create their own credentials. In such an environment, all user accounts should be considered as potentially compromised.

Nearly all of the hundreds of breach reports I’ve read have involved compromised credentials. More than 80% of hacking breaches are now enabled by brute force or the use of lost or stolen credentials, according to the 2020 Data Breach Investigations Report. The most effective and commonly used strategy is the credential stuffing attack, where digital adversaries break in, exploit the environment, then move laterally to gain higher-level access.
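What does analyzing how credentials are used look like in practice? One simple behavioral signal, sketched below, is many distinct accounts failing login from the same source in a short window, a pattern typical of credential stuffing. This is illustrative only; production detection combines many such signals, and the threshold here is an assumption to tune per environment.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 20                 # distinct accounts per source IP; illustrative value
failures = defaultdict(list)   # ip -> [(timestamp, account), ...]

def record_failed_login(ip: str, account: str, when: datetime) -> bool:
    """Return True if this IP now looks like a credential-stuffing source."""
    events = [(t, a) for t, a in failures[ip] if when - t <= WINDOW]
    events.append((when, account))
    failures[ip] = events
    return len({a for _, a in events}) >= THRESHOLD
```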

Medtronic partners with cybersecurity startup Sternum to protect its pacemakers from hackers

By Marcella McCarthy

If you think cyberattacks are scary, what if those attacks were directed at your cardiac pacemaker? Medtronic, a medical device company, has been in hot water over the last couple of years because its pacemakers were getting hacked through their internet-based software updating systems. But in a new partnership with Sternum, an IoT cybersecurity startup based in Israel, Medtronic has focused on resolving the issue.

The problem was not with the medical devices themselves, but with the remote systems used to update the devices. Medtronic’s previous solution was to disconnect the devices from the internet, which in and of itself can cause other issues to arise.

“Medtronic was looking for a long-term solution that can help them with future developments,” said Natali Tshuva, Sternum’s founder and CEO. The company has already secured about 100,000 Medtronic devices.

Sternum’s solution allows medical devices to protect themselves in real-time. 

“There’s this endless race against vulnerability, so when a company discovers a vulnerability, they need to issue an update, but updating can be very difficult in the medical space, and until the update happens, the devices are vulnerable,” Tshuva told TechCrunch. “Therefore, we created an autonomous security that operates from within the device that can protect it without the need to update and patch vulnerabilities.”

However, it is easier to protect new devices than to go back and protect legacy devices. Over the years hackers have gotten more and more sophisticated, so medical device companies have had to figure out how to protect the devices that are already out there.  

 “The market already has millions — perhaps billions — of medical devices connected, and that could be a security and management nightmare,” Tshuva added.

In addition to potentially doing harm to an individual, hackers have been taking advantage of device vulnerability as the gateway of choice into a hospital’s network, possibly causing a breach that can affect many more people. Tshuva explained that hospital networks are secured from the inside out, but devices that connect to the networks but are not protected can create a way in.

In fact, health systems have been known to experience the most data breaches out of any sector, accounting for 79% of all reported breaches in 2020. And in the first 10 months of last year, we saw a 45% increase in cyberattacks on health systems, according to data by Health IT Security.

In addition to Sternum’s partnership with Medtronic, the company also launched an IoT platform this week that allows “devices to protect themselves, even when they are not connected to the internet,” Tshuva said.

Sternum, which raised about $10 million to date, also offers cybersecurity for IoT devices outside of healthcare, and according to Tshuva, the company focuses on areas that are “mission-critical.” Examples include railroad infrastructure sensors and management systems, and power grids.

Tshuva, who grew up in Israel, holds a master’s in computer science and worked in the Israel Defense Forces’ Unit 8200 — similar to the U.S. National Security Agency. She said she always wanted to make an impact in the medical field. “I looked to combine the medical space with my life, and I realized I could have an impact on remote care devices,” she said.

Grocery startup Mercato spilled years of data, but didn’t tell its customers

By Zack Whittaker

A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.

A person with knowledge of the incident told TechCrunch that it happened in January, after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.

The company fixed the data spill, but has not yet alerted its customers.

Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.

TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names and email addresses, home addresses, and order details. Each record also included the IP address of the device the customer used to place the order.

The data set also included the personal data and order details of company executives.

It’s not clear how the security lapse happened (storage buckets on Amazon’s cloud are private by default), or when the company learned of the exposure.
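For context on why such exposures require someone to loosen the defaults: a bucket only becomes publicly readable if its access-block settings or policy are changed. Below is a minimal sketch in Python using the boto3 SDK (the bucket name is hypothetical, and this is a general illustration rather than a description of Mercato’s setup) of how a team might audit and re-enable a bucket’s public-access protections.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "example-orders-backup"  # hypothetical bucket name

    # Check whether the four "block public access" protections are enabled.
    try:
        config = s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"{BUCKET}: some public-access protections are disabled: {config}")
    except ClientError:
        # No configuration at all means the bucket relies only on ACLs and policies.
        print(f"{BUCKET}: no public access block configured")

    # Re-enable all four protections (safe to run repeatedly).
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )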

Companies are required to disclose data breaches or security lapses to state attorneys general, but no notices have been published in the states where they are required by law, such as California. The data set contained records on more than 1,800 California residents, more than three times the number needed to trigger a mandatory disclosure under the state’s data breach notification law.

It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.

In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.

“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.


Know something, say something. Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more

Gay dating site Manhunt hacked, thousands of accounts stolen

By Zack Whittaker

Manhunt, a gay dating app that claims to have 6 million male members, has confirmed it was hit by a data breach in February after a hacker gained access to the company’s accounts database.

In a notice filed with the Washington attorney general’s office, Manhunt said the hacker “gained access to a database that stored account credentials for Manhunt users,” and “downloaded the usernames, email addresses and passwords for a subset of our users in early February 2021.”

The notice did not say how the passwords were scrambled, if at all, to prevent them from being read by humans. Passwords scrambled using weak algorithms can sometimes be decoded into plain text, allowing malicious hackers to break into users’ accounts.
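To illustrate the distinction, here is a short Python sketch (using the third-party bcrypt package; a general example, not a statement about how Manhunt stores passwords) of the kind of slow, salted hashing that makes a stolen credentials database far harder to reverse than one protected with a fast hash such as MD5.

    import bcrypt

    password = b"correct horse battery staple"

    # bcrypt applies a per-password random salt and a tunable work factor,
    # so guessing passwords against a stolen database is deliberately slow.
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    # Store only the hash; at login, compare the attempt against it.
    assert bcrypt.checkpw(password, hashed)
    assert not bcrypt.checkpw(b"wrong guess", hashed)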

Following the breach, Manhunt force-reset account passwords and began alerting users in mid-March. Manhunt did not say what percentage of its users had their data stolen or how the data breach happened, but said that more than 7,700 Washington state residents were affected.

The company’s attorneys did not reply to an email requesting comment.

But questions remain about how Manhunt handled the breach. In March, the company tweeted that, “At this time, all Manhunt users are required to update their password to ensure it meets the updated password requirements.” The tweet did not say that user accounts had been stolen.

Manhunt was launched in 2001 by Online-Buddies Inc., which also offered gay dating app Jack’d before it was sold to Perry Street in 2019 for an undisclosed sum. Just months before the sale, Jack’d had a security lapse that exposed users’ private photos and location data.

Dating sites store some of the most sensitive information on their users, and are frequently a target of malicious hackers. In 2015, Ashley Madison, a dating site that encouraged users to have an affair, was hacked, exposing names, and postal and email addresses. Several people died by suicide after the stolen data was posted online. A year later, dating site AdultFriendFinder was hacked, exposing more than 400 million user accounts.

In 2018, same-sex dating app Grindr made headlines for sharing users’ HIV status with data analytics firms.

In other cases, poor security — in some cases none at all — led to data spills involving some of the most sensitive data. In 2019, Rela, a popular dating app for gay and queer women in China, left a server unsecured with no password, allowing anyone to access sensitive data — including sexual orientation and geolocation — on more than 5 million app users. Months later, Jewish dating app JCrush exposed around 200,000 user records.



Know something, say something. Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more

PlexTrac raises $10M Series A round for its collaboration-centric security platform

By Frederic Lardinois

PlexTrac, a Boise, ID-based security service that aims to provide a unified workflow automation platform for red and blue teams, today announced that it has raised a $10 million Series A funding round led by Noro-Moseley Partners and Madrona Venture Group. StageDot0 ventures also participated in this round, which the company plans to use to build out its team and grow its platform.

With this new round, the company, which was founded in 2018, has now raised a total of $11 million, with StageDot0 leading its 2019 seed round.

PlexTrac CEO and President Dan DeCloss

“I have been on both sides of the fence, the specialist who comes in and does the assessment, produces that 300-page report and then comes back a year later to find that some of the critical issues had not been addressed at all.  And not because the organization didn’t want to but because it was lost in that report,” PlexTrac CEO and President Dan DeCloss said. “These are some of the most critical findings for an entity from a risk perspective. By making it collaborative, both red and blue teams are united on the same goal we all share, to protect the network and assets.”

With an extensive career in security that included time as a penetration tester for Veracode and the Mayo Clinic, as well as senior information security advisor for Anthem, among other roles, DeCloss has quite a bit of first-hand experience that led him to found PlexTrac. Specifically, he believes that it’s important to break down the wall between offense-focused red teams and defense-centric blue teams.

Image Credits: PlexTrac


“Historically there has been more of the cloak and dagger relationship but those walls are breaking down – and rightfully so, there isn’t that much of that mentality today – people recognize they are on the same mission whether they are internal security team or an external team,” he said. “With the PlexTrac platform the red and blue teams have a better view into the other teams’ tactics and techniques – and it makes the whole process into an educational exercise for everyone.”

At its core, PlexTrac makes it easier for security teams to produce their reports, freeing them up to focus on ‘real’ security work. To do so, the service integrates with most of the popular scanners, like Qualys and Veracode, as well as tools like ServiceNow and Jira, to help teams coordinate their workflows. All the data flows into real-time reports that then help teams monitor their security posture. The service also features a dedicated tool, WriteupsDB, for managing reusable write-ups to help teams deliver consistent reports for a variety of audiences.
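PlexTrac hasn’t published what its ticketing integrations look like under the hood, but as a rough illustration of the general pattern, here is a hypothetical Python sketch that files a finding as an issue through Jira’s standard REST API (the instance URL, credentials, project key and finding are all placeholders).

    import requests

    JIRA_URL = "https://example.atlassian.net"          # placeholder instance
    AUTH = ("security-bot@example.com", "api-token")    # placeholder credentials

    finding = {
        "title": "Outdated TLS configuration on vpn.example.com",
        "severity": "High",
        "description": "TLS 1.0 still enabled; see scanner output for details.",
    }

    # File the finding as an issue so the blue team can track remediation.
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        auth=AUTH,
        json={
            "fields": {
                "project": {"key": "SEC"},        # placeholder project key
                "issuetype": {"name": "Bug"},
                "summary": f"[{finding['severity']}] {finding['title']}",
                "description": finding["description"],
            }
        },
        timeout=10,
    )
    resp.raise_for_status()
    print("Created issue:", resp.json()["key"])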

“Current tools for planning, executing, and reporting on security testing workflows are either nonexistent (manual reporting, spreadsheets, documents, etc…) or exist as largely incomplete features of legacy platforms,” Madrona’s S. Somasegar and Chris Picardo write in today’s announcement. “The pain point for security teams is real and PlexTrac is able to streamline their workflows, save time, and greatly improve output quality. These teams are on the leading edge of attempting to find and exploit vulnerabilities (red teams) and defend and/or eliminate threats (blue teams).”


Risk startup LogicGate confirms data breach

By Zack Whittaker

Risk and compliance startup LogicGate has confirmed a data breach. But unless you’re a customer, you probably didn’t hear about it.

An email sent by LogicGate to customers earlier this month said that on February 23 an unauthorized third party obtained credentials to its Amazon Web Services-hosted cloud storage servers, which store customer backup files for Risk Cloud, the company’s flagship platform. Risk Cloud helps companies identify and manage their risk and compliance with data protection and security standards; LogicGate says it can also help find security vulnerabilities before they are exploited by malicious hackers.

The credentials “appear to have been used by an unauthorized third party to decrypt particular files stored in AWS S3 buckets in the LogicGate Risk Cloud backup environment,” the email read.

“Only data uploaded to your Risk Cloud environment on or prior to February 23, 2021, would have been included in that backup file. Further, to the extent you have stored attachments in the Risk Cloud, we did not identify decrypt events associated with such attachments,” it added.
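LogicGate’s wording suggests it reviewed audit logs for decryption activity on the backups. As an illustration only (not a description of LogicGate’s actual environment), here is a short Python sketch with boto3 showing how KMS Decrypt events can be pulled from AWS CloudTrail’s event history for review.

    from datetime import datetime, timedelta

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Pull recent KMS Decrypt events; a real investigation would also filter
    # by the compromised credentials and the S3 objects involved.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "Decrypt"}],
        StartTime=datetime.utcnow() - timedelta(days=30),
        EndTime=datetime.utcnow(),
    )

    for event in events.get("Events", []):
        print(event["EventTime"], event.get("Username"), event["EventName"])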

LogicGate did not say how the AWS credentials were compromised. An email update sent by LogicGate last Friday said the company anticipates finding the root cause of the incident by this week.

But LogicGate has not made any public statement about the breach. It’s also not clear if LogicGate contacted all of its customers or only those whose data was accessed. LogicGate counts Capco, SoFi, and Blue Cross Blue Shield of Kansas City as customers.

We sent LogicGate a list of questions, including how many customers were affected and whether the company has alerted U.S. state authorities as required by state data breach notification laws. When reached, LogicGate chief executive Matt Kunkel confirmed the breach but declined to comment, citing an ongoing investigation. “We believe it’s best to communicate developments directly to our customers,” he said.

Kunkel would not say, when asked, if the attacker also exfiltrated the decrypted customer data from its servers.

Data breach notification laws vary by state, but companies that fail to report security incidents can face heavy fines. Under Europe’s GDPR rules, companies can face fines of up to 4% of their annual turnover for violations.

In December, LogicGate secured $8.75 million in fresh funding, bringing its total raised to more than $40 million since it launched in 2015.


Are you a LogicGate customer? Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more
