FreshRSS

Yesterday — January 14th 2021: Your RSS feeds

Swimm raises $5.7M to help teams document their code

By Frederic Lardinois

Most developers don’t enjoy writing documentation for their code, and that makes life quite a bit harder when a new team member tries to get started on a company’s codebase. And even when there is documentation or there are in-line comments in the source code, they often aren’t kept up to date and, over time, that information becomes close to irrelevant. Swimm, which today announced that it has raised a $5.7 million seed round, aims to automate as much of this process as possible once the initial documentation has been written, by automatically updating it as changes are made.

The funding round was led by Pitango First, with TAU Ventures, Axon Ventures and Fundfire also investing in this round, together with a group of angel investors that include the founder of developer platform Snyk.

Image Credits: Swimm

Swimm’s marketing mostly focuses on helping teams speed up onboarding, but it’s probably a useful tool for any team. Using Swimm, you can create not only standard — but auto-updated — documentation, but also walkthroughs and tutorials. Using its code browser, you can also easily find all of the documentation that relates to a given file.

The nifty part here is that while the tool can’t write the documentation for you, Swimm will automatically update any code examples in the documentation for you — or alert you when there is a major change that needs a manual update. Ideally, this will reduce the drift between the codebase and documentation.
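Swimm hasn’t published how its sync engine works under the hood, but the drift check it describes can be sketched in a few lines. The following minimal, hypothetical illustration (file names, line ranges and the stored hash are all invented) flags documentation whose referenced code has changed since the doc was written:

```python
# Hypothetical sketch: detect when a code snippet embedded in documentation
# has drifted from the source file it was copied from. This is NOT Swimm's
# actual algorithm, just an illustration of the general idea.
import hashlib
from pathlib import Path

def snippet_hash(path: str, start: int, end: int) -> str:
    """Hash lines [start, end] of a source file (1-indexed, inclusive)."""
    lines = Path(path).read_text().splitlines()[start - 1:end]
    return hashlib.sha256("\n".join(lines).encode()).hexdigest()

# Imagine each doc stores the file, line range, and hash captured when it was written.
doc_references = [
    {"doc": "onboarding.md", "file": "src/auth.py", "start": 10, "end": 24,
     "hash": "0f3a..."},  # hash recorded at documentation time (placeholder)
]

for ref in doc_references:
    current = snippet_hash(ref["file"], ref["start"], ref["end"])
    if current != ref["hash"]:
        print(f'{ref["doc"]}: snippet from {ref["file"]} has changed; '
              "update the doc or re-sync it automatically.")
```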

Image Credits: Swimm

The founding team, Oren Toledano (CEO), Omer Rosenbaum (CTO), Gilad Navot (Chief Product Officer) and Tom Ahi Dror (Chief Business Officer), started working on this problem based on their experience while running Israel Tech Challenge, a coding bootcamp inspired by the training program used by the Israeli Defence Forces’ 8200 Intelligence Unit.

“We met with many companies in Israel and in the US to understand the engineering onboarding process,” Toledano told me. “And we felt that it was kind of broken — and many times, we heard the sentence: ‘we throw them to the water, and they either sink or swim.'” (That’s also why the company is called Swimm). Companies, he argues, often don’t have a way to train new employees on their code base, simply because it’s impossible for them to do so effectively without good documentation.

“The larger the company is, the more scattered the knowledge on the code base is — and a lot of this knowledge leaves the company when developers leave,” he noted.

With Swimm, a company could ideally not just offer those new hires access to tutorials that are based on the current code base, but also an easier entryway to start working on the production codebase as well.

Image Credits: Swimm

One thing worth noting here is that Swimm runs locally on a developer’s machine. In part, that’s because this approach reduces security risk, since no code is ever sent to Swimm’s servers. Indeed, the Swimm team tells me that some of its early customers are security companies. It also makes it easier for new users to get started with Swimm.

Toledano tells me that while the team mostly focused on building the core of the product and working with its early design partners (and its first set of paying customers), the plan for the next few months is to bring on more users after launching the product’s beta.

“Software development is now at the core of every modern business. Swimm provides a structured, contextual and transparent way to improve developer productivity,” said Yair Cassuto, a partner at Pitango First who is joining Swimm‘s board. “Swimm’s solution allows for rapid and insightful onboarding on any codebase. This applies across the developer life cycle: from onboarding to project transitions, adopting new open source capabilities and even offboarding.”                                                                                   

Before yesterday: Your RSS feeds

Twitter bans former Trump adviser Michael Flynn and other QAnon figures

By Taylor Hatmaker

Twitter took action against a pair of President Trump’s close associates Friday, banning them from the platform as part of a broader effort to contain the QAnon conspiracy movement.

Trump’s first national security adviser Michael Flynn and former Trump campaign lawyer Sidney Powell were both suspended under Twitter’s “coordinated harmful activity” policy. Ron Watkins, who previously ran 8kun (formerly 8chan), also saw his account removed.

“We’ve been clear that we will take strong enforcement action on behavior that has the potential to lead to offline harm, and given the renewed potential for violence surrounding this type of behavior in the coming days, we will permanently suspend accounts that are solely dedicated to sharing QAnon content,” a Twitter spokesperson told TechCrunch.

In recent months, each figure has promoted QAnon, an elaborate constellation of conspiracy theories purporting that President Trump is waging a secret battle against a cabal of political enemies who engage in child sex trafficking, among other baseless claims.

As part of Trump’s post-election legal team, Powell became a heroic figure to the QAnon crowd, which believes that a master plan being orchestrated behind the scenes will give the president a second term. Powell also amplified the Dominion voting machine conspiracy, which claims devices from that company switched votes from Trump to Biden. Dominion Voting Systems is suing Powell for $1.3 billion over the false claims, arguing that her “viral disinformation campaign” has damaged its business.

Flynn embraced the QAnon movement last year, reciting an oath and saying the popular QAnon motto “where we go one, we go all!” Flynn has also been actively involved in Trump’s quest to overturn the results of the November election. In what was then a shocking scandal, Flynn pleaded guilty to lying to the FBI in 2017. Last year, the Justice Department dropped the federal case against Flynn and Trump eventually issued his former adviser a pardon.

Of the three, Watkins is the furthest from Trump and the closest to the heart of QAnon. As the administrator of QAnon’s central online hub, Watkins played a key role in QAnon’s explosion into the mainstream over the last few years. Beyond the ranks of believers, some QAnon observers believe that Ron Watkins or his father Jim Watkins are the mysterious “Q” figure, perpetuating the elaborate scheme by doling out cryptic bread crumbs for QAnon adherents.

Twitter first began restricting QAnon content in mid-2020, citing similar concerns over real-world harm. The new enforcement plan goes much further, and Twitter’s new commitment to permanently suspending any account dedicated to QAnon stands to have a much bigger impact.

We’ve been clear that we will take strong enforcement action on behavior that has the potential to lead to offline harm. In line with this approach, this week we are taking further action on so-called ‘QAnon’ activity across the service.

— Twitter Safety (@TwitterSafety) July 22, 2020

Facebook will turn all US political advertising off again after Georgia runoffs

By Taylor Hatmaker

Georgia is the only state in the U.S. right now where Facebook allows political ads to run, but after Tuesday’s polls close that’s set to change.

According to Facebook’s site detailing changes to its ad policies and a story from Axios, the company will no longer allow political and social issue ads anywhere in the country, Georgia included, beginning early tomorrow.

Facebook told TechCrunch that the decision to toggle political ads in Georgia off again brings that state in line with its current “nationwide pause” on social issue, election and politics ads. A Facebook spokesperson declined to say when political ads will again be allowed or if permanently blocking them from the platform is under consideration.

The company first hit pause on those ad categories November 4 as a precaution designed to reduce misinformation in the U.S. presidential election. On December 16, the company re-allowed political ads in Georgia, inviting eager campaigns to pay to get their messages in front of Facebook users. It appears that some politicians, Sen. Ted Cruz (R-TX) among them, pounced on Facebook’s Georgia loophole to raise money for themselves in spite of restrictions.

When political ads came flooding back in for Georgians, they edged out mainstream news sources, according to new reporting from The Markup. While that result is fairly intuitive, it does underline the outsized influence of targeted political advertising in Facebook’s information ecosystem.

Plenty of politicians and political groups are likely eager to get back to fundraising on Facebook. The company’s decision to keep the pause in place suggests that it’s still evaluating how — and perhaps if — it wants to handle political ads in the future. But Facebook also might be waiting for the storm to pass in light of the misinformation that plagued November’s drawn-out process of calculating election results.

It’s also worth noting that Facebook’s head of advertising integrity Rob Leathern left the company at the end of December, calling his team’s work on the 2020 U.S. election the “culmination of a huge amount of effort over several years.” Leathern helped sculpt the company’s policies around political advertising — decisions that were often controversial due to the prevalence of paid misinformation sweeping through the platform throughout 2020.

Because they will decide control of the Senate, the unusual pair of runoff races in a state that just flipped blue are high-stakes for both political parties. With a Democratic Senate, the Biden administration’s ambitious plans for things like COVID relief and the climate crisis will have a much better shot at becoming a reality. And for Republicans looking to stymie the president-elect’s policy priorities, extended control of the Senate would put a powerful barrier in Biden’s way.


FBI, NSA say ongoing hacks at US federal agencies ‘likely Russian in origin’

By Zack Whittaker

The U.S. government says hackers “likely Russian in origin” are responsible for breaching the networks of at least 10 U.S. federal agencies and several major tech companies, including FireEye and Microsoft.

In a joint statement published Tuesday, the FBI, the NSA, and Homeland Security’s cybersecurity advisory unit CISA said that the government was “still working to understand the scope” of the breach, but that it is likely an “intelligence gathering effort.”

The agencies investigating the espionage campaign said the compromises are “ongoing.”

The statement didn’t name the breached agencies, but the Treasury, State, and the Department of Energy are among those reported to be affected.

“This is a serious compromise that will require a sustained and dedicated effort to remediate,” the statement said. “The [joint agency effort] will continue taking every necessary action to investigate, remediate, and share information with our partners and the American people.”

News of the widespread espionage campaign emerged in early December after cybersecurity giant FireEye, normally the first company that cyberattack victims will call, discovered its own network had been breached. Soon after it was reported that several government agencies had also been infiltrated.

All of the victims are customers of U.S. software firm SolarWinds, whose Orion network management tools are used across the U.S. government and Fortune 500 companies. FireEye said that hackers broke into SolarWinds’ network and pushed a tainted software update to its customers, allowing the hackers to easily break into any one of the thousands of companies and agencies that installed the backdoored update.

Some 18,000 customers downloaded the backdoored software update, but the government’s joint statement said that it believes only a “much smaller number have been compromised by follow-on activity on their systems.”

Several news outlets have previously reported that the hacks were carried out by a Russian intelligence group known as APT 29, or Cozy Bear, which has been linked to several espionage-driven attacks, including attempting to steal coronavirus vaccine research.

Tuesday’s joint statement marks the first time the government has acknowledged the likely culprit behind the campaign.

Russia had previously denied involvement with the hacks.

Cyber insurance startup At-Bay raises $34M Series C, adds M12 as a new investor

By Zack Whittaker

Cybersecurity insurance startup At-Bay has raised $34 million in its Series C round, the company announced Tuesday.

The round was led by Qumra Capital, a new investor. Microsoft’s venture fund M12, also a new investor, participated in the round alongside Acrew Capital, Khosla Ventures, Lightspeed Venture Partners, Munich Re Ventures, and Israeli entrepreneur Shlomo Kramer, who co-founded security firms Check Point and Imperva.

It’s a huge move for the company, which only closed its Series B in February.

The cybersecurity insurance market is expected to become a $23 billion industry by 2025, driven in part by an explosion in connected devices and new regulatory regimes under Europe’s GDPR and more recently California’s state-wide privacy law. But where traditional insurance companies have struggled to acquire the acumen needed to accommodate the growing demand for cybersecurity insurance, startups like At-Bay have filled the space.

At-Bay was founded in 2016 by Rotem Iram and Roman Itskovich, and is headquartered in Mountain View. In the past year, the company has tripled its headcount and now has offices in New York, Atlanta, Chicago, Portland, Los Angeles, and Dallas.

The company differentiates itself from the pack by monitoring the perimeter of its customers’ networks and alerting them to security risks or vulnerabilities. By proactively looking for potential security issues, At-Bay helps its customers to prevent network intrusions and data breaches before they happen, avoiding losses for the company while reducing insurance payouts — a win-win for both the insurance provider and its customers.
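At-Bay hasn’t described its scanning stack in detail, so as a rough illustration, here is one tiny perimeter check of the kind such a monitor might run: asking whether a customer domain’s TLS certificate is close to expiry. The host name is hypothetical and this is not At-Bay’s actual tooling.

```python
# Illustrative only: one small "perimeter" check of the sort an insurer's
# scanner might run -- is a customer domain's TLS certificate close to expiry?
import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2021 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    days = days_until_cert_expiry("example.com")  # hypothetical customer domain
    if days < 14:
        print(f"Alert: TLS certificate expires in {days} days")
```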

“This modern approach to risk management is not only driving strong demand for our insurance, but also enabling us to improve our products and minimize loss to our insureds,” said Iram.

It’s a bet that’s paying off: the company says its claims frequency is less than half the industry average. Lior Litwak, a partner at M12, said he sees “immense potential” in the company for melding cyber risk and analysis with cyber insurance.

Now with its Series C in the bank, the company plans to grow its team and launch new products, while improving its automated underwriting platform that allows companies to get instant cyber insurance quotes.

Twitter now supports hardware security keys for iPhones and Android

By Zack Whittaker

Twitter said Wednesday that users whose accounts are protected with a hardware security key can now log in from their iPhone or Android device.

The social media giant rolled out support for hardware security keys in 2018, allowing users to add a physical security barrier to their accounts in place of other two-factor authentication options, like a text message or a code generated from an app.

Security keys are small enough to fit on a keyring but make certain kinds of account hacks near impossible by requiring a user to plug in the key when they log in. That means hackers on the other side of the planet can’t easily break into your account, even if they have your username and password.

But technical limitations meant that accounts protected with security keys could only log in from a computer, and not a mobile device.

Twitter solved that headache in part by switching to the WebAuthn protocol last year, which paved the way for bringing hardware security key support to more devices and browsers.
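This is not Twitter’s implementation, but the core of a WebAuthn-style login is a challenge-and-signature check: the server stores only the key’s public half, issues a fresh random challenge at login, and verifies the signature the hardware key returns. A minimal sketch of that check, using the cryptography package (a real WebAuthn flow also verifies the origin, relying-party ID hash, signature counters and attestation):

```python
# Minimal sketch of the server-side check behind security-key login
# (WebAuthn-style challenge/response). Not Twitter's implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the key pair lives on the hardware key; the server stores
# only the public key. Here we generate one locally just to run the sketch.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a random challenge...
challenge = os.urandom(32)

# ...the hardware key signs it once the user touches the key...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Login allowed: response came from the registered key.")
except InvalidSignature:
    print("Login denied.")
```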

Now anyone with a security key set up on their Twitter account can use that same key to log in from their mobile device, so long as the key is supported. (A ton of security keys exist today that work across different devices, like YubiKeys and Google’s Titan key.)

Twitter — and other companies — have long recommended that high-profile users, such as journalists, politicians, and government officials, use security keys to prevent some of the more sophisticated attacks.

Earlier this year Twitter rolled out hardware security keys to its own staff to prevent a repeat of its July cyberattack that saw hackers break into the company’s internal network and abuse an “admin” tool, which the hackers then used to hijack high-profile accounts to spread a cryptocurrency scam.

In the wake of the attack, Twitter hired Rinki Sethi as its new chief information security officer, and famed hacker Peiter Zatko, known as Mudge, as the company’s head of security.

Google acquires Actifio to step into the area of data management and business continuity

By Ingrid Lunden

In the same week that Amazon is holding its big AWS confab, Google is also announcing a move to raise its own enterprise game with Google Cloud. Today the company announced that it is acquiring Actifio, a data management company that helps companies maintain data continuity so they are better prepared in the event of a security breach or other disaster recovery need. The deal squares Google up as a competitor against the likes of Rubrik, another big player in data continuity.

The terms of the deal were not disclosed in the announcement; we’re looking and will update as we learn more. Notably, when the company was valued at over $1 billion in a funding round back in 2014, it had said it was preparing for an IPO (which never happened). PitchBook data estimated its value at $1.3 billion in 2018, but earlier this year it appeared to be raising money at about a 60% discount to its recent valuation, according to data provided to us by Prime Unicorn Index.

The company was also involved in a patent infringement suit against Rubrik, which it filed earlier this year.

It had raised around $461 million, with investors including Andreessen Horowitz, TCV, Tiger, 83 North, and more.

With Actifio, Google is moving into what is one of the key investment areas for enterprises in recent years. The growth of increasingly sophisticated security breaches, coupled with stronger data protection regulation, has given a new priority to the task of holding and using business data more responsibly, and business continuity is a cornerstone of that.

Google describes the startup as a “leader in backup and disaster recovery” providing virtual copies of data that can be managed and updated for storage, testing, and more. The fact that it covers data in a number of environments — including SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL, and MySQL, virtual machines (VMs) in VMware, Hyper-V, physical servers, and of course Google Compute Engine — means that it also gives Google a strong play to work with companies in hybrid and multi-vendor environments rather than just all-Google shops.

“We know that customers have many options when it comes to cloud solutions, including backup and DR, and the acquisition of Actifio will help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios,” writes Brad Calder, VP, engineering, in the blog post. “In addition, we are committed to supporting our backup and DR technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs.”

The company will join Google Cloud.

“We’re excited to join Google Cloud and build on the success we’ve had as partners over the past four years,” said Ash Ashutosh, CEO at Actifio, in a statement. “Backup and recovery is essential to enterprise cloud adoption and, together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries.”

The Supreme Court will hear its first big CFAA case

By Zack Whittaker

The Supreme Court will hear arguments on Monday in a case that could lead to sweeping changes to America’s controversial computer hacking laws — and affect how millions use their computers and access online services.

The Computer Fraud and Abuse Act was signed into federal law in 1986 and predates the modern internet as we know it, but governs to this day what constitutes hacking — or “unauthorized” access to a computer or network. The controversial law was designed to prosecute hackers, but has been dubbed the “worst law” in the technology law books by critics who say its outdated and vague language fails to protect good-faith hackers who find and disclose security vulnerabilities.

At the center of the case is Nathan Van Buren, a former police sergeant in Georgia. Van Buren used his access to a police license plate database to search for an acquaintance in exchange for cash. Van Buren was caught, and prosecuted on two counts: accepting a kickback for accessing the police database, and violating the CFAA. The first conviction was overturned, but the CFAA conviction was upheld.

Van Buren may have been allowed to access the database by way of his police work, but whether he exceeded his access remains the key legal question.

Orin Kerr, a law professor at the University of California, Berkeley, said Van Buren v. United States was an “ideal case” for the Supreme Court to take up. “The question couldn’t be presented more cleanly,” he argued in a blog post in April.

The Supreme Court will try to clarify the decades-old law by deciding what the law means by “unauthorized” access. But that’s not a simple answer in itself.

“The Supreme Court’s opinion in this case could decide whether millions of ordinary Americans are committing a federal crime whenever they engage in computer activities that, while common, don’t comport with an online service or employer’s terms of use,” said Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford University’s law school. (Pfefferkorn’s colleague Jeff Fisher is representing Van Buren at the Supreme Court.)

How the Supreme Court will determine what “unauthorized” means is anybody’s guess. The court could define unauthorized access anywhere from violating a site’s terms of service to logging into a system that a person has no user account for.

Pfefferkorn said a broad reading of the CFAA could criminalize anything from lying on a dating profile to sharing the password to a streaming service or using a work computer for personal purposes in violation of an employer’s policies.

But the Supreme Court’s eventual ruling could also have broad ramifications on good-faith hackers and security researchers, who purposefully break systems in order to make them more secure. Hackers and security researchers have for decades operated in a legal grey area because the law as written exposes their work to prosecution, even if the goal is to improve cybersecurity.

Tech companies have for years encouraged hackers to privately reach out with security bugs. In return, the companies fix their systems and pay the hackers for their work. Mozilla, Dropbox, and Tesla are among the few companies that have gone a step further by promising not to sue good-faith hackers under the CFAA. But not all companies welcome the scrutiny; some have bucked the trend by threatening to sue researchers over their findings, and in some cases actively launching legal action to prevent unflattering headlines.

Security researchers are no stranger to legal threats, but a decision by the Supreme Court that rules against Van Buren could have a chilling effect on their work, and drive vulnerability disclosure underground.

“If there are potential criminal (and civil) consequences for violating a computerized system’s usage policy, that would empower the owners of such systems to prohibit bona fide security research and to silence researchers from disclosing any vulnerabilities they find in those systems,” said Pfefferkorn. “Even inadvertently coloring outside the lines of a set of bug bounty rules could expose a researcher to liability.”

“The Court now has the chance to resolve the ambiguity over the law’s scope and make it safer for security researchers to do their badly-needed work by narrowly construing the CFAA,” said Pfefferkorn. “We can ill afford to scare off people who want to improve cybersecurity.”

The Supreme Court will likely rule on the case later this year, or early next.


How to Have Productive Conversations About Election Misinfo

By Whitney Phillips
A holiday guide to navigating the deep swamp of polluted information.

Decrypted: Apple and Facebook’s privacy feud, Twitter hires Mudge, mysterious zero-days

By Zack Whittaker

Trump’s election denialism saw him retaliate in a way that isn’t just putting the remainder of his presidency in jeopardy; it’s already putting the next administration in harm’s way.

In a stunning display of retaliation, Trump fired CISA director Chris Krebs last week after declaring that there was “no evidence that any voting system deleted or lost votes, changed votes or was in any way compromised,” a direct contradiction to the conspiracy-fueled fever dreams of the president who repeatedly claimed, without evidence, that the election had been hijacked by the Democrats. CISA is left distracted by disarray, with multiple senior leaders leaving their posts — some walked, some were pushed — only for the next likely chief to stumble before he even starts because of concerns with his security clearance.

Until yesterday, Biden’s presidential transition team was stuck in cybersecurity purgatory because the incumbent administration refused to trigger the law that grants the incoming team access to government resources, including cybersecurity protections. That’s left the incoming president exposed to ongoing cyber threats, all while being shut out from classified briefings that describe those threats in detail.

As Biden builds his team, Silicon Valley is also gearing up for a change in government — and temperament. But don’t expect too much of the backlash to change. Antitrust allegations, privacy violations and net neutrality remain hot-button issues, and tech titans resorting to cheap “charm offensives” are likely to face the music under the Biden administration — whether they like it or not.

Here’s more from the week.


THE BIG PICTURE

Apple and Facebook spar over privacy — again

Apple and Facebook are back in the ring, fighting over which company is a bigger existential threat to privacy. In a letter to a privacy rights group, Apple said its new anti-tracking feature, which will give users the choice of blocking in-app tracking, will launch next year. The move is largely expected to wreak havoc on the online advertising industry and data brokers.

Given an explicit choice between being tracked and not, which is what the feature will offer, most users are expected to decline.

Apple’s letter specifically called out Facebook for showing a “disregard for user privacy.” Facebook, which made more than 98% of its global revenue last year from advertising, took its own potshot back at Apple, claiming the iPhone maker was “using their dominant market position to self-preference their own data collection, while making it nearly impossible for their competitors to use the same data.”

Facebook details AI advances in catching misinformation and hate speech

By Devin Coldewey

Facebook’s battle against misinformation will never be over at this rate, but that doesn’t mean the company has given up. On the contrary, it is only by dint of constant improvement to its automated systems that it is able to keep itself even remotely free of hate speech and misinformation. CTO Mike Schroepfer touted the latest of those improvements today in a series of posts.

The changes are to the AI-adjacent systems the social network uses to nip the likes of spam, misleading news items and racial slurs in the bud — that is to say, before anyone, including Facebook’s own content moderators, sees those items.

One improvement is in the language analysis systems Facebook employs to detect things like hate speech. This is one area, Schroepfer explained, where the company has to be extremely careful. False positives in the ad space (like that something seems scammy) are low-risk, but false positives taking down posts because they’re mistaken for hate speech can be serious issues. So it’s important to be very confident when making that determination.

Unfortunately hate speech and adjacent content can be really subtle. Even something that seems indisputably racist can be inverted or subverted by a single word. Creating machine learning systems that reflect the complexity and variety of language is a task that requires exponentially increasing amounts of computing resources.

Linformer (“linear”+”transformer”) is the new tool Facebook created to manage the ballooning resource cost of scanning billions of posts a day. It approximates the central attention mechanism of transformer-based language models rather than calculating it exactly, but with few trade-offs in performance. (If you understood all that, I congratulate you.)

That translates to better language understanding but only marginally higher computation costs, meaning they don’t have to, say, use a worse model for a first wave and then only run the expensive model on suspicious items.
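As a rough illustration of the idea (not Facebook’s production code, and with made-up dimensions), here is a toy numpy sketch of Linformer-style attention: learned projections shrink the keys and values along the sequence axis to a fixed length k, so the score matrix is n-by-k instead of n-by-n.

```python
# Toy numpy sketch of the Linformer idea: project keys and values down to a
# fixed length k so attention costs O(n*k) instead of O(n^2). Illustrative
# only -- dimensions and projections are made up.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n, d, k = 1024, 64, 128            # sequence length, head dim, projected length
Q = np.random.randn(n, d)
K = np.random.randn(n, d)
V = np.random.randn(n, d)

# Standard attention: an n x n score matrix.
full = softmax(Q @ K.T / np.sqrt(d)) @ V

# Linformer-style: learned projections E, F (random here) shrink K and V
# along the sequence axis before the score matrix is formed (n x k).
E = np.random.randn(k, n) / np.sqrt(n)
F = np.random.randn(k, n) / np.sqrt(n)
approx = softmax(Q @ (E @ K).T / np.sqrt(d)) @ (F @ V)

print(full.shape, approx.shape)    # both (n, d), but the second avoids the n x n matrix
```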

The company’s researchers are also working on the slightly less well-shaped problem of understanding the interaction of text, images and text in images. Fake screenshots of TV and websites, memes and other things often found in posts are amazingly difficult for computers to understand but are a huge source of information. What’s more, a single changed word can completely invert their meaning while almost all the visual details remain the same.

An example of two instances of the same misinformation with slightly different visual appearance. Aware of the left one, the system caught the right one. Image Credits: Facebook

Facebook is getting better at catching these in their infinite variety, Schroepfer said. It’s still very difficult, he said, but they’ve made huge strides in catching, for instance, COVID-19 misinformation images like fake news reports that masks cause cancer, even when the people posting them manipulate and change their look.

Deploying and maintaining these models is also complex, necessitating a constant dance of offline prototyping, deployment, online testing and bringing that feedback to a new prototype. The Reinforcement Integrity Optimizer takes a new approach, monitoring the effectiveness of new models on live content, relaying that information to the training system constantly rather than in, say, weekly reports.

Determining whether Facebook can be said to be successful is not easy. On one hand, the statistics they publish paint a rosy picture of increasing proportions of hate speech and misinformation taken down, with millions more pieces of hate speech, violent images and child exploitation content removed versus last quarter.

I asked Schroepfer how Facebook can track or express its success or failure more accurately, since increases in the numbers might be due to either improved mechanisms for removal or simply larger volumes of that content being taken down at the same rate.

“The baseline changes all the time, so you have to look at all these metrics together. Our north star in the long run is prevalence,” he explained, referring to the actual frequency of users encountering a given type of content rather than whether it was preemptively removed or some such. “If I take down a thousand pieces of content that people were never going to see anyway, it doesn’t matter. If I take down the one piece of content that was about to go viral, that’s a massive success.”

Facebook now includes hate speech prevalence in its quarterly “community standards enforcement report,” and it defines it as follows:

Prevalence estimates the percentage of times people see violating content on our platform. We calculate hate speech prevalence by selecting a sample of content seen on Facebook and then labeling how much of it violates our hate speech policies. Because hate speech depends on language and cultural context, we send these representative samples to reviewers across different languages and regions.

And for its first measure of this new statistic:

From July 2020 to September 2020, hate speech prevalence was 0.10% to 0.11%. In other words, out of every 10,000 views of content on Facebook, 10 to 11 included hate speech.

If this number is not misleading, it implies that roughly one in every thousand content views on Facebook right now includes hate speech. That seems rather high. (I’ve asked Facebook for a bit more clarity on this number.)
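The arithmetic behind a prevalence figure like that is simple to sketch; the sample sizes below are invented, but they show how a sampled, reviewer-labeled rate translates into the “10 to 11 in every 10,000 views” framing.

```python
# Back-of-the-envelope sketch of a prevalence estimate: sample content views,
# have reviewers label them, and report the violating share. Numbers invented.
sampled_views = 2_000_000          # hypothetical sample of content views
labeled_violating = 2_200          # views reviewers marked as hate speech

prevalence = labeled_violating / sampled_views
print(f"{prevalence:.2%}")                                   # ~0.11%
print(f"~{prevalence * 10_000:.0f} in every 10,000 views")   # ~11
```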

One must question the completeness of these estimates as well — reports from war-torn areas like Ethiopia suggest that they are rife with hate speech that is inadequately detected, reported and taken down. And of course the eruption of white supremacist and nationalist militia content and groups on Facebook has been well-documented.

Schroepfer emphasized that his role is very squarely in the “implementation” side of things and that questions of policy, staffing and other important parts of the social network’s vast operations are more or less out of his jurisdiction. Frankly that’s a bit of a disappointing punt by the CTO of one of the most powerful companies in the world, who seems to take these issues seriously. But one also wonders whether, had he and his teams not been so assiduous in pursuing technical remedies like the above, Facebook might have been completely snowed under with hate and fakery rather than being simply unavoidably shot through with it.

Autodesk CEO Andrew Anagnost explains the strategy behind acquiring Spacemaker

By Steve O'Hear

Autodesk, the U.S. publicly listed software and services company that targets engineering and design industries, acquired Norway’s Spacemaker this week. The startup has developed AI-supported software for urban development, something Autodesk CEO Andrew Anagnost broadly calls generative design.

The price of the acquisition is $240 million in a mostly cash deal. Spacemaker’s VC backers included European firms Atomico and Northzone, which co-led the company’s $25 million Series A round in 2019. Other investors on the cap table include Nordic real estate innovator NREP, Nordic property developer OBOS, U.K. real estate technology fund Round Hill Ventures and Norway’s Construct Venture.

In an interview with TechCrunch, Anagnost shared more on Autodesk’s strategy since it transformed into a cloud-first company and what attracted him to the 115-person Spacemaker team. We also delved more into Spacemaker’s mission to augment the work of humans and not only speed up the urban development design and planning process but also improve outcomes, including around sustainability and quality of life for the people who will ultimately live in the resulting spaces.

I also asked whether Spacemaker sold out too early, and why U.S.-headquartered Autodesk acquired a startup based in Norway over numerous competitors closer to home. What follows is a transcript of our Zoom call, lightly edited for length and clarity.

TechCrunch: Let’s start high-level. What is the strategy behind Autodesk acquiring Spacemaker?

Andrew Anagnost: I think Autodesk, for a while … has had a very clearly stated strategy about using the power of the cloud; cheap compute in the cloud and machine learning/artificial intelligence to kind of evolve and change the way people design things. This is something strategically we’ve been working toward for quite a while both with the products we make internally, with the capabilities we roll out that are more cutting edge and with also our initiative when we look at companies we’re interested in acquiring.

As you probably know, Spacemaker really stands out in terms of our space, the architecture space, and the engineering and owner space, in terms of applying cloud computing, artificial intelligence, data science, to really helping people explore multiple options and come up with better decisions. So it’s completely in line with the strategy that we had. We’ve been looking at them for over a year in terms of whether or not they were the right kind of company for us.

Culturally, they’re the right company. Vision and strategy-wise, they’re the right company. Also, talent-wise, they’re the right company. They really do stand out. They’ve built a real, practical, usable application that helps a segment of our population use machine learning to really create better outcomes in a critical area, which is urban redevelopment and development.

So it’s totally aligned with what we’re trying to do. It’s not only a platform for the product they do today — they have a great product that’s getting increasing adoption — but we also see the team playing an important role in the future of where we’re taking our applications. We actually see what Spacemaker has done reaching closer and closer to what Revit does [an existing Autodesk product]. Having those two applications collaborate more closely together to evolve the way people assess not only these urban planning designs that they’re focused on right now, but also in the future, other types of building projects and building analysis and building option exploration.

How did you discover Spacemaker? I mean, I’m guessing you probably looked at other companies in the space.

We’ve been watching this space for a while; the application that Spacemaker has built we would characterize it, from our terminology, as generative design for urban planning, meaning the machine generating options and option explorations for urban planning type applications, and it overlaps both architecture and owners.

Construction tech startups are poised to shake up a $1.3 trillion industry

By Walter Thompson
Allison Xu Contributor
Allison Xu is an investor at Bain Capital Ventures, where she focuses on investments in the fintech and property tech sectors.

In the wake of COVID-19 this spring, construction sites across the nation emptied out alongside neighboring restaurants, retail stores, offices and other commercial establishments. Debates ensued over whether the construction industry’s seven million employees should be considered “essential,” while regulations continued to shift on the operation of job sites. Meanwhile, project demand steadily shrank.

Amidst the chaos, construction firms faced an existential question: How will they survive? This question is as relevant today as it was in April. As one of the least-digitized sectors of our economy, construction is ripe for technology disruption.

Construction is a massive, $1.3 trillion industry in the United States — a complex ecosystem of lenders, owners, developers, architects, general contractors, subcontractors and more. While each construction project has a combination of these key roles, the construction process itself is highly variable depending on the asset type. Roughly 41% of domestic construction value is in residential property, 25% in commercial property and 34% in industrial projects. Because each asset type, and even subassets within these classes, tends to involve a different set of stakeholders and processes, most construction firms specialize in one or a few asset groups.

Regardless of asset type, there are four key challenges across construction projects:

High fragmentation: Beyond the developer, architect, engineer and general contractor, projects could involve hundreds of subcontractors with specialized expertise. As the scope of the project increases, coordination among parties becomes increasingly difficult and decision-making slows.

Poor communication: With so many different parties both in the field and in the office, it is often difficult to relay information from one party to the next. Miscommunication and poor project data account for 48% of all rework on U.S. construction job sites, costing the industry over $31 billion annually, according to FMI research.

Lack of data transparency: Manual data collection and data entry are still common on construction sites. On top of being laborious and error-prone, manual processes leave real-time data extremely limited, so decision-making is often based on outdated information.

Skilled labor shortage: The construction workforce is aging faster than the younger population that joins it, resulting in a shortage of labor particularly for skilled trades that may require years of training and certifications. The shortage drives up labor costs across the industry, particularly in the residential sector, which traditionally sees higher attrition due to its more variable project demand.

A construction tech boom

Too many of the key processes involved in managing multimillion-dollar construction projects are carried out on Excel or even with pen and paper. The lack of tech sophistication on construction sites materially contributes to job delays, missed budgets and increased job site safety risk. Technology startups are emerging to help solve these problems.

Here are the main categories in which we’re seeing construction tech startups emerge.

1. Project conception

  • How it works today: During a project’s conception, asset owners and/or developers develop site proposals and may work with lenders to manage the project financing.
  • Key challenges: Processes for managing construction loans are cumbersome and time intensive today given the complexity of the loan draw process.
  • How technology can address challenges: Design software such as Spacemaker AI can help developers create site proposals, while construction loan financing software such as Built Technologies and Rabbet are helping lenders and developers manage the draw process in a more efficient manner.

2. Design and engineering

  • How it works today: Developers work with design, architect and engineering teams to turn ideas into blueprints.
  • Key challenges: Because the design and engineering teams are often siloed from the contractors, it’s hard for designers and engineers to know the real-time impact of their decisions on the ultimate cost or timing of the project. Lack of coordination with construction teams can lead to time-consuming changes.
  • How technology can address challenges: Of all the elements of the construction process, the design and engineering process itself is the most technologically sophisticated today, with relatively high adoption of software like Autodesk to help with design documentation, specification development, quality assurance and more. Autodesk is moving downstream to offer a suite of solutions that includes construction management, providing more connectivity between the teams.

Databricks launches SQL Analytics

By Frederic Lardinois

AI and data analytics company Databricks today announced the launch of SQL Analytics, a new service that makes it easier for data analysts to run their standard SQL queries directly on data lakes. And with that, enterprises can now easily connect their business intelligence tools like Tableau and Microsoft’s Power BI to these data repositories as well.

SQL Analytics will be available in public preview on November 18.

In many ways, SQL Analytics is the product Databricks has long been looking to build and the one that brings its concept of a ‘lake house’ to life. It combines the performance of a data warehouse, where you store data after it has already been transformed and cleaned, with a data lake, where you store all of your data in its raw form. Data in a data lake, a concept that Databricks’ co-founder and CEO Ali Ghodsi has long championed, is typically only transformed when it gets used. That makes data lakes cheaper, but also a bit harder to handle for users.

Image Credits: Databricks

“We’ve been saying Unified Data Analytics, which means unify the data with the analytics. So data processing and analytics, those two should be merged. But no one picked that up,” Ghodsi told me. But ‘lake house’ caught on as a term.

“Databricks has always offered data science, machine learning. We’ve talked about that for years. And with Spark, we provide the data processing capability. You can do [extract, transform, load]. That has always been possible. SQL Analytics enables you to now do the data warehousing workloads directly, and concretely, the business intelligence and reporting workloads, directly on the data lake.”

The general idea here is that with just one copy of the data, you can enable both traditional data analyst use cases (think BI) and the data science workloads (think AI) Databricks was already known for. Ideally, that makes both use cases cheaper and simpler.

The service sits on top of an optimized version of Databricks’ open-source Delta Lake storage layer, which enables it to complete queries quickly. Delta Lake also provides auto-scaling endpoints to keep query latency consistent, even under high loads.
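To make the “one copy of the data” idea concrete, here is a hedged PySpark sketch: the same Delta Lake table answers an analyst-style SQL query and feeds a data-science DataFrame. Paths, table and column names are hypothetical, and this is plain Spark plus Delta rather than the SQL Analytics service itself.

```python
# Hedged sketch of the "one copy of the data" idea: a single Delta Lake table
# serves both a BI-style SQL query and a data-science workload. Paths and
# names are hypothetical; this is not the SQL Analytics service itself.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("lakehouse-sketch")
         .getOrCreate())  # assumes the Delta Lake package is on the classpath

# Register the raw data sitting in the lake as a queryable table.
spark.sql("CREATE TABLE IF NOT EXISTS events USING DELTA LOCATION 's3://my-lake/events'")

# Analyst-style SQL, as a BI tool would issue it.
daily = spark.sql("""
    SELECT date, count(*) AS views
    FROM events
    GROUP BY date
    ORDER BY date
""")
daily.show()

# Data-science-style access to the very same files, as a DataFrame.
df = spark.read.format("delta").load("s3://my-lake/events")
features = df.groupBy("user_id").count()
```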

While data analysts can query these data sets directly, using standard SQL, the company also built a set of connectors to BI tools. Its BI partners include Tableau, Qlik, Looker and Thoughtspot, as well as ingest partners like Fivetran, Fishtown Analytics, Talend and Matillion.

Image Credits: Databricks

“Now more than ever, organizations need a data strategy that enables speed and agility to be adaptable,” said Francois Ajenstat, Chief Product Officer at Tableau. “As organizations are rapidly moving their data to the cloud, we’re seeing growing interest in doing analytics on the data lake. The introduction of SQL Analytics delivers an entirely new experience for customers to tap into insights from massive volumes of data with the performance, reliability and scale they need.”

In a demo, Ghodsi showed me what the new SQL Analytics workspace looks like. It’s essentially a stripped-down version of the standard code-heavy experience that Databricks users are familiar with. Unsurprisingly, SQL Analytics provides a more graphical experience that focuses more on visualizations and not Python code.

While there are already some data analysts on the Databricks platform, this obviously opens up a large new market for the company — something that would surely bolster its plans for an IPO next year.

Europe urges e-commerce platforms to share data in fight against coronavirus scams

By Natasha Lomas

European lawmakers are pressing major e-commerce and media platforms to share more data with each other as a tool to fight rogue traders who are targeting consumers with coronavirus scams.

After the pandemic spread to the West, internet platforms were flooded with local ads for PPE of unknown and/or dubious quality, along with other questionable coronavirus offers — even after some of the firms banned such advertising.

The concern here is not only consumers being ripped off but the real risk of harm if people buy a product that does not offer the protection claimed against exposure to the virus or even get sold a bogus coronavirus “cure” when none in fact exists.

In a statement today, Didier Reynders, the EU commissioner for justice, said: “We know from our earlier experience that fraudsters see this pandemic as an opportunity to trick European consumers. We also know that working with the major online platforms is vital to protect consumers from their illegal practices. Today I encouraged the platforms to join forces and engage in a peer-to-peer exchange to further strengthen their response. We need to be even more agile during the second wave currently hitting Europe.”

The Commission said Reynders met with 11 online platforms today — including Amazon, Alibaba/AliExpress, eBay, Facebook, Google, Microsoft/Bing, Rakuten and (TechCrunch’s parent entity) Verizon Media/Yahoo — to discuss new trends and business practices linked to the pandemic and push the tech companies to do more to head off a new wave of COVID-19 scams.

In March this year EU Member States’ consumer protection authorities adopted a common position on the issue. The Commission and a pan-EU network of consumer protection enforcers have been in regular contact with the 11 platforms since then to push for a coordinated response to the threat posed by coronavirus scams.

The Commission claims the action has resulted in the platforms reporting the removal of “hundreds of millions” of illegal offers and ads. It also says they have confirmed what it describes as “a steady decline” in new coronavirus-related listings, without offering more detailed data.

In Europe, tighter regulations over what e-commerce platforms sell are coming down the pipe.

Next month regional lawmakers are set to unveil a package of legislation that will propose updates to existing e-commerce rules and aim to increase platforms’ legal responsibilities, including around illegal content and dangerous products.

In a speech last week, Commission EVP Margrethe Vestager, who heads up the bloc’s digital policy, said the Digital Services Act (DSA) will require platforms to take more responsibility for dealing with illegal content and dangerous products, including by standardizing processes for reporting illegal content and dealing with reports and complaints related to content.

A second legislative package that’s also due next month — the Digital Markets Act — will introduce additional rules for a sub-set of platforms considered to hold a dominant market position. This could include requirements that they make data available to rivals, with the aim of fostering competition in digital markets.

MEPs have also pushed for a “know your business customer” principle to be included in the DSA.

Simultaneously, the Commission has been pressing for social media platforms to open up about what it described in June as a coronavirus “infodemic” — in a bid to crack down on COVID-19-related disinformation.

Today the Commission gave an update on actions taken in the month of September by Facebook, Google, Microsoft, Twitter and TikTok to combat coronavirus disinformation — publishing its third set of monitoring reports. Thierry Breton, commissioner for the internal market, said more needs to be done there too.

“Viral spreading of disinformation related to the pandemic puts our citizens’ health and safety at risk. We need even stronger collaboration with online platforms in the coming weeks to fight disinformation effectively,” he said in a statement. 

The platforms are signatories of the EU’s (non-legally binding) Code of Practice on disinformation.

Legally binding transparency rules for platforms on tackling content such as illegal hate speech look set to be part of the DSA package. Though it remains to be seen how the fuzzier issue of “harmful content” (such as disinformation attached to a public health crisis) will be tackled.

A European Democracy Action Plan to address the disinformation issue is also slated before the end of the year.

In a pointed remark accompanying the Commission’s latest monitoring reports today, Vera Jourová, VP for values and transparency, said: “Platforms must step up their efforts to become more transparent and accountable. We need a better framework to help them do the right thing.”

Contrast launches its security observability platform

By Frederic Lardinois

Contrast, a developer-centric application security company with customers that include Liberty Mutual Insurance, NTT Data, AXA and Bandwidth, today announced the launch of its security observability platform. The idea here is to offer developers a single pane of glass to manage an application’s security across its lifecycle, combined with real-time analysis and reporting, as well as remediation tools.

“Every line of code that’s happening increases the risk to a business if it’s not secure,” said Contrast CEO and chairman Alan Nauman. “We’re focused on securing all that code that businesses are writing for both automation and digital transformation.”

Over the course of the last few years, the well-funded company, which raised a $65 million Series D round last year, launched numerous security tools that cover a wide range of use cases from automated penetration testing to cloud application security and now DevOps — and this new platform is meant to tie them all together.

DevOps, the company argues, is really what necessitates a platform like this, given that developers now push more code into production than ever — and the onus of ensuring that this code is secure now often falls on them as well.

Image Credits: Contrast

Traditionally, Nauman argues, security services focused on analyzing the code itself and on looking at traffic.

“We think at the application layer, the same principles of observability apply that have been used in the IT infrastructure space,” he said. “Specifically, we do instrumentation of the code and we weave security sensors into the code as it’s being developed and are looking for vulnerabilities and observing running code. […] Our view is: the world’s most complex systems are best when instrumented, whether it’s an airplane, a spacecraft, an IT infrastructure. We think the same is true for code. So our breakthrough is applying instrumentation to code and observing for security vulnerabilities.”
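Contrast’s sensors are proprietary, but the instrumentation idea Nauman describes can be illustrated with a toy example: wrap a function at runtime so that every call is observed and obviously suspicious input is flagged before it reaches the database. The pattern below is purely illustrative and is not Contrast’s technology.

```python
# Toy illustration of "security sensors woven into the code": a decorator
# that inspects arguments of an instrumented function at runtime and flags
# obviously suspicious SQL. Not Contrast's technology, just the general idea.
import functools
import re

SUSPICIOUS = re.compile(r"('|--|;|\bOR\b\s+1=1)", re.IGNORECASE)

def sql_sensor(func):
    @functools.wraps(func)
    def wrapper(query, *args, **kwargs):
        if SUSPICIOUS.search(query):
            print(f"[sensor] possible injection in call to {func.__name__}: {query!r}")
        return func(query, *args, **kwargs)
    return wrapper

@sql_sensor
def run_query(query):
    # Placeholder for a real database call.
    return f"executed: {query}"

run_query("SELECT * FROM users WHERE name = '' OR 1=1 --")
```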

With this new platform, Contrast is aggregating information from its existing systems into a single dashboard. And while Contrast observes the code throughout its lifecycle, it also scans for vulnerabilities whenever a developer checks code into the CI/CD pipeline, thanks to integrations with most of the standard tools like Jenkins. It’s worth noting that the service also scans for vulnerabilities in open-source libraries. Once deployed, Contrast’s new platform keeps an eye on the data that runs through the various APIs and systems the application connects to and scans for potential security issues there as well.

The platform currently supports all of the large cloud providers like AWS, Azure and Google Cloud, and languages and frameworks like Java, Python, .NET and Ruby.

Image Credits: Contrast

Adobe brings its misinformation-fighting content attribution tool to the Photoshop beta

By Taylor Hatmaker

Adobe’s work on a chain of custody that could link online images back to their origins is inching closer to becoming a reality. The prototype, part of the Content Authenticity Initiative (CAI), will soon appear in the beta of Photoshop, Adobe’s ubiquitous image editing software.

Adobe says the preview of the new tool will be available to users in the beta release of Photoshop and Behance over the next few weeks. The company calls the CAI implementation “an early version” of the open standard that it will continue to hone.

The project has a few different applications. It aims to create a more robust means of keeping creators’ names attached to the content they create. But the most compelling use case for CAI would see the tool become a “tamper-proof” industry standard aimed at images used to spread misinformation.

Adobe describes the project’s mission as an effort to “increase trust and transparency online with an industry-wide attribution framework that empowers creatives and consumers alike.” The result is a technical solution that could (eventually) limit the spread of deepfakes and other kinds of misleading online content.

“… Eventually you might imagine a social feed or a news site that would allow you to filter out things that are likely to be inauthentic,” Adobe’s director of CAI, Andy Parsons, said earlier this year. “But the CAI steers well clear of making judgment calls — we’re just about providing that layer of transparency and verifiable data.”

The idea sounds like a spin on EXIF data, the embedded opt-in metadata that attaches information like lens type and location to an image. But Adobe says the new attribution standard will be less “brittle” and much more difficult to manipulate. The end result would have more in common with digital fingerprinting systems like the ones that identify child exploitation online than it would with EXIF.
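Adobe hasn’t published the low-level format here, but the mechanism such tamper-evident attribution relies on is straightforward to sketch: hash the image bytes, sign the hash together with a provenance claim, and let anyone verify both later. The sketch below uses the cryptography package; the claim fields and image bytes are invented, and this shows the general idea rather than the CAI specification.

```python
# Hedged sketch of tamper-evident attribution: hash the image bytes, sign the
# hash together with a provenance claim, and verify later. General mechanism
# only, not Adobe's CAI format; file contents and claim fields are invented.
import hashlib
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

creator_key = ec.generate_private_key(ec.SECP256R1())   # held by the creator

def make_claim(image_bytes: bytes, author: str) -> dict:
    claim = {"author": author, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = creator_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return {"claim": claim, "signature": signature}

def verify_claim(image_bytes: bytes, record: dict, public_key) -> bool:
    if hashlib.sha256(image_bytes).hexdigest() != record["claim"]["sha256"]:
        return False                       # pixels changed since signing
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    try:
        public_key.verify(record["signature"], payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False                       # the claim itself was tampered with

image = b"...image bytes..."               # placeholder
record = make_claim(image, "Jane Photographer")
print(verify_claim(image, record, creator_key.public_key()))            # True
print(verify_claim(image + b"edit", record, creator_key.public_key()))  # False
```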

“We believe attribution will create a virtuous cycle,” Allen said. “The more creators distribute content with proper attribution, the more consumers will expect and use that information to make judgment calls, thus minimizing the influence of bad actors and deceptive content.”

Splunk acquires Plumbr and Rigor to build out its observability platform

By Frederic Lardinois

Data platform Splunk today announced that it has acquired two startups, Plumbr and Rigor, to build out its new Observability Suite, which is also launching today. Plumbr is an application performance monitoring service, while Rigor focuses on digital experience monitoring, using synthetic monitoring and optimization tools to help businesses optimize their end-user experiences. Both of these acquisitions complement the technology and expertise Splunk acquired when it bought SignalFx for over $1 billion last year.

Splunk did not disclose the price of these acquisitions, but Estonia-based Plumbr had raised about $1.8 million, while Atlanta-based Rigor raised a debt round earlier this year.

When Splunk acquired SignalFx, it said it did so in order to become a leader in observability and APM. As Splunk CTO Tim Tully told me, the idea here now is to accelerate this process.

Image Credits: Splunk

“Because a lot of our users and our customers are moving to the cloud really, really quickly, the way that they monitor [their] applications changed because they’ve gone to serverless and microservices a ton,” he said. “So we entered that space with those acquisitions, we quickly folded them together with these next two acquisitions. What Plumbr and Rigor do is really fill out more of the portfolio.”

He noted that Splunk was especially interested in Plumbr’s bytecode implementation and its real-user monitoring capabilities, and Rigor’s synthetics capabilities around digital experience monitoring (DEM). “By filling in those two pieces of the portfolio, it gives us a really amazing set of solutions because DEM was the missing piece for our APM strategy,” Tully explained.

Image Credits: Splunk

With the launch of its Observability Suite, Splunk is now pulling together a lot of these capabilities into a single product — which also features a new design that makes it stand apart from the rest of Splunk’s tools. It combines logs, metrics, traces, digital experience, user monitoring, synthetics and more.

“At Yelp, our engineers are responsible for hundreds of different microservices, all aimed at helping people find and connect with great local businesses,” said Chris Gordon, Technical Lead at Yelp, where his team has been testing the new suite. “Our Production Observability team collaborates with Engineering to improve visibility into the performance of key services and infrastructure. Splunk gives us the tools to empower engineers to monitor their own services as they rapidly ship code, while also providing the observability team centralized control and visibility over usage to ensure we’re using our monitoring resources as efficiently as possible.”

The Election Will Bring a Hurricane of Misinformation

By Whitney Phillips
Here’s how to prepare yourself for the disaster online.

We Need to Talk About Talking About QAnon

By Whitney Phillips
Describing and debunking the phenomenon is not enough. We need to explain why and how it came to be.