
The Supreme Court will hear its first big CFAA case

By Zack Whittaker

The Supreme Court will hear arguments on Monday in a case that could lead to sweeping changes to America’s controversial computer hacking laws — and affect how millions of people use their computers and access online services.

The Computer Fraud and Abuse Act was signed into federal law in 1986 and predates the modern internet as we know it, but it governs to this day what constitutes hacking — or “unauthorized” access to a computer or network. The controversial law was designed to prosecute hackers, but critics have dubbed it the “worst law” on the technology law books, arguing that its outdated and vague language fails to protect good-faith hackers who find and disclose security vulnerabilities.

At the center of the case is Nathan Van Buren, a former police sergeant in Georgia. Van Buren used his access to a police license plate database to search for an acquaintance in exchange for cash. Van Buren was caught, and prosecuted on two counts: accepting a kickback for accessing the police database, and violating the CFAA. The first conviction was overturned, but the CFAA conviction was upheld.

Van Buren may have been allowed to access the database by way of his police work, but whether he exceeded his access remains the key legal question.

Orin Kerr, a law professor at the University of California, Berkeley, said Van Buren v. United States was an “ideal case” for the Supreme Court to take up. “The question couldn’t be presented more cleanly,” he argued in a blog post in April.

The Supreme Court will try to clarify the decades-old law by deciding what it means by “unauthorized” access. But even that question has no simple answer.

“The Supreme Court’s opinion in this case could decide whether millions of ordinary Americans are committing a federal crime whenever they engage in computer activities that, while common, don’t comport with an online service or employer’s terms of use,” said Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford University’s law school. (Pfefferkorn’s colleague Jeff Fisher is representing Van Buren at the Supreme Court.)

How the Supreme Court will determine what “unauthorized” means is anybody’s guess. The court could define unauthorized access anywhere from violating a site’s terms of service to logging into a system that a person has no user account for.

Pfefferkorn said a broad reading of the CFAA could criminalize anything from lying on a dating profile to sharing the password to a streaming service to using a work computer for personal tasks in violation of an employer’s policies.

But the Supreme Court’s eventual ruling could also have broad ramifications on good-faith hackers and security researchers, who purposefully break systems in order to make them more secure. Hackers and security researchers have for decades operated in a legal grey area because the law as written exposes their work to prosecution, even if the goal is to improve cybersecurity.

Tech companies have for years encouraged hackers to privately reach out with security bugs. In return, the companies fix their systems and pay the hackers for their work. Mozilla, Dropbox and Tesla are among the few companies that have gone a step further by promising not to sue good-faith hackers under the CFAA. But not all companies welcome the scrutiny; some have bucked the trend by threatening to sue researchers over their findings, and in some cases have launched legal action to prevent unflattering headlines.

Security researchers are no stranger to legal threats, but a decision by the Supreme Court that rules against Van Buren could have a chilling effect on their work, and drive vulnerability disclosure underground.

“If there are potential criminal (and civil) consequences for violating a computerized system’s usage policy, that would empower the owners of such systems to prohibit bona fide security research and to silence researchers from disclosing any vulnerabilities they find in those systems,” said Pfefferkorn. “Even inadvertently coloring outside the lines of a set of bug bounty rules could expose a researcher to liability.”

“The Court now has the chance to resolve the ambiguity over the law’s scope and make it safer for security researchers to do their badly-needed work by narrowly construing the CFAA,” said Pfefferkorn. “We can ill afford to scare off people who want to improve cybersecurity.”

The Supreme Court will likely rule on the case later this year, or early next.


How to Have Productive Conversations About Election Misinfo

By Whitney Phillips
A holiday guide to navigating the deep swamp of polluted information.

Decrypted: Apple and Facebook’s privacy feud, Twitter hires Mudge, mysterious zero-days

By Zack Whittaker

Trump’s election denialism saw him retaliate in a way that isn’t just putting the remainder of his presidency in jeopardy; it’s already putting the next administration in harm’s way.

In a stunning display of retaliation, Trump fired CISA director Chris Krebs last week after Krebs declared that there was “no evidence that any voting system deleted or lost votes, changed votes or was in any way compromised,” a direct contradiction of the conspiracy-fueled fever dreams of a president who repeatedly claimed, without evidence, that the election had been hijacked by the Democrats. CISA is left in disarray, with multiple senior leaders leaving their posts — some walked, some were pushed — and the next likely chief stumbling before he even starts because of concerns over his security clearance.

Until yesterday, Biden’s presidential transition team was stuck in cybersecurity purgatory because the incumbent administration refused to trigger the law that grants the incoming team access to government resources, including cybersecurity protections. That left the incoming president exposed to ongoing cyber threats while shut out of the classified briefings that describe those threats in detail.

As Biden builds his team, Silicon Valley is also gearing up for a change in government — and temperament. But don’t expect too much of the backlash to change. Antitrust allegations, privacy violations and net neutrality remain hot-button issues, and tech titans resorting to cheap “charm offensives” are likely to face the music under the Biden administration — whether they like it or not.

Here’s more from the week.


THE BIG PICTURE

Apple and Facebook spar over privacy — again

Apple and Facebook are back in the ring, fighting over which company is a bigger existential threat to privacy. In a letter to a privacy rights group, Apple said its new anti-tracking feature will launch next year, which will give users the choice of blocking in-app tracking, a move that’s largely expected to cause havoc to the online advertising industry and data brokers.

Given an explicit choice between being tracked and not, which is exactly the choice the feature will present, most users are expected to decline.

Apple’s letter specifically called out Facebook for showing a “disregard for user privacy.” Facebook, which made more than 98% of its global revenue last year from advertising, took its own potshot back at Apple, claiming the iPhone maker was “using their dominant market position to self-preference their own data collection, while making it nearly impossible for their competitors to use the same data.”

Facebook details AI advances in catching misinformation and hate speech

By Devin Coldewey

Facebook’s battle against misinformation will never be over at this rate, but that doesn’t mean the company has given up. On the contrary, it is only by dint of constant improvement to its automated systems that it is able to keep itself even remotely free of hate speech and misinformation. CTO Mike Schroepfer touted the latest of those improvements today in a series of posts.

The changes are to the AI-adjacent systems the social network uses to nip the likes of spam, misleading news items and racial slurs in the bud — that is to say, before anyone, including Facebook’s own content moderators, sees those items.

One improvement is in the language analysis systems Facebook employs to detect things like hate speech. This is one area, Schroepfer explained, where the company has to be extremely careful. False positives in the ad space (like that something seems scammy) are low-risk, but false positives taking down posts because they’re mistaken for hate speech can be serious issues. So it’s important to be very confident when making that determination.
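To make that trade-off concrete, here is a minimal sketch of a moderation pipeline that acts automatically only on very high-confidence scores and routes borderline cases to human review. The thresholds and names are our illustrative assumptions, not details of Facebook’s system.

```python
# Illustrative sketch only: thresholds and routing are assumptions,
# not a description of Facebook's actual moderation system.

def route_post(score: float, high: float = 0.97, low: float = 0.60) -> str:
    """Route a post based on a classifier's hate-speech probability.

    A very high bar for automatic removal keeps false positives rare;
    mid-range scores go to human reviewers instead of being auto-removed.
    """
    if score >= high:
        return "auto_remove"   # act only when the model is very confident
    if score >= low:
        return "human_review"  # uncertain cases get a person's judgment
    return "leave_up"

# A borderline post is escalated rather than taken down automatically.
print(route_post(0.72))  # -> human_review
```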

Unfortunately hate speech and adjacent content can be really subtle. Even something that seems indisputably racist can be inverted or subverted by a single word. Creating machine learning systems that reflect the complexity and variety of language is a task that requires exponentially increasing amounts of computing resources.

Linformer (“linear”+”transformer”) is the new tool Facebook created to manage the ballooning resource cost of scanning billions of posts a day. It approximates the central attention mechanism of transformer-based language models rather than calculating it exactly, but with few trade-offs in performance. (If you understood all that, I congratulate you.)

That translates to better language understanding but only marginally higher computation costs, meaning they don’t have to, say, use a worse model for a first wave and then only run the expensive model on suspicious items.
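For the curious, the core trick can be sketched in a few lines of NumPy: project the keys and values down from sequence length n to a fixed length k, so the attention matrix is n×k rather than n×n. This is our toy rendering of the published Linformer idea, not Facebook’s production code; the shapes and random initialization are illustrative.

```python
# Toy sketch of Linformer-style linear attention: compress the sequence
# axis of K and V to length k, so attention costs O(n*k) instead of O(n^2).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n, d, k = 4096, 64, 256  # sequence length, head dimension, projected length
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))  # learned in practice

K_proj, V_proj = E @ K, F @ V              # (k, d): compressed keys/values
attn = softmax(Q @ K_proj.T / np.sqrt(d))  # (n, k) attention map, not (n, n)
out = attn @ V_proj                        # (n, d): same output shape as full attention
```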

The company’s researchers are also working on the slightly less well-shaped problem of understanding the interaction of text, images and text in images. Fake screenshots of TV and websites, memes and other things often found in posts are amazingly difficult for computers to understand but are a huge source of information. What’s more, a single changed word can completely invert their meaning while almost all the visual details remain the same.

An example of two instances of the same misinformation with slightly different visual appearance. Aware of the left one, the system caught the right one. Image Credits: Facebook

Facebook is getting better at catching these in their infinite variety, Schroepfer said. It’s still very difficult, he said, but they’ve made huge strides in catching, for instance, COVID-19 misinformation images like fake news reports that masks cause cancer, even when the people posting them manipulate and change their look.
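A common building block for this kind of matching is perceptual hashing, which changes little when an image is lightly cropped, recompressed or recolored. The sketch below is our illustration using the third-party imagehash library, with hypothetical file names; Facebook’s actual systems are far more sophisticated.

```python
# Illustrative near-duplicate detection via perceptual hashing.
# Requires: pip install pillow imagehash. File names are hypothetical.
from PIL import Image
import imagehash

known_bad = imagehash.average_hash(Image.open("known_misinfo_meme.png"))

def looks_like_known_misinfo(path: str, max_distance: int = 8) -> bool:
    """Return True if an image's perceptual hash is close to a known one.

    Small crops, recompression or color shifts barely move the hash,
    so lightly manipulated copies of the same image still match.
    """
    candidate = imagehash.average_hash(Image.open(path))
    return (candidate - known_bad) <= max_distance  # Hamming distance

print(looks_like_known_misinfo("suspect_upload.jpg"))
```

A single changed word in a caption may barely move a whole-image hash, which is why the text-in-images understanding described above matters so much.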

Deploying and maintaining these models is also complex, necessitating a constant dance of offline prototyping, deployment, online testing and bringing that feedback to a new prototype. The Reinforcement Integrity Optimizer takes a new approach, monitoring the effectiveness of new models on live content, relaying that information to the training system constantly rather than in, say, weekly reports.

Determining whether Facebook can be said to be successful is not easy. On one hand, the statistics they publish paint a rosy picture of increasing proportions of hate speech and misinformation taken down, with millions more pieces of hate speech, violent images and child exploitation content removed versus last quarter.

I asked Schroepfer how Facebook can track or express its success or failure more accurately, since increases in the numbers might be due either to improved mechanisms for removal or simply to larger volumes of that content being taken down at the same rate.

“The baseline changes all the time, so you have to look at all these metrics together. Our north star in the long run is prevalence,” he explained, referring to the actual frequency of users encountering a given type of content rather than whether it was preemptively removed or some such. “If I take down a thousand pieces of content that people were never going to see anyway, it doesn’t matter. If I take down the one piece of content that was about to go viral, that’s a massive success.”

Facebook now includes hate speech prevalence in its quarterly “community standards enforcement report,” and it defines it as follows:

Prevalence estimates the percentage of times people see violating content on our platform. We calculate hate speech prevalence by selecting a sample of content seen on Facebook and then labeling how much of it violates our hate speech policies. Because hate speech depends on language and cultural context, we send these representative samples to reviewers across different languages and regions.

And for its first measure of this new statistic:

From July 2020 to September 2020, hate speech prevalence was 0.10% to 0.11%. In other words, out of every 10,000 views of content on Facebook, 10 to 11 included hate speech.
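Mechanically, the estimate is just the violating fraction of a labeled sample of views. The toy sketch below, with made-up numbers chosen to land near the reported figure, shows the calculation.

```python
# Toy prevalence estimate, per the definition quoted above: sample views,
# label each one, report the violating fraction. Numbers are made up.
import random

random.seed(1)
TRUE_RATE = 0.001  # assume roughly 1 in 1,000 views violates the policy
labeled_views = [random.random() < TRUE_RATE for _ in range(1_000_000)]

prevalence = sum(labeled_views) / len(labeled_views)
print(f"{prevalence:.2%} of sampled views contained violating content")
# -> roughly 0.10%, i.e. about 10 per 10,000 views
```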

If this number is not misleading, it implies that roughly one in every thousand content views on Facebook right now includes hate speech. That seems rather high. (I’ve asked Facebook for a bit more clarity on this number.)

One must question the completeness of these estimates as well — reports from war-torn areas like Ethiopia suggest that they are rife with hate speech that is inadequately detected, reported and taken down. And of course the eruption of white supremacist and nationalist militia content and groups on Facebook has been well-documented.

Schroepfer emphasized that his role is very squarely in the “implementation” side of things and that questions of policy, staffing and other important parts of the social network’s vast operations are more or less out of his jurisdiction. Frankly that’s a bit of a disappointing punt by the CTO of one of the most powerful companies in the world, who seems to take these issues seriously. But one also wonders whether, had he and his teams not been so assiduous in pursuing technical remedies like the above, Facebook might have been completely snowed under with hate and fakery rather than being simply unavoidably shot through with it.

Autodesk CEO Andrew Anagnost explains the strategy behind acquiring Spacemaker

By Steve O'Hear

Autodesk, the U.S. publicly listed software and services company that targets engineering and design industries, acquired Norway’s Spacemaker this week. The startup has developed AI-supported software for urban development, something Autodesk CEO Andrew Anagnost broadly calls generative design.

The price of the acquisition is $240 million, in a mostly cash deal. Spacemaker’s VC backers included European firms Atomico and Northzone, which co-led the company’s $25 million Series A round in 2019. Other investors on the cap table include Nordic real estate innovator NREP, Nordic property developer OBOS, U.K. real estate technology fund Round Hill Ventures and Norway’s Construct Venture.

In an interview with TechCrunch, Anagnost shared more on Autodesk’s strategy since it transformed into a cloud-first company and what attracted him to the 115-person Spacemaker team. We also delved more into Spacemaker’s mission to augment the work of humans and not only speed up the urban development design and planning process but also improve outcomes, including around sustainability and quality of life for the people who will ultimately live in the resulting spaces.

I also asked whether Spacemaker sold out too early, and why U.S.-headquartered Autodesk acquired a startup based in Norway over numerous competitors closer to home. What follows is a transcript of our Zoom call, lightly edited for length and clarity.

TechCrunch: Let’s start high-level. What is the strategy behind Autodesk acquiring Spacemaker?

Andrew Anagnost: I think Autodesk, for a while … has had a very clearly stated strategy about using the power of the cloud; cheap compute in the cloud and machine learning/artificial intelligence to kind of evolve and change the way people design things. This is something strategically we’ve been working toward for quite a while both with the products we make internally, with the capabilities we roll out that are more cutting edge and with also our initiative when we look at companies we’re interested in acquiring.

As you probably know, Spacemaker really stands out in terms of our space, the architecture space, and the engineering and owner space, in terms of applying cloud computing, artificial intelligence, data science, to really helping people explore multiple options and come up with better decisions. So it’s completely in line with the strategy that we had. We’ve been looking at them for over a year in terms of whether or not they were the right kind of company for us.

Culturally, they’re the right company. Vision and strategy-wise, they’re the right company. Also, talent-wise, they’re the right company. They really do stand out. They’ve built a real, practical, usable application that helps a segment of our population use machine learning to really create better outcomes in a critical area, which is urban redevelopment and development.

So it’s totally aligned with what we’re trying to do. It’s not only a platform for the product they do today — they have a great product that’s getting increasing adoption — but we also see the team playing an important role in the future of where we’re taking our applications. We actually see what Spacemaker has done reaching closer and closer to what Revit does [an existing Autodesk product]. Having those two applications collaborate more closely together to evolve the way people assess not only these urban planning designs that they’re focused on right now, but also in the future, other types of building projects and building analysis and building option exploration.

How did you discover Spacemaker? I mean, I’m guessing you probably looked at other companies in the space.

We’ve been watching this space for a while; the application that Spacemaker has built we would characterize it, from our terminology, as generative design for urban planning, meaning the machine generating options and option explorations for urban planning type applications, and it overlaps both architecture and owners.

Construction tech startups are poised to shake up a $1.3 trillion industry

By Walter Thompson
Allison Xu Contributor
Allison Xu is an investor at Bain Capital Ventures, where she focuses on investments in the fintech and property tech sectors.

In the wake of COVID-19 this spring, construction sites across the nation emptied out alongside neighboring restaurants, retail stores, offices and other commercial establishments. Debates ensued over whether the construction industry’s seven million employees should be considered “essential,” while regulations continued to shift on the operation of job sites. Meanwhile, project demand steadily shrank.

Amidst the chaos, construction firms faced an existential question: How will they survive? This question is as relevant today as it was in April. As one of the least-digitized sectors of our economy, construction is ripe for technology disruption.

Construction is a massive, $1.3 trillion industry in the United States — a complex ecosystem of lenders, owners, developers, architects, general contractors, subcontractors and more. While each construction project has a combination of these key roles, the construction process itself is highly variable depending on the asset type. Roughly 41% of domestic construction value is in residential property, 25% in commercial property and 34% in industrial projects. Because each asset type, and even subassets within these classes, tends to involve a different set of stakeholders and processes, most construction firms specialize in one or a few asset groups.

Regardless of asset type, there are four key challenges across construction projects:

High fragmentation: Beyond the developer, architect, engineer and general contractor, projects could involve hundreds of subcontractors with specialized expertise. As the scope of the project increases, coordination among parties becomes increasingly difficult and decision-making slows.

Poor communication: With so many different parties both in the field and in the office, it is often difficult to relay information from one party to the next. Miscommunication and poor project data account for 48% of all rework on U.S. construction job sites, costing the industry over $31 billion annually, according to FMI research.

Lack of data transparency: Manual data collection and data entry are still common on construction sites. On top of being laborious and error-prone, these processes yield little real-time data, so decision-making is often based on outdated information.

Skilled labor shortage: The construction workforce is aging faster than the younger population that joins it, resulting in a shortage of labor particularly for skilled trades that may require years of training and certifications. The shortage drives up labor costs across the industry, particularly in the residential sector, which traditionally sees higher attrition due to its more variable project demand.

A construction tech boom

Too many of the key processes involved in managing multimillion-dollar construction projects are carried out on Excel or even with pen and paper. The lack of tech sophistication on construction sites materially contributes to job delays, missed budgets and increased job site safety risk. Technology startups are emerging to help solve these problems.

Here are the main categories in which we’re seeing construction tech startups emerge.

1. Project conception

  • How it works today: During a project’s conception, asset owners and/or developers develop site proposals and may work with lenders to manage the project financing.
  • Key challenges: Processes for managing construction loans are cumbersome and time intensive today given the complexity of the loan draw process.
  • How technology can address challenges: Design software such as Spacemaker AI can help developers create site proposals, while construction loan financing software such as Built Technologies and Rabbet is helping lenders and developers manage the draw process more efficiently.

2. Design and engineering

  • How it works today: Developers work with design, architect and engineering teams to turn ideas into blueprints.
  • Key challenges: Because the design and engineering teams are often siloed from the contractors, it’s hard for designers and engineers to know the real-time impact of their decisions on the ultimate cost or timing of the project. Lack of coordination with construction teams can lead to time-consuming changes.
  • How technology can address challenges: Of all the elements of the construction process, the design and engineering process itself is the most technologically sophisticated today, with relatively high adoption of software like Autodesk to help with design documentation, specification development, quality assurance and more. Autodesk is moving downstream to offer a suite of solutions that includes construction management, providing more connectivity between the teams.

Databricks launches SQL Analytics

By Frederic Lardinois

AI and data analytics company Databricks today announced the launch of SQL Analytics, a new service that makes it easier for data analysts to run their standard SQL queries directly on data lakes. And with that, enterprises can now easily connect their business intelligence tools like Tableau and Microsoft’s Power BI to these data repositories as well.

SQL Analytics will be available in public preview on November 18.

In many ways, SQL Analytics is the product Databricks has long been looking to build, one that brings its concept of a ‘lake house’ to life. It combines the performance of a data warehouse, where you store data after it has already been transformed and cleaned, with a data lake, where you store all of your data in its raw form. The data in the data lake, a concept that Databricks’ co-founder and CEO Ali Ghodsi has long championed, is typically only transformed when it gets used. That makes data lakes cheaper, but also a bit harder to handle for users.


“We’ve been saying Unified Data Analytics, which means unify the data with the analytics. So data processing and analytics, those two should be merged. But no one picked that up,” Ghodsi told me. But ‘lake house’ caught on as a term.

“Databricks has always offered data science, machine learning. We’ve talked about that for years. And with Spark, we provide the data processing capability. You can do [extract, transform, load]. That has always been possible. SQL Analytics enables you to now do the data warehousing workloads directly, and concretely, the business intelligence and reporting workloads, directly on the data lake.”

The general idea here is that with just one copy of the data, you can enable both traditional data analyst use cases (think BI) and the data science workloads (think AI) Databricks was already known for. Ideally, that makes both use cases cheaper and simpler.

The service sits on top of an optimized version of Databricks’ open-source Delta Lake storage layer, which allows it to complete queries quickly. In addition, Delta Lake provides auto-scaling endpoints to keep query latency consistent, even under high loads.

While data analysts can query these data sets directly, using standard SQL, the company also built a set of connectors to BI tools. Its BI partners include Tableau, Qlik, Looker and Thoughtspot, as well as ingest partners like Fivetran, Fishtown Analytics, Talend and Matillion.
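To make the “one copy of the data” idea concrete, here is a minimal PySpark sketch of the kind of workload SQL Analytics targets: a standard SQL aggregation run directly against a Delta Lake table. The table path, column names and configuration are our assumptions for illustration, not Databricks specifics.

```python
# Minimal PySpark sketch: standard SQL over a single Delta Lake copy of
# the data. Path and column names are made up; requires delta-spark.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# A BI-style aggregation straight on the lake: no separate warehouse copy.
spark.sql("""
    SELECT region, SUM(amount) AS revenue
    FROM delta.`/mnt/lake/sales`
    GROUP BY region
    ORDER BY revenue DESC
""").show()
```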


“Now more than ever, organizations need a data strategy that enables speed and agility to be adaptable,” said Francois Ajenstat, Chief Product Officer at Tableau. “As organizations are rapidly moving their data to the cloud, we’re seeing growing interest in doing analytics on the data lake. The introduction of SQL Analytics delivers an entirely new experience for customers to tap into insights from massive volumes of data with the performance, reliability and scale they need.”

In a demo, Ghodsi showed me what the new SQL Analytics workspace looks like. It’s essentially a stripped-down version of the standard code-heavy experience that Databricks users are familiar with. Unsurprisingly, SQL Analytics provides a more graphical experience, one that focuses on visualizations rather than Python code.

While there are already some data analysts on the Databricks platform, this obviously opens up a large new market for the company — something that would surely bolster its plans for an IPO next year.

Europe urges e-commerce platforms to share data in fight against coronavirus scams

By Natasha Lomas

European lawmakers are pressing major e-commerce and media platforms to share more data with each other as a tool to fight rogue traders who are targeting consumers with coronavirus scams.

After the pandemic spread to the West, internet platforms were flooded with local ads for PPE of unknown or dubious quality and other questionable coronavirus offers — even after some of the firms banned such advertising.

The concern here is not only consumers being ripped off but the real risk of harm if people buy a product that does not offer the protection claimed against exposure to the virus or even get sold a bogus coronavirus “cure” when none in fact exists.

In a statement today, Didier Reynders, the EU commissioner for justice, said: “We know from our earlier experience that fraudsters see this pandemic as an opportunity to trick European consumers. We also know that working with the major online platforms is vital to protect consumers from their illegal practices. Today I encouraged the platforms to join forces and engage in a peer-to-peer exchange to further strengthen their response. We need to be even more agile during the second wave currently hitting Europe.”

The Commission said Reynders met with 11 online platforms today — including Amazon, Alibaba/AliExpress, eBay, Facebook, Google, Microsoft/Bing, Rakuten and (TechCrunch’s parent entity) Verizon Media/Yahoo — to discuss new trends and business practices linked to the pandemic and push the tech companies to do more to head off a new wave of COVID-19 scams.

In March this year EU Member States’ consumer protection authorities adopted a common position on the issue. The Commission and a pan-EU network of consumer protection enforcers have been in regular contact with the 11 platforms since then to push for a coordinated response to the threat posed by coronavirus scams.

The Commission claims the action has resulted in the platforms reporting the removal of “hundreds of millions” of illegal offers and ads. It also says they have confirmed what it describes as “a steady decline” in new coronavirus-related listings, without offering more detailed data.

In Europe, tighter regulations over what e-commerce platforms sell are coming down the pipe.

Next month regional lawmakers are set to unveil a package of legislation that will propose updates to existing e-commerce rules and aim to increase platforms’ legal responsibilities, including around illegal content and dangerous products.

In a speech last week, Commission EVP Margrethe Vestager, who heads up the bloc’s digital policy, said the Digital Services Act (DSA) will require platforms to take more responsibility for dealing with illegal content and dangerous products, including by standardizing processes for reporting illegal content and dealing with reports and complaints related to content.

A second legislative package that’s also due next month — the Digital Markets Act — will introduce additional rules for a sub-set of platforms considered to hold a dominant market position. This could include requirements that they make data available to rivals, with the aim of fostering competition in digital markets.

MEPs have also pushed for a “know your business customer” principle to be included in the DSA.

Simultaneously, the Commission has been pressing for social media platforms to open up about what it described in June as a coronavirus “infodemic” — in a bid to crack down on COVID-19-related disinformation.

Today the Commission gave an update on actions taken in the month of September by Facebook, Google, Microsoft, Twitter and TikTok to combat coronavirus disinformation — publishing its third set of monitoring reports. Thierry Breton, commissioner for the internal market, said more needs to be done there too.

“Viral spreading of disinformation related to the pandemic puts our citizens’ health and safety at risk. We need even stronger collaboration with online platforms in the coming weeks to fight disinformation effectively,” he said in a statement. 

The platforms are signatories of the EU’s (non-legally binding) Code of Practice on disinformation.

Legally binding transparency rules for platforms on tackling content such as illegal hate speech look set to be part of the DSA package. Though it remains to be seen how the fuzzier issue of “harmful content” (such as disinformation attached to a public health crisis) will be tackled.

A European Democracy Action Plan to address the disinformation issue is also slated before the end of the year.

In a pointed remark accompanying the Commission’s latest monitoring reports today, Vera Jourová, VP for values and transparency, said: “Platforms must step up their efforts to become more transparent and accountable. We need a better framework to help them do the right thing.”

Contrast launches its security observability platform

By Frederic Lardinois

Contrast, a developer-centric application security company with customers that include Liberty Mutual Insurance, NTT Data, AXA and Bandwidth, today announced the launch of its security observability platform. The idea here is to offer developers a single pane of glass to manage an application’s security across its lifecycle, combined with real-time analysis and reporting, as well as remediation tools.

“Every line of code that’s happening increases the risk to a business if it’s not secure,” said Contrast CEO and chairman Alan Nauman. “We’re focused on securing all that code that businesses are writing for both automation and digital transformation.”

Over the course of the last few years, the well-funded company, which raised a $65 million Series D round last year, launched numerous security tools that cover a wide range of use cases from automated penetration testing to cloud application security and now DevOps — and this new platform is meant to tie them all together.

DevOps, the company argues, is really what necessitates a platform like this, given that developers now push more code into production than ever — and the onus of ensuring that this code is secure now often falls on them as well.


Traditionally, Nauman argues, security services focused on scanning the code itself and watching traffic.

“We think at the application layer, the same principles of observability apply that have been used in the IT infrastructure space,” he said. “Specifically, we do instrumentation of the code and we weave security sensors into the code as it’s being developed and are looking for vulnerabilities and observing running code. […] Our view is: the world’s most complex systems are best when instrumented, whether it’s an airplane, a spacecraft, an IT infrastructure. We think the same is true for code. So our breakthrough is applying instrumentation to code and observing for security vulnerabilities.”

With this new platform, Contrast is aggregating information from its existing systems into a single dashboard. And while Contrast observes the code throughout its lifecycle, it also scans for vulnerabilities whenever a developer checks code into the CI/CD pipeline, thanks to integrations with most of the standard tools like Jenkins. It’s worth noting that the service also scans for vulnerabilities in open-source libraries. Once deployed, Contrast’s new platform keeps an eye on the data that runs through the various APIs and systems the application connects to and scans for potential security issues there as well.
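As a rough illustration of instrumentation-style security observation (wrapping running code with sensors that watch its inputs), here is a small Python sketch. Contrast’s real agent instruments bytecode at a much lower level; the pattern, function names and detection rule below are our assumptions.

```python
# Rough sketch of instrumentation-style security observation: a decorator
# acts as a "security sensor" woven around a data-access function.
# Contrast's actual agent works at the bytecode level; this is only the idea.
import functools
import re

SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

def observe_sql(func):
    """Wrap a query function and flag inputs that look like SQL injection."""
    @functools.wraps(func)
    def wrapper(query, *args, **kwargs):
        if SQLI_PATTERN.search(query):
            print(f"[security-sensor] suspicious query observed: {query!r}")
        return func(query, *args, **kwargs)
    return wrapper

@observe_sql
def run_query(query):
    ...  # imagine a real database call here

run_query("SELECT * FROM users WHERE name = '' OR 1=1 --")
```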

The platform currently supports all of the large cloud providers like AWS, Azure and Google Cloud, and languages and frameworks like Java, Python, .NET and Ruby.


Adobe brings its misinformation-fighting content attribution tool to the Photoshop beta

By Taylor Hatmaker

Adobe’s work on a chain of custody that could link online images back to their origins is inching closer to becoming a reality. The prototype, part of the Content Authenticity Initiative (CAI), will soon appear in the beta of Photoshop, Adobe’s ubiquitous image editing software.

Adobe says the preview of the new tool will be available to users in the beta release of Photoshop and Behance over the next few weeks. The company calls the CAI implementation “an early version” of the open standard that it will continue to hone.

The project has a few different applications. It aims to create a more robust means of keeping creators’ names attached to the content they create. But the most compelling use case for CAI would see the tool become a “tamper-proof” industry standard aimed at images used to spread misinformation.

Adobe describes the project’s mission as an effort to “increase trust and transparency online with an industry-wide attribution framework that empowers creatives and consumers alike.” The result is a technical solution that could (eventually) limit the spread of deepfakes and other kinds of misleading online content.

“… Eventually you might imagine a social feed or a news site that would allow you to filter out things that are likely to be inauthentic,” Adobe’s director of CAI, Andy Parsons, said earlier this year. “But the CAI steers well clear of making judgment calls — we’re just about providing that layer of transparency and verifiable data.”

The idea sounds like a spin on EXIF data, the embedded opt-in metadata that attaches information like lens type and location to an image. But Adobe says the new attribution standard will be less “brittle” and much more difficult to manipulate. The end result would have more in common with digital fingerprinting systems like the ones that identify child exploitation online than it would with EXIF.
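For contrast, reading conventional EXIF metadata takes only a few lines of Python with Pillow, and editing or stripping it is just as easy, which is the brittleness the new standard is meant to avoid. This is a generic illustration; the file name is hypothetical.

```python
# Reading EXIF metadata with Pillow (pip install pillow). EXIF is plain,
# optional metadata that is trivial to strip or rewrite, the "brittle"
# quality that CAI-style attribution is designed to resist.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("photo.jpg") as img:  # hypothetical file
    for tag_id, value in img.getexif().items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```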

“We believe attribution will create a virtuous cycle,” Allen said. “The more creators distribute content with proper attribution, the more consumers will expect and use that information to make judgement calls, thus minimizing the influence of bad actors and deceptive content.”

Splunk acquires Plumbr and Rigor to build out its observability platform

By Frederic Lardinois

Data platform Splunk today announced that it has acquired two startups, Plumbr and Rigor, to build out its new Observability Suite, which is also launching today. Plumbr is an application performance monitoring service, while Rigor focuses on digital experience monitoring, using synthetic monitoring and optimization tools to help businesses optimize their end-user experiences. Both of these acquisitions complement the technology and expertise Splunk acquired when it bought SignalFx for over $1 billion last year.

Splunk did not disclose the price of these acquisitions, but Estonia-based Plumbr had raised about $1.8 million, while Atlanta-based Rigor raised a debt round earlier this year.

When Splunk acquired SignalFx, it said it did so in order to become a leader in observability and APM. As Splunk CTO Tim Tully told me, the idea here now is to accelerate this process.

Image Credits: Splunk

“Because a lot of our users and our customers are moving to the cloud really, really quickly, the way that they monitor [their] applications changed because they’ve gone to serverless and microservices a ton,” he said. “So we entered that space with those acquisitions, we quickly folded them together with these next two acquisitions. What Plumbr and Rigor do is really fill out more of the portfolio.”

He noted that Splunk was especially interested in Plumbr’s bytecode implementation and its real-user monitoring capabilities, and Rigor’s synthetics capabilities around digital experience monitoring (DEM). “By filling in those two pieces of the portfolio, it gives us a really amazing set of solutions because DEM was the missing piece for our APM strategy,” Tully explained.

Image Credits: Splunk

With the launch of its Observability Suite, Splunk is now pulling together a lot of these capabilities into a single product — which also features a new design that makes it stand apart from the rest of Splunk’s tools. It combines logs, metrics, traces, digital experience, user monitoring, synthetics and more.

“At Yelp, our engineers are responsible for hundreds of different microservices, all aimed at helping people find and connect with great local businesses,” said Chris Gordon, Technical Lead at Yelp, where his team has been testing the new suite. “Our Production Observability team collaborates with Engineering to improve visibility into the performance of key services and infrastructure. Splunk gives us the tools to empower engineers to monitor their own services as they rapidly ship code, while also providing the observability team centralized control and visibility over usage to ensure we’re using our monitoring resources as efficiently as possible.”

The Election Will Bring a Hurricane of Misinformation

By Whitney Phillips
Here’s how to prepare yourself for the disaster online.

Atlanta-based Speedscale now has $2.2 million more to grow its API test automation business

By Jonathan Shieber

It only took a few weeks after its Y Combinator demo day debut for the Atlanta-based API test automation company Speedscale to raise its first $2.2 million.

Founded by longtime developers and Georgia Institute of Technology alumni, Ken Ahrens, Matthew LeRay and Nate Lee had known each other for roughly twenty years before making the jump to working together.

A circuitous path of interconnecting programming jobs in the DevOps and monitoring space led the three men to realize that there was an opportunity to address one of the main struggles programmers now face — making sure that updates to API integrations in a containerized programming world don’t wind up breaking apps or services.

“We were helping to solve incident outages and incidents that would cause downtime,” said Lee. “It’s hard to ensure the quality between all of these connection points [between applications]. And these connection points are growing as people add APIs and containers. We said, ‘How about we solve this space? How could we preempt all of this and ensure maintaining release velocity with scalable automation?’”

Typically companies release new updates to code in a phased approach or in a test environment to ensure that they’re not going to break anything. Speedscale proposes test automation using real traffic so that developers can accelerate the release time.

“They want to change very frequently,” said Ahrens, speaking about the development life cycle. “Most of the changes are great, but every once in a while they make a change and break part of the system. The state of the art is to wait for it to be broken and get someone to fix it quickly.”

The pitch Speedscale makes to developers is that its service can give coders the ability to see problems before a release. It automates the creation of the staging environment, the automation suite and the orchestration needed to create that environment.
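The record-and-replay idea behind testing with real traffic can be sketched simply: capture request/response pairs from production, replay the requests against a candidate build, and diff the responses. The sketch below is our simplified illustration, not Speedscale’s product; the endpoints, recording format and staging host are made up.

```python
# Simplified record-and-replay API testing: replay captured production
# traffic against a new build and diff the responses. Illustrative only.
import requests

# Traffic recorded in production (normally captured automatically).
recorded = [
    {"method": "GET", "path": "/api/v1/orders/42", "expect_status": 200,
     "expect_body": {"id": 42, "status": "shipped"}},
]

def replay(base_url: str) -> list:
    """Replay recorded calls against a candidate build; return mismatches."""
    failures = []
    for call in recorded:
        resp = requests.request(call["method"], base_url + call["path"])
        if (resp.status_code != call["expect_status"]
                or resp.json() != call["expect_body"]):
            failures.append((call["path"], resp.status_code))
    return failures

print(replay("http://staging.internal:8080"))  # hypothetical staging host
```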

“One of the big things for me, when I saw the rise of Kubernetes, was that what’s really happening is that engineering leaders have been able to give more autonomy to developers, but no one has come up with a great way to validate — and I really think that Speedscale can solve that problem.”

The Atlanta-based company, which only just graduated from Y Combinator a few months ago, is currently in a closed alpha with select pilot partners, according to LeRay. And the nine-month-old company has raised $2.2 million from investors including Sierra Ventures from the Bay Area and Atlanta’s own Tech Square Ventures to grow the business.

“APIs are a huge market,” Ahrens said of the potential opportunity for the company. “There’s 11 million developers who develop against APIs… We think the addressable market for us is in the billions.”

Twitter changes its hacked materials policy in wake of New York Post controversy

By Natasha Lomas

Twitter has announced an update to its hacked materials policy — saying it will no longer remove hacked content unless it’s directly shared by hackers or those “acting in concert with them”.

Instead of blocking such content/links from being shared on its service, it says it will label tweets to “provide context”.

Wider Twitter rules against posting private information, synthetic and manipulated media, and non-consensual nudity all still apply — so it could still, for example, remove links to hacked material if the content being linked to violates other policies. But just tweeting a link to hacked materials isn’t an automatic takedown anymore.

Over the last 24 hours, we’ve received significant feedback (from critical to supportive) about how we enforced our Hacked Materials Policy yesterday. After reflecting on this feedback, we have decided to make changes to the policy and how we enforce it.

— Vijaya Gadde (@vijaya) October 16, 2020

The move comes hard on the heels of the company’s decision to restrict sharing of a New York Post article this week — which reported on claims that laptop hardware left at a repair shop contained emails and other data belonging to Hunter Biden, the son of U.S. presidential candidate Joe Biden.

The decision by Twitter to restrict sharing of the Post article attracted vicious criticism from high-profile Republican voices — with the likes of Senator Josh Hawley tweeting that the company is “now censoring journalists”.

Twitter’s hacked materials policy does explicitly allow “reporting on a hack, or sharing press coverage of hacking”, but the company subsequently clarified that it had acted because the Post article contained “personal and private information — like email addresses and phone numbers — which violate our rules”. (Plus, the Post wasn’t reporting on a hack, but rather on the claimed discovery of a cache of emails, and on the emails themselves.)

At the same time, the Post article itself is highly controversial. The scenario of how the data came to be in the hands of a random laptop repair shop, which then chose to hand it over to a key Trump ally, stretches credibility — bearing the hallmarks of an election-targeting disinformation operation, as we explained on Wednesday.

Given questions over the quality of the Post’s fact-checking and journalistic standards in this case, Twitter’s decision to restrict sharing of the article actually appears to have helped reduce the spread of disinformation — even as it attracted flak to the company for censoring ‘journalism’.

(It has also since emerged that the hard drive in question was manufactured shortly before the laptop was claimed to have been dropped off at the shop. So the most likely scenario is that Hunter Biden’s iCloud was hacked and doctored emails were planted on the drive, where the data could be ‘discovered’ and leaked to the press in a ham-fisted attempt to influence the U.S. presidential election. But Twitter is clearly uncomfortable that enforcing its policy led to accusations of censoring journalists.)

In a tweet thread explaining the change to its policy, Twitter’s legal, policy and trust & safety lead, Vijaya Gadde, writes: “We want to address the concerns that there could be many unintended consequences to journalists, whistleblowers and others in ways that are contrary to Twitter’s purpose of serving the public conversation.”

She also notes that when the hacked materials policy was first introduced, in 2018, Twitter had fewer tools for policy enforcement than it does now, saying: “We’ve recently added new product capabilities, such as labels to provide people with additional context. We are no longer limited to Tweet removal as an enforcement action.”

Twitter began adding contextual labels to policy-breaching tweets by US president Donald Trump earlier this year, rather than remove his tweets altogether. It has continued to expand usage of these contextual signals — such as by adding fact-checking labels to certain conspiracy theory tweets — giving itself a ‘more speech to counteract bad speech’ enforcement tool vs the blunt instrument of tweet takedowns/account bans (which it has also applied recently to the toxic conspiracy theory group, QAnon).

“We believe that labeling Tweets and empowering people to assess content for themselves better serves the public interest and public conversation. The Hacked Material Policy is being updated to reflect these new enforcement capabilities,” Gadde also says, adding: “Content moderation is incredibly difficult, especially in the critical context of an election. We are trying to act responsibly & quickly to prevent harms, but we’re still learning along the way.”

The updated policy is clearly not a free-for-all, given all other Twitter Rules against hacked material apply (such as doxxing). Though there’s a question of whether tweets linking to the Post article would still be taken down under the updated policy if the story did indeed contain personal info (which remains against Twitter’s policy).

But the new ‘third way’ policy for hacked materials does potentially leave Twitter’s platform as a conduit for the spread of political disinformation — in instances where it’s been credulously laundered by the press. (Albeit, Twitter can justifiably point the finger of blame at poor journalistic standards at that point.)

The new policy also raises the question of how Twitter will determine whether or not a person is working ‘in concert’ with hackers. Just spitballing here, but if, say, on the eve of the poll Trump were to share some highly dubious information that smeared his key political rival, and which he said he’d been handed by Russian president Vladimir Putin, would Twitter step in and remove it?

We can only hope we don’t have to find out.

Facebook and Instagram will pin vote-by-mail explainers to top of feeds

By Taylor Hatmaker

Starting this weekend, everyone of voting age in the U.S. will begin seeing informational videos at the top of Instagram and Facebook offering tips and state-specific guidance on how to vote through the mail. The videos will be offered in both English and Spanish.

The vote-by-mail videos will run on Facebook for four straight days in each state, starting between October 10 and October 18 depending on local registration deadlines. On Instagram, the videos will run in all 50 states on October 15 and October 16, followed by other notifications with vote-by-mail information over the next two days.

Facebook vote-by-mail video. Image via Facebook

The videos let voters know when they can return a ballot in person, instruct them to sign carefully on additional envelopes that might be required and encourage returning ballots as soon as possible while being mindful of postmarking deadlines. Facebook will continue providing additional state-specific voting information in a voting information center dedicated to the 2020 election.

Even more than in past years, app makers have taken up the mantle of nudging their users to vote in the U.S. general election. From Snapchat to Credit Karma, it’s hard to open an app without being reminded to register — and that’s a good thing. Snapchat says it registered around 400,000 new voters through its own reminders and Facebook estimates that it helped 2.5 million people register to vote this year.

Voting rights advocates are concerned that 2020’s rapid scale-up of vote-by-mail might lead to many ballots being thrown out — a worry foreshadowed by the half a million ballots that were tossed out in state primaries. Some of those ballots failed to meet deadlines or were deemed invalid due to other mistakes voters made when filling them out.

In Florida, voters who were young, non-white or voting for the first time were twice as likely as white voters to have their ballots thrown out in the 2018 election, according to research by the ACLU.

Adding to concerns, state rules vary and they can be specific and confusing for voters new to voting through the mail. In Pennsylvania, the most likely state to decide the results of the 2020 election, new rules against “naked ballots” mean that any ballot not cast in an additional secrecy sleeve will be tossed out. In other states, secrecy sleeves have long been optional.

Facebook gets ready for November

Since 2016, Facebook has faced widespread criticism for rewarding hyper-partisan content, amplifying misinformation and incubating violent extremism. This week, the FBI revealed a plot to kidnap Michigan Governor Gretchen Whitmer that was hatched by militia groups who used the platform to organize.

Whether the public reveal of that months-long domestic terrorism investigation factored into its decisions or not, Facebook has taken a notably more aggressive posture across a handful of recent policy decisions. This week, the company expanded its ban on QAnon, the elaborate web of outlandish pro-Trump conspiracies that have increasingly spilled over into real-world violence, after that content had been allowed to thrive on the platform for years.

Facebook also just broadened its rules prohibiting voter intimidation to ban calls for poll watching that use militaristic language, like the Trump campaign’s own effort to recruit an “Army for Trump” to hold its political enemies to account on election day. The company also announced that it would suspend political advertising after election night, a policy that will likely remain in place until the results of the election are clear.

While President Trump has gone to great lengths to cast doubt on the integrity of vote-by-mail, mailed ballots are historically a very safe practice. States like Oregon and Colorado already conduct their voting through the mail in normal years, and all 50 states have absentee voting in place for people who can’t cast a ballot in person, whether they’re out of town or overseas serving in the military.

Changing how retweets work, Twitter seeks to slow down election misinformation

By Taylor Hatmaker

Twitter announced Friday a major set of changes to the way its platform would work as the social network braces for the most contentious, uncertain and potentially high-stakes election in modern U.S. history.

In what will likely be the most noticeable change, Twitter will try a new tactic to discourage users from retweeting posts without adding their own commentary. Starting on October 20 in a “global” change, the platform will prompt anyone who goes to retweet something to share a quote tweet instead. The change will stay in place through the “end of election week,” when Twitter will decide if the change needs to stick around for longer.


“Though this adds some extra friction for those who simply want to Retweet, we hope it will encourage everyone to not only consider why they are amplifying a Tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation,” Twitter said of the change, which some users may see on Twitter for the web starting on Friday.

Twitter has in recent months been experimenting with changes that add friction to the platform. Last month, the company announced that it would roll out a test feature prompting users to click through a link before retweeting it to the platform at large. The change marks a major shift in thinking for social platforms, which grew aggressively by prioritizing engagement above all other measures.

how it started how it's going pic.twitter.com/hW53CYDfio

— Twitter Comms (@TwitterComms) October 9, 2020

The company also clarified its policy on election results, and now a candidate for office “may not claim an election win before it is authoritatively called.” Twitter will look to state election officials or projected results from at least two national news sources to make that determination.

Twitter stopped short of saying it will remove those posts, but said it will add a misleading-information label to any content claiming premature victory, pointing users toward its hub for vetted election information. The company does plan to remove any tweets “meant to incite interference with the election process or with the implementation of election results,” including ones that incite violence.

Next week, Twitter will also implement new restrictions on misleading tweets it labels, showing users a pop-up prompt linking to credible information when they go to view the tweet. Twitter applies these labels to tweets that spread misinformation about COVID-19, elections and voting, and anything that contains manipulated media, like deepfakes or otherwise misleading edited videos.

The company will also take additional measures against misleading tweets that get a label when they’re from a U.S. political figure, candidate or campaign. To see a tweet with one of its labels, a user will have to tap through a warning. Labeled tweets will have likes, normal retweets and replies disabled.

These new measures will also apply to labeled tweets from anyone with more than 100,000 followers or tweets that are getting viral traction. “We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these Tweets,” Twitter said in its announcement.

Twitter warning on a labeled tweet. Image via Twitter

Twitter will also turn off recommendations in the timeline in an effort to “slow down” how fast tweets can reach people from accounts they don’t follow. The company calls the decision a “worthwhile sacrifice to encourage more thoughtful and explicit amplification.” The company will also only allow trending content that comes with additional context to show up in the “for you” recommendation tab in an effort to slow the spread of misinformation.

The company acknowledges that it plays a “critical role” in protecting the U.S. election, adding that it has staffed up dedicated teams to monitor the platform and “respond rapidly” on election night and in the potentially uncertain period until authoritative election results are clear.

Startups joining SK Telecom’s accelerator include AI-driven mapping and vision for delivery robots

By Mike Butcher

We don’t often cover telecom technology startups, but it’s periodically worth checking in to see what’s happening in that space. We can get a good indication from the latest cohort to emerge from an accelerator associated with South Korea’s largest wireless carrier, SK Telecom.

This group of startups will join the Telecom Infra Project accelerator in South Korea, which is part of a global program of telecoms specialist centers and is run in partnership with SK Telecom.

The cohort includes a ship-berthing monitoring system; an app that turns a group of mobile phones into a TV studio; an AI-powered indoor positioning system that creates interactive maps; a vision system for delivery robots; and a platform that lets remote audiences experience live events “together” in a digital stadium.

The selected startups include:

Dabeeo: Dabeeo’s AI-powered indoor positioning system uses vision data captured by smartphone cameras to create interactive maps used for gaming, marketing and logistics. Crunchbase

Neubility: Neubility develops vision-based localization and path planning technologies for last-mile delivery robots. Crunchbase

Seadronix: Seadronix provides an AI-based, computer vision-powered berthing-monitoring system for ships. Crunchbase

39 degrees C: This is a mobile multi-camera live-streaming app. It connects multiple smartphone feeds directly to one another using Wi-Fi Direct, turning them into a TV studio; a sketch of how that peer-to-peer link works follows this list. Crunchbase

Kiswe: Kiswe is a supplier of entertainment broadcast technology. Its product, CloudCast, is a “Broadcast Studio in the Cloud,” which enables partners to send a digital feed into the cloud to produce live and non-live content. Its other product, Hangtime, allows remote audiences to experience live events “together” through creating a digital stadium with chat rooms, and provides control over viewing angles from within the platform. Crunchbase
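For readers unfamiliar with Wi-Fi Direct, here is a minimal Kotlin sketch of how an Android app might discover and connect to nearby peers using Android’s public WifiP2pManager API. This is purely illustrative and not 39 degrees C’s actual code: the PeerFinder class is hypothetical, the streaming logic is omitted, and a real app would also need location and nearby-device permissions.

    // Minimal sketch: peer discovery and connection over Wi-Fi Direct on
    // Android, via the platform's android.net.wifi.p2p API. Sending camera
    // frames over the resulting link is out of scope here.
    import android.content.Context
    import android.net.wifi.p2p.WifiP2pConfig
    import android.net.wifi.p2p.WifiP2pDevice
    import android.net.wifi.p2p.WifiP2pManager

    class PeerFinder(context: Context) {
        private val manager =
            context.getSystemService(Context.WIFI_P2P_SERVICE) as WifiP2pManager
        private val channel =
            manager.initialize(context, context.mainLooper, null)

        // Start scanning for nearby Wi-Fi Direct devices. Results arrive via
        // WIFI_P2P_PEERS_CHANGED_ACTION broadcasts; call requestPeers() from
        // a BroadcastReceiver to read the current device list.
        fun discover() {
            manager.discoverPeers(channel, object : WifiP2pManager.ActionListener {
                override fun onSuccess() { /* discovery started */ }
                override fun onFailure(reason: Int) { /* e.g. Wi-Fi disabled */ }
            })
        }

        // Connect to one discovered device; once the group forms, an ordinary
        // socket between the two phones can carry a camera feed.
        fun connectTo(device: WifiP2pDevice) {
            val config = WifiP2pConfig().apply { deviceAddress = device.deviceAddress }
            manager.connect(channel, config, null)
        }
    }

Once connected, WifiP2pManager’s requestConnectionInfo() reports the group owner’s address, which a multi-camera app could plausibly use as the rendezvous point for its video streams.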

Decrypted: The major ransomware attack you probably didn’t hear about

By Zack Whittaker

Watching the news this past week was like drinking from a firehose. Speaking of which, you probably missed a busy week in cybersecurity, so here are the big stories from the past week.


THE BIG PICTURE

Blackbaud hack gets worse, as bank account data stolen

Blackbaud, a cloud technology company used by colleges, universities, nonprofits (and far-right organizations), was hit by a data-stealing ransomware attack earlier this year. The attack was one of the biggest of the year in terms of the number of organizations affected, hitting dozens of universities, hospitals and other high-profile organizations like NPR. Blackbaud said in July that it paid the ransom — but also claimed it had received “confirmation” that the stolen personal data “had been destroyed,” fooling absolutely nobody.

This week Blackbaud confirmed in a regulatory filing that the stolen data also included bank account data and Social Security numbers — far more personally identifiable information than the company first thought. “In most cases, fields intended for sensitive information were encrypted and not accessible,” the company claimed.

Despite Blackbaud’s claim that the data was deleted, remember that these are malicious hackers driven by financial reward. Hope for the best, but assume the worst — Blackbaud’s data is still out there.

Facebook shuts down malware that hijacked accounts to run ads

Hackers spent about $4 million to run scammy ads on Facebook by hijacking the accounts of unsuspecting users, Wired reports. Using malware dubbed SilentFade, the hackers compromised Facebook accounts with stolen passwords, then used whatever credit card details were saved on those accounts to buy ads for diet pills and fake designer handbags.

Ringing alarm bells, Biden campaign calls Facebook ‘foremost propagator’ of voting disinformation

By Taylor Hatmaker

In a new letter to Facebook’s chief executive on the eve of the first presidential debate, the Biden campaign slammed the company for its failure to act on false claims about voting in the U.S. election.

In the scathing letter, published by Axios, Biden Campaign Manager Jen O’Malley Dillon specifically singled out a troubling video post the Trump campaign shared to Facebook and Twitter last week.

Over the course of that video, the president’s son claims that his father’s political opponents “plan to add millions of fraudulent ballots that can cancel your vote and overturn the election” and calls on supporters to “enlist now” in an “army for Trump election security operation.” Those false claims appear to have inspired some Trump supporters, who plan to guard ballot drop-off sites and polling places — a form of voter intimidation that would likely constitute a federal crime.

When the Biden campaign (along with many others) flagged the video to Facebook, the company apparently said that the content would not be removed, pointing to its small, unobtrusive voting info labels that appear alongside all posts related to the 2020 U.S. election. The video remains up on Twitter with a similar label.

“We were assured that the label affixed to the video, buried on the top right corner of the screen where many viewers will miss it, should allay any concerns,” O’Malley Dillon wrote in the letter, addressed to Mark Zuckerberg.

“No company that considers itself a force for good in democracy, and that purports to take voter suppression seriously, would allow this dangerous claptrap to be spread to millions of people. Removing this video should have been the easiest of easy calls under your policies, yet it remains up today.”

In the letter, O’Malley Dillon also cites the president’s own repeated attempts to undermine national confidence in the 2020 election with unsubstantiated lies about the voting process, which is already under unique strain this year from the pandemic.

Rather than taking a strong approach to limit the reach of election-related disinformation from the president and his supporters, Facebook has largely remained hands-off. The platform is more comfortable touting its get-out-the-vote campaign and other politically neutral efforts to inform and mobilize voters. Facebook clearly hopes those measures will offset its current role in disseminating domestic disinformation from the president himself, but given the scope of what’s happening — and its lingering failures from 2016 — that doesn’t look likely.

“As you say, ‘voting is voice.’ Facebook has committed to not allow that voice to be drowned out by a storm of disinformation, but has failed at every opportunity to follow through on that commitment,” O’Malley Dillon wrote, adding that the Biden campaign would “be calling out those failures” over the course of the remaining 36 days until the election.

Want to hire and retain high-quality developers? Give them stimulating work

By Walter Thompson
Phil Alves, Contributor
DevSquad founder and CEO Phil Alves is an entrepreneur with more than 15 years of experience in the tech industry leading product development teams for multiple clients.

Software developers are some of the most in-demand workers on the planet. They’re also complex creatures with distinct ideas about what makes a job fulfilling. With demand for developers on the rise (the number of jobs in the field is expected to grow by 22% over the next decade), companies are under pressure to do everything they can to attract and retain talent.

First and foremost — above salary — employers must ensure that product teams are made up of developers who feel creatively stimulated and intellectually challenged. Without work they feel passionate about, high-quality programmers won’t just become bored and potentially seek opportunities elsewhere; the standard of their work will inevitably drop. In one survey, 68% of developers said learning new things is the most important element of a job.

The worst thing for a developer to discover about a new job is that they’re the most experienced person in the room and there’s little room for their own growth.

Yet with only 32% of developers feeling “very satisfied” with their jobs, there’s scope to position your company as one that prioritizes its developers’ growth, and to attract and retain top talent. So how exactly can you ensure that your team stays stimulated and creatively engaged?

Allow time for personal projects

Some 78% of developers see coding as a hobby — and the best developers are the ones with a true passion for software development, in and out of the workplace. This means they often have personal passions within the space, be it working with specific languages or platforms, or building certain kinds of applications.

Back in their 2004 IPO letter, Google founders Sergey Brin and Larry Page wrote:

We encourage our employees, in addition to their regular projects, to spend 20% of their time working on what they think will most benefit Google. [This] empowers them to be more creative and innovative. Many of our significant advances have happened in this manner.

At DevSquad, we’ve adopted a similar approach. We have an “open Friday” policy where developers are able to learn and enhance their skills through personal projects. As long as the skills being gained contribute to work we are doing in other areas, the developers can devote that time to whatever they please, whether that’s contributing to open-source projects or building a personal product. In fact, 65% of professional developers on Stack Overflow contribute to open-source projects once a year or more, so it’s likely that this is a keen interest within your development team too.

Not only does this provide a creative outlet for developers; the company also gains from the continuously expanding skill set that comes as a result.

Provide opportunities to learn and teach

One of the most demotivating things for software developers is work that’s either too difficult or too easy. Too easy, and developers get bored; too hard, and morale can dip as a project seems insurmountable. Within our team, we remain hyperaware of the difficulty levels of the project or task at hand and the level of experience of the developers involved.
