FreshRSS


WhatsApp faces $267M fine for breaching Europe’s GDPR

By Natasha Lomas

It’s been a long time coming but Facebook is finally feeling some heat from Europe’s much trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) fine for WhatsApp.

The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), once it began being applied in May 2018.

Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator set the parameters of the investigation itself, choosing to focus on an audit of WhatsApp’s ‘transparency’ obligations.

A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.

The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.

Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated GDPR being applied).

In sum, the DPC found a range of transparency infringements by WhatsApp — spanning articles 5(1)(a); 12, 13 and 14 of the GDPR.

In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline for making all the ordered changes.

In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:

“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.” 

It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.

The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.

So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.

…system to add years until this fine will actually be paid – but at least it's a start… 10k cases per year to go! 😜

— Max Schrems 🇪🇺 (@maxschrems) September 2, 2021


Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it rapped the social network on the knuckles over a historical security breach with a fine of ~$550k.

WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.

Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.

And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.

Is GDPR working?  

The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, who are also of course Internet companies.

Under the EU’s flagship data protection regulation, decisions on cross border cases require agreement from all affected regulators — across the 27 Member States — so while the GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU), objections can be raised to that lead supervisory authority’s conclusions (and any proposed sanctions), as has happened here, in this WhatsApp case.

Ireland originally proposed a far more low-ball penalty of up to €50M for WhatsApp. However other EU regulators objected to the draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.

Through that (admittedly rather painful) joint working, the DPC was required to increase the size of the fine issued to WhatsApp — a mirror of what happened with its draft Twitter decision, where the DPC had also suggested a far smaller penalty in the first instance.

While there is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it has taken well over half a year to hash out all the disputes (over WhatsApp’s lossy hashing and so forth) — the fact that ‘corrections’ are being made to its decisions, and conclusions can land — if not jointly agreed then at least arrived at via a consensus pushed through by the EDPB — is a sign that the process, while slow and creaky, is working.

Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations — with some accusing the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (by the issues it doesn’t open an enquiry into, or the complaints it simply drops or ignores). Its loudest critics argue it’s therefore still a major bottleneck on effective enforcement of data protection rights across the EU — and the corollary of that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.

But while it’s true that a $267M penalty is still the equivalent of a parking ticket for Facebook, orders to change how such adtech giants are able to process people’s information have the potential to be a far more significant correction on problematic business models. Again, though, time will be needed to tell.

In a statement on the WhatsApp decision today, noyb — the privacy advocacy group founded by long-time European privacy campaigner Max Schrems — said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”

Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.

In further remarks, Schrems and noyb said: “WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”

Report: India may be next in line to mandate changes to Apple’s in-app payment rules

By Ingrid Lunden

Summer is still technically in session, but a snowball is slowly developing in the world of apps, and specifically the world of in-app payments. A report in Reuters today says that the Competition Commission of India, the country’s monopoly regulator, will soon be looking at an antitrust suit filed against Apple over how it mandates that app developers use Apple’s own in-app payment system — thereby giving Apple a cut of those payments — when publishers charge users for subscriptions and other items in their apps.

The suit, filed by an Indian non-profit called “Together We Fight Society”, said in a statement to Reuters that it was representing consumer and startup interests in its complaint.

The move would be the latest in what has become a string of challenges from national regulators against app store operators — specifically Apple but also others like Google and WeChat — over how they wield their positions to enforce market practices that critics have argued are anti-competitive. Other jurisdictions that have in recent weeks reached settlements, passed laws or are about to introduce laws include Japan, South Korea, Australia, the U.S. and the European Union.

And in India specifically, the regulator is currently working through a similar investigation as it relates to in-app payments in Android apps, which Google mandates use its proprietary payment system. Google and Android dominate the Indian smartphone market, with the operating system active on 98% of the 520 million devices in use in the country as of the end of 2020.

It will be interesting to watch whether more countries wade in as a result of these developments. Ultimately, to avoid further and deeper regulatory scrutiny, app store operators could be forced to adopt new and more flexible universal policies.

In the meantime, we are seeing changes happen on a country-by-country basis.

Just yesterday, Apple reached a settlement in Japan that will let publishers of “reader” apps (those for using or consuming media like books and news, music, files in the cloud and more) redirect users to external sites as an alternative to Apple’s proprietary in-app payment provision. Although it’s not as seamless as paying within the app, redirecting previously was typically not allowed, and by doing so publishers can avoid Apple’s cut.

South Korean legislators earlier this week approved a measure that will make it illegal for Apple and Google to make a commission by forcing developers to use their proprietary payment systems.

And last week, Apple also made some movements in the U.S. around allowing alternative forms of payments, but relatively speaking the concessions were somewhat indirect: app publishers can refer to alternative, direct payment options in apps now, but not actually offer them. (Not yet at least.)

Some developers and consumers have been arguing for years that Apple’s strict policies should open up more. Apple however has long said in its defense that it mandates certain developer policies to build better overall user experiences, and for reasons of security. But, as app technology has evolved, and consumer habits have changed, critics believe that this position needs to be reconsidered.

One factor in Apple’s defense in India specifically might be the company’s position in the market. Android absolutely dominates India when it comes to smartphones and mobile services, with Apple actually a very small part of the ecosystem.

As of the end of 2020, it accounted for just 2% of the 520 million smartphones in use in the country, according to figures from Counterpoint Research quoted by Reuters. That figure had doubled in the last five years, but it’s a long way from a majority, or even significant minority.

The antitrust complaint in India has yet to be filed formally, but Reuters notes that the wording leans on the fact that anti-competitive practices in payment systems make it less viable for many publishers to exist at all, since the economics simply do not add up:

“The existence of the 30% commission means that some app developers will never make it to the market,” Reuters noted from the filing. “This could also result in consumer harm.”

Reuters notes that the CCI will be reviewing the case in the coming weeks before deciding whether it should run a deeper investigation or dismiss it. It typically does not publish filings during this period.

Apple announces new settlement with Japan allowing developers to link to external websites  

By Kate Park

Apple has reached a settlement with Japan’s regulator under which it will allow developers of “reader” apps to link to their own websites for managing user accounts. The change goes into effect in early 2022.

The settlement comes after the Japan Fair Trade Commission (JFTC) pushed Apple to change its policies on reader apps — like Netflix, Spotify, Audible and Dropbox — that provide purchased content or content subscriptions for digital magazines, newspapers, books, audio, music and video.

“We have great respect for the Japan Fair Trade Commission and appreciate the work we’ve done together, which will help developers of reader apps make it easier for users to set up and manage their apps and services, while protecting their privacy and maintaining their trust,” said Phil Schiller, who oversees the App Store at Apple.

Before the change takes effect next year, Apple will continue to update its guidelines and review process for reader apps to make the App Store a better marketplace for users and developers alike, according to its statement.

Apple announced several App Store updates last week that give developers more flexibility with their customers, and the company also launched the News Partner Program to support local journalists.

Apple will also apply this change globally to all reader apps on the store.

Lawmakers around the world have been increasingly scrutinizing the market dominance of Apple and other tech giants. Australia’s Competition and Consumer Commission is also considering regulations for the digital payment systems of Apple, Google and WeChat, while South Korea became the first country to curb Apple and Google from imposing their own payment systems on in-app purchases.

Apple has more than 30 million registered developers.

How a Vungle-owned mobile marketer sent Fontmaker to the top of the App Store

By Sarah Perez

Does this sound familiar? An app goes viral on social media, often including TikTok, then immediately climbs to the top of the App Store where it gains even more new installs thanks to the heightened exposure. That’s what happened with the recent No. 1 on the U.S. App Store, Fontmaker, a subscription-based fonts app which appeared to benefit from word-of-mouth growth thanks to TikTok videos and other social posts. But what we’re actually seeing here is a new form of App Store marketing — and one which now involves one of the oldest players in the space: Vungle.

Fontmaker, at first glance, seems to be just another indie app that hit it big.

The app, published by an entity called Mango Labs, promises users a way to create fonts using their own handwriting, which they can then access from a custom keyboard for a fairly steep price of $4.99 per week. The app first launched on July 26. Nearly a month later, it was the No. 2 app on the U.S. App Store, according to Sensor Tower data. By August 26, it climbed up one more position to reach No. 1 before slowly dropping down in the top overall free app rankings in the days that followed.

By Aug. 27, it was No. 15, before briefly surging again to No. 4 the following day, then declining once more. Today, the app is No. 54 overall and No. 4 in the competitive Photo & Video category — still, a solid position for a brand-new and somewhat niche product targeting mainly younger users. To date, it’s generated $68,000 in revenue, Sensor Tower reports.

But Fontmaker may not be a true organic success story, despite its Top Charts success being driven by a boost in downloads coming from real users, not bots. Instead, it’s an example of how mobile marketers have figured out how to tap into the influencer community to drive app installs. It’s also an example of how hard it is to differentiate between apps driven by influencer marketing and those that hit the top of the App Store because of true demand — like walkie-talkie app Zello, whose recent trip to No. 1 can be attributed to Hurricane Ida.

As it turns out, Fontmaker is not your typical “indie app.” In fact, it’s unclear who’s really behind it. Its publisher, Mango Labs, LLC, is actually an iTunes developer account owned by the mobile growth company JetFuel, which was recently acquired by the mobile ad and monetization firm Vungle — a longtime and sometimes controversial player in this space, itself acquired by Blackstone in 2019.

Vungle was primarily interested in JetFuel’s main product, an app called The Plug, aimed at influencers.

Through The Plug, mobile app developers and advertisers can connect to JetFuel’s network of over 15,000 verified influencers who have a combined 4 billion Instagram followers, 1.5 billion TikTok followers, and 100 million daily Snapchat views.

While marketers could use the built-in advertising tools on each of these networks to try to reach their target audience, JetFuel’s technology allows marketers to quickly scale their campaigns to reach high-value users in the Gen Z demographic, the company claims. This system can be less labor-intensive than traditional influencer marketing, in some cases. Advertisers pay on a cost-per-action (CPA) basis for app installs. Meanwhile, all influencers have to do is scroll through The Plug to find an app to promote, then post it to their social accounts to start making money.

Image Credits: The Plug’s website, showing influencers how the platform works

So while yes, a lot of influencers may have made TikTok videos about Fontmaker, which prompted consumers to download the app, the influencers were paid to do so. (And often, from what we saw browsing the Fontmaker hashtag, without disclosing that financial relationship in any way — an increasingly common problem on TikTok, and area of concern for the FTC.)

Where things get tricky is in trying to sort out Mango Labs’ relationship with JetFuel/Vungle. As a consumer browsing the App Store, it looks like Mango Labs makes a lot of fun consumer apps of which Fontmaker is simply the latest.

JetFuel’s website helps to promote this image, too.

It had showcased its influencer marketing system using a case study from an “indie developer” called Mango Labs and one of its earlier apps, Caption Pro. Caption Pro launched in Jan. 2018. (App Annie data indicates it was removed from the App Store on Aug. 31, 2021…yes, yesterday).

Image Credits: App Annie

Vungle, however, told TechCrunch “The Caption Pro app no longer exists and has not been live on the App Store or Google Play for a long time.” (We can’t find an App Annie record of the app on Google Play).

They also told us that “Caption Pro was developed by Mango Labs before the entity became JetFuel,” and that the case study was used to highlight JetFuel’s advertising capabilities. (But without clearly disclosing their connection.)

“Prior to JetFuel becoming the influencer marketing platform that it is today, the company developed apps for the App Store. After the company pivoted to become a marketing platform, in February 2018, it stopped creating apps but continued to use the Mango Labs account on occasion to publish apps that it had third-party monetization partnerships with,” the Vungle spokesperson explained.

In other words, the claim being made here is that while Mango Labs was originally the same team — the makers of Caption Pro — that has long since pivoted to become JetFuel, all the newer apps published under “Mango Labs, LLC” were not created by JetFuel’s team itself.

“Any apps that appear under the Mango Labs LLC name on the App Store or Google Play were in fact developed by other companies, and Mango Labs has only acted as a publisher,” the spokesperson said.

Image Credits: JetFuel’s website describing Mango Labs as an “indie developer”

There are reasons why this statement doesn’t quite sit right — and not only because JetFuel’s partners seem happy to hide themselves behind Mango Labs’ name, nor because Mango Labs was a project from the JetFuel team in the past. It’s also odd that Mango Labs and another entity, Takeoff Labs, claim the same set of apps. And like Mango Labs, Takeoff Labs is associated with JetFuel too.

Breaking this down, as of the time of writing, Mango Labs has published several consumer apps on both the App Store and Google Play.

On iOS, this includes the recent No. 1 app Fontmaker, as well as FontKey, Color Meme, Litstick, Vibe, Celebs, FITme Fitness, CopyPaste, and Part 2. On Google Play, it has two more: Stickered and Mango.

Image Credits: Mango Labs

Most of Mango Labs’ App Store listings point to JetFuel’s website as the app’s “developer website,” which would be in line with what Vungle says about JetFuel acting as the apps’ publisher.

What’s odd, however, is that the Mango Labs app Part 2 links to Takeoff Labs’ website from its App Store listing.

The Vungle spokesperson initially told us that Takeoff Labs is “an independent app developer.”

And yet, the Takeoff Labs’ website shows a team which consists of JetFuel’s leadership, including JetFuel co-founder and CEO Tim Lenardo and JetFuel co-founder and CRO JJ Maxwell. Takeoff Labs’ LLC application was also signed by Lenardo.

Meanwhile, Takeoff Labs’ co-founder and CEO Rhai Goburdhun, per his LinkedIn and the Takeoff Labs website, still works there. Asked about this connection, Vungle told us it did not realize the website had not been updated, and that neither JetFuel nor Vungle gained an ownership stake in Takeoff Labs through the acquisition.

Image Credits: Takeoff Labs’ website showing its team, including JetFuel’s co-founders.

Takeoff Labs’ website also shows off its “portfolio” of apps, which includes Celeb, Litstick, and FontKey — three apps that are published by Mango Labs on the App Store.

On Google Play, Takeoff Labs is the developer credited with Celebs, as well as two other apps, Vibe and Teal, a neobank. But on the App Store, Vibe is published by Mango Labs.

Image Credits: Takeoff Labs’ website, showing its app portfolio.

(Not to complicate things further, but there’s also an entity called RealLabs which hosts JetFuel, The Plug and other consumer apps, including Mango — the app published by Mango Labs on Google Play. Someone sure likes naming things “Labs!”)

Vungle claims the confusion here has to do with how it now uses the Mango Labs iTunes account to publish apps for its partners, which is a “common practice” on the App Store. It says it intends to transfer the apps published under Mango Labs to the developers’ accounts, because it agrees this is confusing.

Vungle also claims that JetFuel “does not make nor own any consumer apps that are currently live on the app stores. Any of the apps made by the entity when it was known as Mango Labs have long since been taken down from the app stores.”

JetFuel’s system is messy and confusing, but so far successful in its goals. Fontmaker did make it to No. 1, essentially growth hacked to the top by influencer marketing.

Congrats to @Rhai_Gb & the @Takeoff_Labs team- it's great to be back at #1 overall 🙌

Also a huge accomplishment for @jetfuel_it as the only user acquisition source. The first time we've single-handedly moved an app to #1 Top Free! https://t.co/Cl8ahj8Owo

— Tim L (@telenardo) August 25, 2021

But as a consumer, what this all means is that you’ll never know who actually built the app you’re downloading or whether you were “influenced” to try it through what were, essentially, undisclosed ads.

Fontmaker isn’t the first to growth hack its way to the top through influencer promotions. Summertime hit Poparazzi also hyped itself to the top of the App Store in a similar way, as have many others. But Poparazzi has since sunk to No. 89 in Photo & Video, which shows influence can only take you so far.

As for Fontmaker, paid influence got it to No. 1, but its Top Chart moment was brief.

SEC fines brokerage firms over email hacks that exposed client data

By Carly Page

The U.S. Securities and Exchange Commission has fined several brokerage firms a total of $750,000 for exposing the sensitive personally identifiable information of thousands of customers and clients after hackers took over employee email accounts.

A total of eight entities belonging to three companies have been sanctioned by the SEC, including Cetera (Advisor Networks, Investment Services, Financial Specialists, Advisors and Investment Advisers), Cambridge Investment Research (Investment Research and Investment Research Advisors) and KMS Financial Services.

In a press release, the SEC announced that it had sanctioned the firms for failures in their cybersecurity policies and procedures that allowed hackers to gain unauthorized access to cloud-based email accounts, exposing the personal information of thousands of customers and clients at each firm.

In the case of Cetera, the SEC said that cloud-based email accounts of more than 60 employees were infiltrated by unauthorized third parties for more than three years, exposing at least 4,388 clients’ personal information.

The order states that none of the accounts featured the protections required by Cetera’s policies, and the SEC also charged two of the Cetera entities with sending breach notifications to clients containing “misleading language suggesting that the notifications were issued much sooner than they actually were after discovery of the incidents.”

The SEC’s order against Cambridge concludes that the personal information exposure of at least 2,177 Cambridge customers and clients was the result of lax cybersecurity practices at the firm. 

“Although Cambridge discovered the first email account takeover in January 2018, it failed to adopt and implement firm-wide enhanced security measures for cloud-based email accounts of its representatives until 2021, resulting in the exposure and potential exposure of additional customer and client records and information,” the SEC said. 

The order against KMS is similar; the SEC’s order states that the data of almost 5,000 customers and clients were exposed as a result of the company’s failure to adopt written policies and procedures requiring additional firm-wide security measures until May 2020. 

“Investment advisers and broker-dealers must fulfill their obligations concerning the protection of customer information,” said Kristina Littman, chief of the SEC Enforcement Division’s Cyber Unit. “It is not enough to write a policy requiring enhanced security measures if those requirements are not implemented or are only partially implemented, especially in the face of known attacks.”

All of the parties agreed to resolve the charges and to not commit future violations of the charged provisions, without admitting or denying the SEC’s findings. As part of the settlements, Cetera will pay a penalty of $300,000, while Cambridge and KMS will pay fines of $250,000 and $200,000 respectively.  

Cambridge told TechCrunch that it does not comment on regulatory matters, but said it has and does maintain a comprehensive information security group and procedures to ensure clients’ accounts are fully protected. Cetera and KMS have yet to respond.

This latest action by the SEC comes just weeks after the Commission ordered London-based publishing and education giant Pearson to pay a $1 million fine for misleading investors about a 2018 data breach at the company.

The stars are aligning for federal IT open source software adoption

By Ram Iyer
Venky Adivi Contributor
Venky Adivi is director of strategic capture and proposal management at Canonical, publisher of Ubuntu.

In recent years, the private sector has been spurning proprietary software in favor of open source software and development approaches. For good reason: The open source avenue saves money and development time by using freely available components instead of writing new code, enables new applications to be deployed quickly and eliminates vendor lock-in.

The federal government has been slower to embrace open source, however. Efforts to change are complicated by the fact that many agencies employ large legacy IT infrastructure and systems to serve millions of people and are responsible for a plethora of sensitive data. Washington spends tens of billions every year on IT, but with each agency essentially acting as its own enterprise, decision-making is far more decentralized than it would be at, say, a large bank.

While the government has made a number of moves in a more open direction in recent years, the story of open source in federal IT has often seemed more about potential than reality.

But there are several indications that this is changing and that the government is reaching its own open source adoption tipping point. The costs of producing modern applications to serve increasingly digital-savvy citizens keep rising, and budget-constrained agencies must find ways to improve service while saving taxpayer dollars.

Sheer economics dictate an increased role for open source, as do a variety of other benefits. Because its source code is publicly available, open source software encourages continuous review by others outside the initial development team to promote increased software reliability and security, and code can be easily shared for reuse by other agencies.

Here are five signs I see that the U.S. government is increasingly rallying around open source.

More dedicated resources for open source innovation

Two initiatives have gone a long way toward helping agencies advance their open source journeys.

18F, a team within the General Services Administration that acts as a consultancy to help other agencies build digital services, is an ardent open source backer. Its work has included developing a new application for accessing Federal Election Commission data, as well as software that has allowed the GSA to improve its contractor hiring process.

18F — short for GSA headquarters’ address of 1800 F St. — reflects the same grassroots ethos that helped spur open source’s emergence and momentum in the private sector. “The code we create belongs to the public as a part of the public domain,” the group says on its website.

Five years ago this August, the Obama administration introduced a new Federal Source Code Policy that called on every agency to adopt an open source approach, create a source code inventory, and publish at least 20% of written code as open source. The administration also launched Code.gov, giving agencies a place to locate open source solutions that other departments are already using.

The results have been mixed, however. Most agencies are now consistent with the federal policy’s goal, though many still have work to do in implementation, according to Code.gov’s tracker. And a report by a Code.gov staffer found that some agencies were embracing open source more than others.

Still, Code.gov says the growth of open source in the federal government has gone farther than initially estimated.

A push from the new administration

The American Rescue Plan, a $1.9 trillion pandemic relief bill that President Biden signed in early March 2021, contained $1 billion for the GSA’s Technology Modernization Fund, which finances new federal technology projects. In January, the White House said upgrading federal IT infrastructure and addressing recent breaches such as the SolarWinds hack was “an urgent national security issue that cannot wait.”

It’s fair to assume open source software will form the foundation of many of these efforts, because White House technology director David Recordon is a long-time open source advocate and once led Facebook’s open source projects.

A changing skills environment

Federal IT employees who spent much of their careers working on legacy systems are starting to retire, and their successors are younger people who came of age in an open source world and are comfortable with it.

About 81% of private sector hiring managers surveyed by the Linux Foundation said hiring open source talent is a priority and that they’re more likely than ever to seek out professionals with certifications. You can be sure the public sector is increasingly mirroring this trend as it recognizes a need for talent to support open source’s growing foothold.

Stronger capabilities from vendors

By partnering with the right commercial open source vendor, agencies can drive down infrastructure costs and more efficiently manage their applications. For example, vendors have made great strides in addressing security requirements laid out by policies such as the Federal Information Security Modernization Act (FISMA), Federal Information Processing Standards (FIPS) and the Federal Risk and Authorization Management Program (FedRAMP), making it easier to stay compliant.

In addition, some vendors offer powerful infrastructure automation tools and generous support packages, so federal agencies don’t have to go it alone as they accelerate their open source strategies. Linux distributions like Ubuntu provide a consistent developer experience from laptop or workstation to the cloud and out to the edge, across public clouds, containers, and physical and virtual infrastructure.

This makes application development a well-supported activity, with 24/7 access to enterprise support teams through web portals, knowledge bases or by phone.

The pandemic effect

Whether it’s accommodating more employees working from home or meeting higher citizen demand for online services, COVID-19 has forced large swaths of the federal government to up their digital game. Open source allows legacy applications to be moved to the cloud, new applications to be developed more quickly, and IT infrastructures to adapt to rapidly changing demands.

As these signs show, the federal government continues to move rapidly from talk to action in adopting open source.

Who wins? Everyone!

Founders Fund backs Royal, a music marketplace planning to sell song rights as NFTs

By Lucas Matney

Founders Fund and Paradigm are leading an investment in a platform that’s aiming to wed music rights with NFTs, allowing users to buy shares of songs through the company’s marketplace, earning royalties as the music they’ve invested in gains popularity.

The venture, called Royal, is led by Justin Blau, an EDM artist who performs under the name 3LAU, and JD Ross, a co-founder of home-buying startup Opendoor. Blau has been one of the more active and visible figures in the NFT community, launching a number of upstart efforts aimed at exploring how musicians can monetize their work through crypto markets. Blau says that as Covid cut off his ability to tour, he dug into NFTs full-time, aiming to find a way to flip the power dynamics on “platforms that were extracting all the value from creators.”

Back in March, weeks before many would first hear about NFTs following the $69 million Beeple sale at Christie’s, Blau set his own record, selling a batch of custom songs and custom artwork for a collective $11.7 million worth of cryptocurrency.

Royal’s investment announcement comes just as a broader bull run for the NFT market seems to reach a fever pitch, with investors pouring hundreds of millions of dollars worth of cryptocurrencies into community NFT projects like CryptoPunks and Bored Apes. While visual artists interested in putting their digital works on the blockchain have seen a number of platforms spring up and mature in recent months to simplify the process of monetizing their art, there have been fewer efforts focused on musicians.

Paradigm and Founders Fund are leading a $16 million seed round in Royal, with participation from Atomic — where Ross was recently a General Partner. Ross’s fellow Opendoor co-founder Keith Rabois led the deal for Founders Fund.

The company isn’t sharing much about its launch or product plans, including when the platform will actually begin selling fractionalized assets, but it seems pretty clear the company will be heavily leveraging Blau’s music and position inside the music industry to bring early fans/investors to the platform. Users can currently sign up for early access on the site.

As NFT startups chase more complex ownership splits that aim to help creators share their success with fans, there’s plenty of speculation taking off around how regulators will eventually treat them. While the ICO boom of 2017 led to plenty of founders receiving SEC letters alleging securities fraud, entrepreneurs in this wave seem to be working a little harder to avoid that outcome. Blau says that the startup’s team is working closely with legal counsel to ensure the startup is staying fully compliant.

The company’s bigger challenge may be ensuring that democratizing access to buying up music rights actually benefits the fans of those artists or creates new fans for them, given the wide landscape of crypto speculators looking to diversify. That said, Blau notes there’s plenty of room for improvement in the current ownership of music royalties, which is largely concentrated among labels, private equity groups and hedge funds.

“A true fan might want to own something way earlier than a speculator would even get wind of it,” Blau says. “Democratizing access to asset classes is a huge part of crypto’s future.”

Google confirms it’s pulling the plug on Streams, its UK clinician support app

By Natasha Lomas

Google is infamous for spinning up products and killing them off, often in very short order. It’s an annoying enough habit when it’s stuff like messaging apps and games. But the tech giant’s ambitions stretch into many domains that touch human lives these days. Including, most directly, healthcare. And — it turns out — so does Google’s tendency to kill off products that its PR has previously touted as ‘life saving’.

To wit: Following a recent reconfiguration of Google’s health efforts — reported earlier by Business Insider — the tech giant confirmed to TechCrunch that it is decommissioning its clinician support app, Streams.

The app, which Google Health PR bills as a “mobile medical device”, was developed back in 2015 by DeepMind, an AI division of Google — and has been used by the UK’s National Health Service in the years since, with a number of Trusts inking deals with DeepMind Health to roll out Streams to their clinicians.

At the time of writing, one NHS Trust — London’s Royal Free — is still using the app in its hospitals.

But, presumably, not for too much longer since Google is in the process of taking Streams out back to be shot and tossed into its deadpool — alongside the likes of its ill-fated social network, Google+, and internet balloon company Loon, to name just two of a frankly endless list of now defunct Alphabet/Google products.

Other NHS Trusts we contacted which had previously rolled out Streams told us they have already stopped using the app.

University College London NHS Trust confirmed to TechCrunch that it severed ties with Google Health earlier this year.

“Our agreement with Google Health (initially DeepMind) came to an end in March 2021 as originally planned. Google Health deleted all the data it held at the end of the [Streams] project,” a UCL NHS Trust spokesperson told TechCrunch.

Imperial College Healthcare NHS Trust also told us it stopped using Streams this summer (in July) — and said patient data is in the process of being deleted.

“Following the decommissioning of Streams at the Trust earlier this summer, data that has been processed by Google Health to provide the service to the Trust will be deleted and the agreement has been terminated,” a spokesperson said.

“As per the data sharing agreement, any patient data that has been processed by Google Health to provide the service will be deleted. The deletion process is started once the agreement has been terminated,” they added, saying the contractual timeframe for Google deleting patient data is six months.

Another Trust, Taunton & Somerset, also confirmed its involvement with Streams had already ended. 

The Streams deals DeepMind inked with NHS Trusts were for five years so these contracts were likely approaching the end of their terms, anyway.

Contract extensions would have had to be agreed by both parties. And Google’s decision to decommission Streams may be factoring in a lack of enthusiasm from involved Trusts to continue using the software — although if that’s the case it may, in turn, be a reflection of Trusts’ perceptions of Google’s weak commitment to the project.

Neither side is saying much publicly.

But as far as we’re aware the Royal Free is the only NHS Trust still using the clinician support app as Google prepares to cut off Streams’ life support.

No more Streams?

The Streams story has plenty of wrinkles, to put it politely.

For one thing, despite being developed by Google’s AI division — and despite DeepMind founder Mustafa Suleyman saying the goal for the project was to find ways to integrate AI into Streams so the app could generate predictive healthcare alerts — it doesn’t involve any artificial intelligence.

An algorithm in Streams alerts doctors to the risk of a patient developing acute kidney injury but relies on an existing AKI (acute kidney injury) algorithm developed by the NHS. So Streams essentially digitized and mobilized existing practice.

As a result, it always looked odd that an AI division of an adtech giant would be so interested in building, provisioning and supporting clinician support software over the long term. But then — as it panned out — neither DeepMind nor Google were in it for the long haul at the patient’s bedside.

DeepMind and the NHS Trust it worked with to develop Streams (the aforementioned Royal Free) started out with wider ambitions for their partnership — as detailed in an early 2016 memo we reported on, which set out a five-year plan to bring AI to healthcare. Plus, as we noted above, Suleyman kept up the push for years — writing later in 2019 that: “Streams doesn’t use artificial intelligence at the moment, but the team now intends to find ways to safely integrate predictive AI models into Streams in order to provide clinicians with intelligent insights into patient deterioration.”

A key misstep for the project emerged in 2017 — through press reporting of a data scandal, as details of the full scope of the Royal Free-DeepMind data-sharing partnership were published by New Scientist (which used a freedom of information request to obtain contracts the pair had not made public).

The UK’s data protection watchdog went on to find that the Royal Free had not had a valid legal basis when it passed information on millions of patients to DeepMind during the development phase of Streams.

Which perhaps explains DeepMind’s eventually cooling ardour for a project it had initially thought — with the help of a willing NHS partner — would provide it with free and easy access to a rich supply of patient data for it to train up healthcare AIs which it would then be, seemingly, perfectly positioned to sell back into the self same service in future years. Price tbc.

No one involved in that thought had properly studied the detail of UK healthcare data regulation, clearly.

Or — most importantly — bothered to consider fundamental patient expectations about their private information.

So it was not actually surprising when, in 2018, DeepMind announced that it was stepping away from Streams — handing the app (and all its data) to Google Health — Google’s internal health-focused division — which went on to complete its takeover of DeepMind Health in 2019. (Although it was still shocking, as we opined at the time.)

It was Google Health that Suleyman suggested would be carrying forward the work to bake AI into Streams, writing at the time of the takeover that: “The combined experience, infrastructure and expertise of DeepMind Health teams alongside Google’s will help us continue to develop mobile tools that can support more clinicians, address critical patient safety issues and could, we hope, save thousands of lives globally.”

A particular irony attached to the Google Health takeover bit of the Streams saga is the fact that DeepMind had, when under fire over its intentions toward patient data, claimed people’s medical information would never be touched by its adtech parent.

Until, of course, it went on to hand the whole project off to Google — and then lauded the transfer as great news for clinicians and patients!

Google’s takeover of Streams meant NHS Trusts that wanted to continue using the app had to ink new contracts directly with Google Health. And all those who had rolled out the app did so. It’s not like they had much choice if they did want to continue.

Again, jump forward a couple of years and it’s Google Health now suddenly facing a major reorg — with Streams in the frame for the chop as part of Google’s perpetually reconfiguring project priorities.

It is quite the ignominious ending to an already infamous project.

DeepMind’s involvement with the NHS had previously been seized upon by the UK government — with former health secretary, Matt Hancock, trumpeting an AI research partnership between the company and Moorfields Eye Hospital as an exemplar of the kind of data-driven innovation he suggested would transform healthcare service provision in the UK.

Luckily for Hancock he didn’t pick Streams as his example of great “healthtech” innovation. (Moorfields confirmed to us that its research-focused partnership with Google Health is continuing.)

The hard lesson here appears to be don’t bet the nation’s health on an adtech giant that plays fast and loose with people’s data and doesn’t think twice about pulling the plug on digital medical devices as internal politics dictate another chair-shuffling reorg.

Patient data privacy advocacy group, MedConfidential — a key force in warning over the scope of the Royal Free’s DeepMind data-sharing deal — urged Google to ditch the spin and come clean about the Streams cock-up, once and for all.

“Streams is the Windows Vista of Google — a legacy it hopes to forget,” MedConfidential’s Sam Smith told us. “The NHS relies on trustworthy suppliers, but companies that move on after breaking things create legacy problems for the NHS, as we saw with WannaCry. Google should admit the decision, delete the data, and learn that experimenting on patients is regulated for a reason.”

Questions over Royal Free’s ongoing app use

Despite the Information Commissioner’s Office’s 2017 finding that the Royal Free’s original data-sharing deal with DeepMind was improper, it’s notable that the London Trust stuck with Streams — continuing to pass data to DeepMind.

The original patient data-set that was shared with DeepMind without a valid legal basis was never ordered to be deleted. Nor, presumably, has it since been deleted. Hence the call for Google to delete the data now.

Ironically the improperly acquired data should (in theory) finally get deleted — once contractual timeframes for any final back-up purges elapse — but only because it’s Google itself planning to switch off Streams.

The Royal Free confirmed to us that it is still using Streams, even as Google spins the dial on its commercial priorities for the umpteenth time and decides it’s not interested in this particular bit of clinician support, after all.

We put a number of questions to the Trust — including about the deletion of patient data — none of which it responded to.

Instead, two days later, it sent us this one-line statement which raises plenty more questions — saying only that: “The Streams app has not been decommissioned for the Royal Free London and our clinicians continue to use it for the benefit of patients in our hospitals.”

It is not clear how long the Trust will be able to use an app Google is decommissioning. Nor how wise that might be for patient safety — such as if the app won’t get necessary security updates, for example.

We’ve also asked Google how long it will continue to support the Royal Free’s usage — and when it plans to finally switch off the service. As well as which internal group will be responsible for any SLA requests coming from the Royal Free as the Trust continues to use software Google Health is decommissioning — and will update this report with any response. (Earlier a Google spokeswoman told us the Royal Free would continue to use Streams for the ‘near future’ — but she did not offer a specific end date.)

In press reports this month on the Google Health reorg — covering an internal memo first obtained by Business Insider —  teams working on various Google health projects were reported to be being split up to other areas, including some set to report into Google’s search and AI teams.

So which Google group will take over responsibility for the handling of the SLA with the Royal Free, as a result of the Google Health reshuffle, is an interesting question.

In earlier comments, Google’s spokeswoman told us the new structure for its reconfigured health efforts — which are still being badged ‘Google Health’ — will encompass all its work in health and wellness, including Fitbit, as well as AI health research, Google Cloud and more.

On Streams specifically, she said the app hasn’t made the cut because when Google assimilated DeepMind Health it decided to focus its efforts on another digital offering for clinicians — called Care Studio — which it’s currently piloting with two US health systems (namely: Ascension & Beth Israel Deaconess Medical Center). 

And anyone who’s ever tried to use a Google messaging app will surely have strong feelings of déjà vu on reading that…

DeepMind’s co-founder, meanwhile, appears to have remained blissfully ignorant of Google’s intentions to ditch Streams in favor of Care Studio — tweeting back in 2019 as Google completed the takeover of DeepMind Health that he had been “proud to be part of this journey”, and also touting “huge progress delivered already, and so much more to come for this incredible team”.

In the end, Streams isn’t being ‘supercharged’ (or levelled up to use current faddish political parlance) with AI — as his 2019 blog post had envisaged — Google is simply taking it out of service. Like it did with Reader or Allo or Tango or Google Play Music, or…. well, the list goes on.

Suleyman’s own story contains some wrinkles, too.

He is no longer at DeepMind but has himself been ‘folded into’ Google — joining as a VP of artificial intelligence policy, after initially being placed on an extended leave of absence from DeepMind.

In January, allegations that he had bullied staff were reported by the WSJ. And then, earlier this month, Business Insider expanded on that — reporting follow up allegations that there had been confidential settlements between DeepMind and former employees who had worked under Suleyman and complained about his conduct (although DeepMind denied any knowledge of such settlements).

In a statement to Business Insider, Suleyman apologized for his past behavior — and said that in 2019 he had “accepted feedback that, as a co-founder at DeepMind, I drove people too hard and at times my management style was not constructive”, adding that he had taken time out to start working with a coach and that that process had helped him “reflect, grow and learn personally and professionally”.

We asked Google if Suleyman would like to comment on the demise of Streams — and on his employer’s decision to kill the project — given his high hopes for the project and all the years of work he put into the health push. But the company did not engage with that request.

We also offered Suleyman the chance to comment directly. We’ll update this story if he responds.

Rocket Lab’s Mars mission gets green light from NASA

By Devin Coldewey

Rocket Lab is one step closer to going to Mars with NASA’s approval of the company’s Photon spacecraft for an upcoming science mission. If all continues according to plan, the two craft will launch in 2024 and arrive at the red planet 11 months later to study its magnetosphere.

The mission is known as the Escape and Plasma Acceleration and Dynamics Explorers, or ESCAPADE (hats off to whoever worked that one out), and was proposed for a small satellite science program back in 2019, eventually being chosen as a finalist. UC Berkeley researchers are the main force behind the science part.

These satellites have to be less than 180 kilograms (about 400 pounds) and must perform standalone science missions, part of a new program aiming at more lightweight, shorter lead missions that can be performed with strong commercial industry collaboration. A few concepts have been baking since the original announcement of the program, and ESCAPADE just passed Key Decision Point C, meaning it’s ready to go from concept to reality.

This particular mission is actually a pair of satellites, a perk that no doubt contributed to its successful selection. Rocket Lab’s whole intention with the Photon program is to provide a more or less turnkey design for various space operations, from orbital work to interplanetary science missions like this one.

Interestingly, Rocket Lab won’t actually be launching the mission aboard one of its Electron rockets — the satellites will fly on a “NASA-provided commercial launch vehicle,” which leaves the choice of launcher to NASA. Perhaps by that time the company will be in the running for the contract, but for now Rocket Lab is only building the spacecraft, including most of the nonscientific onboard components: navigation, orientation, propulsion, etc.

“ESCAPADE is an innovative mission that demonstrates that advanced interplanetary science is now within reach for a fraction of traditional costs, and we’re proud to make it possible with Photon. We are delighted to receive the green light from NASA to proceed to flight,” said Rocket Lab founder and CEO Peter Beck in the company’s announcement of the milestone.

Rocket Lab is already under contract to lift a CubeSat to cislunar orbit for Artemis purposes, and has locked in a deal with Varda Space Industries to build that company’s spacecraft, for launch in 2023 and 2024.

Today’s real story: The Facebook monopoly

By Walter Thompson
Daniel Liss Contributor
Daniel Liss is the founder and CEO of Dispo, the digital disposable camera social network.

Facebook is a monopoly. Right?

Mark Zuckerberg appeared on national TV today to make a “special announcement.” The timing could not be more curious: Today is the day Lina Khan’s FTC refiled its case to dismantle Facebook’s monopoly.

To the average person, Facebook’s monopoly seems obvious. “After all,” as James E. Boasberg of the U.S. District Court for the District of Columbia put it in his recent decision, “No one who hears the title of the 2010 film ‘The Social Network’ wonders which company it is about.” But obviousness is not an antitrust standard. Monopoly has a clear legal meaning, and thus far Lina Khan’s FTC has failed to meet it. Today’s refiling is much more substantive than the FTC’s first foray. But it’s still lacking some critical arguments. Here are some ideas from the front lines.

To the average person, Facebook’s monopoly seems obvious. But obviousness is not an antitrust standard.

First, the FTC must define the market correctly: personal social networking, which includes messaging. Second, the FTC must establish that Facebook controls over 60% of the market — the correct metric to establish this is revenue.

Though consumer harm is a well-known test of monopoly determination, our courts do not require the FTC to prove that Facebook harms consumers to win the case. As an alternative pleading, though, the government can present a compelling case that Facebook harms consumers by suppressing wages in the creator economy. If the creator economy is real, then the value of ads on Facebook’s services is generated through the fruits of creators’ labor; no one would watch the ads before videos or in between posts if the user-generated content was not there. Facebook has harmed consumers by suppressing creator wages.

A note: This is the first of a series on the Facebook monopoly. I am inspired by Cloudflare’s recent post explaining the impact of Amazon’s monopoly in their industry. Perhaps it was a competitive tactic, but I genuinely believe it more a patriotic duty: guideposts for legislators and regulators on a complex issue. My generation has watched with a combination of sadness and trepidation as legislators who barely use email question the leading technologists of our time about products that have long pervaded our lives in ways we don’t yet understand. I, personally, and my company both stand to gain little from this — but as a participant in the latest generation of social media upstarts, and as an American concerned for the future of our democracy, I feel a duty to try.

The problem

According to the court, the FTC must meet a two-part test: First, the FTC must define the market in which Facebook has monopoly power, established by the D.C. Circuit in Neumann v. Reinforced Earth Co. (1986). This is the market for personal social networking services, which includes messaging.

Second, the FTC must establish that Facebook controls a dominant share of that market, which courts have defined as 60% or above, established by the 3rd U.S. Circuit Court of Appeals in FTC v. AbbVie (2020). The right metric for this market share analysis is unequivocally revenue — daily active users (DAU) x average revenue per user (ARPU). And Facebook controls over 90%.
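The revenue test is simple arithmetic: compute each service’s revenue as DAU x ARPU, sum across the market, and divide. Here is a minimal Python sketch of that calculation; the service names and figures are invented placeholders for illustration, not actual company metrics:

```python
def revenue(dau_millions: float, arpu_dollars: float) -> float:
    """Revenue proxy: daily active users x average revenue per user."""
    return dau_millions * arpu_dollars

# Hypothetical personal social networking market (all numbers invented
# for illustration; they are not real DAU or ARPU figures).
services = {
    "incumbent": revenue(2500, 9.0),
    "rival_a": revenue(300, 3.5),
    "rival_b": revenue(200, 2.5),
}

total = sum(services.values())
shares = {name: rev / total for name, rev in services.items()}

for name, share in shares.items():
    print(f"{name}: {share:.1%}")
```

With these toy inputs the incumbent’s revenue share lands far above the 60% dominance threshold, which is the shape of the argument the FTC would need to make with real figures.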

The answer to the FTC’s problem is hiding in plain sight: Snapchat’s investor presentations:

Snapchat July 2021 investor presentation: Significant DAU and ARPU Opportunity. Image Credits: Snapchat

This is a chart of Facebook’s monopoly — 91% of the personal social networking market. The gray blob looks awfully like a vast oil deposit, successfully drilled by Facebook’s Standard Oil operations. Snapchat and Twitter are the small wildcatters, nearly irrelevant compared to Facebook’s scale. It should not be lost on any market observers that Facebook once tried to acquire both companies.

The market includes messaging

The FTC initially claimed that Facebook has a monopoly of the “personal social networking services” market. The complaint excluded “mobile messaging” from Facebook’s market “because [messaging apps] (i) lack a ‘shared social space’ for interaction and (ii) do not employ a social graph to facilitate users’ finding and ‘friending’ other users they may know.”

This is incorrect because messaging is inextricable from Facebook’s power. Facebook demonstrated this with its WhatsApp acquisition, promotion of Messenger and prior attempts to buy Snapchat and Twitter. Any personal social networking service can expand its features — and Facebook’s moat is contingent on its control of messaging.

The more time users spend in an ecosystem, the more valuable it becomes. Value in social networks grows, depending on whom you ask, quadratically (Metcalfe’s law) or as n log n (Zipf’s law). Either way, in social networks, 1+1 is much more than 2.

Social networks become valuable based on the ever-increasing number of nodes, upon which companies can build more features. Zuckerberg coined the “social graph” to describe this relationship. The monopolies of Line, Kakao and WeChat in Japan, Korea and China prove this clearly. They began with messaging and expanded outward to become dominant personal social networking behemoths.
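To make the superlinearity concrete, here is a small Python sketch comparing the two valuation rules mentioned above: Metcalfe’s quadratic law (value proportional to pairwise connections) and the gentler n log n estimate associated with Zipf’s law. The network sizes are arbitrary toy inputs, not a market model.

```python
import math

def metcalfe(n: int) -> float:
    """Metcalfe's law: value scales with pairwise connections, ~n^2."""
    return n * (n - 1) / 2

def zipf(n: int) -> float:
    """Zipf-based estimate: value scales as n * log(n)."""
    return n * math.log(n)

# One merged network of 2,000 users versus two separate 1,000-user networks.
for law in (metcalfe, zipf):
    merged = law(2000)
    separate = 2 * law(1000)
    print(f"{law.__name__}: merged network is {merged / separate:.2f}x two halves")
```

Under either rule the merged network is worth more than the sum of its halves, which is the economic logic behind acquiring a messaging service rather than competing with it.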

In today’s refiling, the FTC explains that Facebook, Instagram and Snapchat are all personal social networking services built on three key features:

  1. “First, personal social networking services are built on a social graph that maps the connections between users and their friends, family, and other personal connections.”
  2. “Second, personal social networking services include features that many users regularly employ to interact with personal connections and share their personal experiences in a shared social space, including in a one-to-many ‘broadcast’ format.”
  3. “Third, personal social networking services include features that allow users to find and connect with other users, to make it easier for each user to build and expand their set of personal connections.”

Unfortunately, this is only partially right. In social media’s treacherous waters, as the FTC has struggled to articulate, feature sets are routinely copied and cross-promoted. How can we forget Instagram’s copying of Snapchat’s stories? Facebook has ruthlessly copied features from the most successful apps on the market from inception. Its launch of a Clubhouse competitor called Live Audio Rooms is only the most recent example. Twitter and Snapchat are absolutely competitors to Facebook.

Messaging must be included to demonstrate Facebook’s breadth and voracious appetite to copy and destroy. WhatsApp and Messenger have over 2 billion and 1.3 billion users respectively. Given the ease of feature copying, a messaging service of WhatsApp’s scale could become a full-scale social network in a matter of months. This is precisely why Facebook acquired the company. Facebook’s breadth in social media services is remarkable. But the FTC needs to understand that messaging is a part of the market. And this acknowledgement would not hurt their case.

The metric: Revenue shows Facebook’s monopoly

Boasberg believes revenue is not an apt metric for calculating personal social networking market share: “The overall revenues earned by PSN services cannot be the right metric for measuring market share here, as those revenues are all earned in a separate market — viz., the market for advertising.” He is confusing business model with market. Not all advertising is cut from the same cloth. In today’s refiling, the FTC correctly identifies “social advertising” as distinct from “display advertising.”

But it goes off the deep end trying to avoid naming revenue as the distinguishing market share metric. Instead the FTC cites “time spent, daily active users (DAU), and monthly active users (MAU).” In a world where Facebook Blue and Instagram compete only with Snapchat, these metrics might bring Facebook Blue and Instagram combined over the 60% monopoly hurdle. But the FTC does not make a sufficiently convincing market definition argument to justify the choice of these metrics. Facebook should be compared to other personal social networking services such as Discord and Twitter — and their correct inclusion in the market would undermine the FTC’s choice of time spent or DAU/MAU.

Ultimately, cash is king. Revenue is what counts and what the FTC should emphasize. As Snapchat shows above, revenue in the personal social media industry is calculated by ARPU x DAU. The personal social media market is a different market from the entertainment social media market (where Facebook competes with YouTube, TikTok and Pinterest, among others). And this too is a separate market from the search advertising market (Google). Not all advertising-based consumer technology is built the same. Again, advertising is a business model, not a market.

In the media world, for example, Netflix’s subscription revenue clearly competes in the same market as CBS’ advertising model. News Corp.’s acquisition of Facebook’s early competitor MySpace spoke volumes on the internet’s potential to disrupt and destroy traditional media advertising markets. Snapchat has chosen to pursue advertising, but incipient competitors like Discord are successfully growing using subscriptions. But their market share remains a pittance compared to Facebook.

An alternative pleading: Facebook’s market power suppresses wages in the creator economy

The FTC has correctly argued for the smallest possible market for their monopoly definition. Personal social networking, of which Facebook controls at least 80%, should not (in their strongest argument) include entertainment. This is the narrowest argument to make with the highest chance of success.

But they could choose to make a broader argument in the alternative, one that takes a bigger swing. As Lina Khan famously noted about Amazon in her 2017 note that began the New Brandeis movement, the traditional economic consumer harm test does not adequately address the harms posed by Big Tech. The harms are too abstract. As White House advisor Tim Wu argues in “The Curse of Bigness,” and Judge Boasberg acknowledges in his opinion, antitrust law does not hinge solely upon price effects. Facebook can be broken up without proving the negative impact of price effects.

However, Facebook has hurt consumers. Consumers are the workers whose labor constitutes Facebook’s value, and they’ve been underpaid. If you define personal networking to include entertainment, then YouTube is an instructive example. On both YouTube and Facebook properties, influencers can capture value by charging brands directly. That’s not what we’re talking about here; what matters is the percent of advertising revenue that is paid out to creators.

YouTube’s traditional percentage is 55%. YouTube announced it has paid $30 billion to creators and rights holders over the last three years. Let’s conservatively say that half of the money goes to rights holders; that means creators on average have earned $15 billion, which would mean $5 billion annually, a meaningful slice of YouTube’s $46 billion in revenue over that time. So in other words, YouTube paid creators a third of its revenue (this admittedly ignores YouTube’s non-advertising revenue).

Facebook, by comparison, announced just weeks ago a paltry $1 billion program over a year and change. Sure, creators may make some money from interstitial ads, but Facebook does not announce the percentage of revenue it hands to creators because it would be insulting. Over the equivalent three-year period of YouTube’s declaration, Facebook has generated $210 billion in revenue. One-third of this revenue paid to creators would represent $70 billion, or $23 billion a year.
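The back-of-envelope math above, using only the figures cited in the text (and the assumption that half of YouTube’s $30 billion payout went to rights holders), works out as follows:

```python
# Creator-payout arithmetic using the figures cited in the article.
youtube_payout_3yr = 30e9                  # paid to creators AND rights holders
creators_share = youtube_payout_3yr / 2    # assume half goes to rights holders
youtube_revenue_3yr = 46e9

payout_ratio = creators_share / youtube_revenue_3yr  # roughly a third

# Applying that one-third ratio to Facebook's revenue over the same period:
facebook_revenue_3yr = 210e9
hypothetical_payout = facebook_revenue_3yr / 3       # $70B total
per_year = hypothetical_payout / 3                   # about $23B a year

print(f"YouTube payout ratio: {payout_ratio:.0%}")
print(f"Facebook hypothetical: ${hypothetical_payout / 1e9:.0f}B total")
```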

Why hasn’t Facebook paid creators before? Because it hasn’t needed to do so. Facebook’s social graph is so large that creators must post there anyway — the scale afforded by success on Facebook Blue and Instagram allows creators to monetize through directly selling to brands. Facebook’s ads have value because of creators’ labor; if users did not generate content, the social graph would not exist. Creators deserve more than the scraps they generate on their own. Facebook suppresses creators’ wages because it can. This is what monopolies do.

Facebook’s Standard Oil ethos

Facebook has long been the Standard Oil of social media, using its core monopoly to begin its march upstream and down. Zuckerberg announced in July and renewed his focus today on the metaverse, a market Roblox has pioneered. After achieving a monopoly in personal social media and competing ably in entertainment social media and virtual reality, Facebook’s drilling continues. Yes, Facebook may be free, but its monopoly harms Americans by stifling creator wages. The antitrust laws dictate that consumer harm is not a necessary condition for proving a monopoly under the Sherman Act; monopolies in and of themselves are illegal. By refiling with the correct market definition and market share metric, the FTC stands more than a fighting chance. It should win.

A version of this article originally appeared on Substack.

Two senators urge the FTC to investigate Tesla over ‘Full Self-Driving’ statements

By Aria Alamalhodaei

Two Democratic senators have asked the new chair of the Federal Trade Commission to investigate Tesla’s statements about the autonomous capabilities of its Autopilot and Full Self-Driving systems. The senators, Edward Markey (D-Mass.) and Richard Blumenthal (D-Conn.), expressed particular concern over Tesla misleading customers into thinking their vehicles are capable of fully autonomous driving.

“Tesla’s marketing has repeatedly overstated the capabilities of its vehicles, and these statements increasingly pose a threat to motorists and other users of the road,” they said. “Accordingly, we urge you to open an investigation into potentially deceptive and unfair practices in Tesla’s advertising and marketing of its driving automation systems and take appropriate enforcement action to ensure the safety of all drivers on the road.”

In their letter to new FTC Chair Lina Khan, they point to a 2019 YouTube video Tesla posted to its channel, which shows a Tesla driving autonomously. The roughly two-minute video is titled “Full Self-Driving” and has been viewed more than 18 million times.

“Their claims put Tesla drivers – and all of the travelling public – at risk of serious injury or death,” the senators wrote.

When it comes to Tesla and formal investigations, when it rains, it pours. The letter was published just two days after the National Highway Traffic Safety Administration said it had opened a preliminary investigation into incidents involving Teslas crashing into parked emergency vehicles.

Lina Khan is the youngest person to ever chair the FTC. She’s widely considered the most progressive appointment in recent history, particularly for her scholarship on antitrust law. But should the FTC choose to investigate Tesla, the case would likely have nothing to do with antitrust law and instead fall under the purview of consumer protection. The FTC has the authority to investigate false or misleading claims from companies regarding their products.

This is not the first time prominent figures have called on the FTC to open an investigation into Tesla’s claims. The Center for Auto Safety and Consumer Watchdog, two special interest groups, also sent a letter in 2018 to the commission over the marketing of Autopilot features. The following year, the NHTSA urged the FTC to investigate whether claims made by Tesla CEO Elon Musk on the Model 3’s safety “constitute[d] unfair or deceptive acts or practices.”

Tesla charges $10,000 for access to a “Full Self-Driving” option at the point of sale, or as a subscription. The company is currently testing beta version 9 of FSD with a few thousand drivers, but the senators take aim at the beta version, too. “After the [beta 9] update, drivers have posted videos online showing their updated Tesla vehicles making unexpected maneuvers that require human intervention to prevent a crash,” they write. “Mr. Musk’s tepid precautions tucked away on social media are no excuse for misleading drivers and endangering the lives of everyone on the road.”

How the law got it wrong with Apple Card

By Ram Iyer
Liz O'Sullivan Contributor
Liz O’Sullivan is CEO of Parity, a platform that automates model risk and algorithmic governance for the enterprise. She also advises the Surveillance Technology Oversight Project and the Campaign to Stop Killer Robots on all things artificial intelligence.

Advocates of algorithmic justice have begun to see their proverbial “days in court” with legal investigations of enterprises like UHG and Apple Card. The Apple Card case is a strong example of how current anti-discrimination laws fall short of the fast pace of scientific research in the emerging field of quantifiable fairness.

While it may be true that Apple and their underwriters were found innocent of fair lending violations, the ruling came with clear caveats that should be a warning sign to enterprises using machine learning within any regulated space. Unless executives begin to take algorithmic fairness more seriously, their days ahead will be full of legal challenges and reputational damage.

What happened with Apple Card?

In late 2019, startup leader and social media celebrity David Heinemeier Hansson raised an important issue on Twitter, to much fanfare and applause. With almost 50,000 likes and retweets, he asked Apple and their underwriting partner, Goldman Sachs, to explain why he and his wife, who share the same financial ability, would be granted different credit limits. To many in the field of algorithmic fairness, it was a watershed moment to see the issues we advocate go mainstream, culminating in an inquiry from the NY Department of Financial Services (DFS).

At first glance, it may seem heartening to credit underwriters that the DFS concluded in March that Goldman’s underwriting algorithm did not violate the strict rules of financial access created in 1974 to protect women and minorities from lending discrimination. While disappointing to activists, this result was not surprising to those of us working closely with data teams in finance.

There are some algorithmic applications for financial institutions where the risks of experimentation far outweigh any benefit, and credit underwriting is one of them. We could have predicted that Goldman would be found innocent, because the laws for fairness in lending (however outdated) are clear and strictly enforced.

And yet, there is no doubt in my mind that the Goldman/Apple algorithm discriminates, along with every other credit scoring and underwriting algorithm on the market today. Nor do I doubt that these algorithms would fall apart if researchers were ever granted access to the models and data we would need to validate this claim. I know this because the NY DFS partially released its methodology for vetting the Goldman algorithm, and as you might expect, their audit fell far short of the standards held by modern algorithm auditors today.

How did DFS (under current law) assess the fairness of Apple Card?

In order to prove the Apple algorithm was “fair,” DFS considered first whether Goldman had used “prohibited characteristics” of potential applicants like gender or marital status. This one was easy for Goldman to pass — they don’t include race, gender or marital status as an input to the model. However, we’ve known for years now that some model features can act as “proxies” for protected classes.


The DFS methodology, based on 50 years of legal precedent, failed to mention whether they considered this question, but we can guess that they did not. Because if they had, they’d have quickly found that credit score is so tightly correlated to race that some states are considering banning its use for casualty insurance. Proxy features have only stepped into the research spotlight recently, giving us our first example of how science has outpaced regulation.

In the absence of protected features, DFS then looked for credit profiles that were similar in content but belonged to people of different protected classes. In a certain imprecise sense, they sought to find out what would happen to the credit decision were we to “flip” the gender on the application. Would a female version of the male applicant receive the same treatment?

Intuitively, this seems like one way to define “fair.” And it is — in the field of machine learning fairness, there is a concept called a “flip test” and it is one of many measures of a concept called “individual fairness,” which is exactly what it sounds like. I asked Patrick Hall, principal scientist at bnh.ai, a leading boutique AI law firm, about the analysis most common in investigating fair lending cases. Referring to the methods DFS used to audit Apple Card, he called it basic regression, or “a 1970s version of the flip test,” bringing us example number two of our insufficient laws.
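A minimal sketch of what a modern flip test computes (the credit model below is a toy stand-in, not Goldman’s actual underwriter, and the applicant fields are invented for illustration):

```python
# Flip test sketch: flip the protected attribute on each application and see
# whether the model's decision changes. The model here is a toy stand-in.
def toy_credit_model(applicant: dict) -> int:
    """Return an approved credit limit (toy logic for illustration only)."""
    limit = applicant["income"] * 0.2
    if applicant["credit_score"] > 700:
        limit *= 1.5
    return round(limit)

def flip_test(model, applicants, attribute="gender", values=("F", "M")):
    """Count applicants whose outcome changes when only `attribute` is flipped."""
    changed = 0
    for a in applicants:
        flipped = dict(a)
        flipped[attribute] = values[0] if a[attribute] == values[1] else values[1]
        if model(a) != model(flipped):
            changed += 1
    return changed

applicants = [
    {"income": 90000, "credit_score": 720, "gender": "F"},
    {"income": 90000, "credit_score": 720, "gender": "M"},
]
# This toy model never reads `gender`, so no decision changes.
print(flip_test(toy_credit_model, applicants))  # 0
```

Note the limitation the article goes on to describe: a model that passes this test can still discriminate through proxy features like credit score, which the flip never touches.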

A new vocabulary for algorithmic fairness

Ever since Solon Barocas and Andrew Selbst’s seminal 2016 paper “Big Data’s Disparate Impact,” researchers have been hard at work defining core philosophical concepts in mathematical terms. Several conferences have sprung into existence, with new fairness tracks emerging at the most notable AI events. The field is in a period of hypergrowth, where the law has so far failed to keep pace. But just as happened to the cybersecurity industry, this legal reprieve won’t last forever.

Perhaps we can forgive DFS for its softball audit given that the laws governing fair lending are born of the civil rights movement and have not evolved much in the 50-plus years since inception. The legal precedents were set long before machine learning fairness research really took off. If DFS had been appropriately equipped to deal with the challenge of evaluating the fairness of the Apple Card, they would have used the robust vocabulary for algorithmic assessment that’s blossomed over the last five years.

The DFS report, for instance, makes no mention of measuring “equalized odds,” a prominent line of inquiry made famous in 2018 by Joy Buolamwini and Timnit Gebru (and extended in follow-up auditing work with Deb Raji). Their “Gender Shades” paper proved that facial recognition algorithms guess wrong on dark female faces more often than they do on subjects with lighter skin, and this reasoning holds true for many applications of prediction beyond computer vision alone.

Equalized odds would ask of Apple’s algorithm: Just how often does it predict creditworthiness correctly? How often does it guess wrong? Are there disparities in these error rates among people of different genders, races or disability status? According to Hall, these measurements are important, but simply too new to have been fully codified into the legal system.
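The questions above reduce to comparing error rates across groups. A minimal sketch (the labels and predictions below are made up purely to show the computation):

```python
# Equalized-odds check: compare false positive and false negative rates by group.
def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Group A: the model classifies every applicant correctly.
fpr_a, fnr_a = error_rates([1, 1, 0, 0], [1, 1, 0, 0])
# Group B: one creditworthy applicant is wrongly rejected.
fpr_b, fnr_b = error_rates([1, 1, 0, 0], [1, 0, 0, 0])

# Equalized odds is violated when the rates differ between groups.
print(fnr_a, fnr_b)  # 0.0 0.5
```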

If it turns out that Goldman regularly underestimates female applicants in the real world, or assigns Black applicants interest rates higher than they truly deserve, it’s easy to see how this would harm these underserved populations at national scale.

Financial services’ Catch-22

Modern auditors know that the methods dictated by legal precedent fail to catch nuances in fairness for intersectional combinations within minority categories — a problem that’s exacerbated by the complexity of machine learning models. If you’re Black, a woman and pregnant, for instance, your likelihood of obtaining credit may be lower than the average of the outcomes among each overarching protected category.

These underrepresented groups may never benefit from a holistic audit of the system without special attention paid to their uniqueness, given that the sample size of minorities is by definition a smaller number in the set. This is why modern auditors prefer “fairness through awareness” approaches that allow us to measure results with explicit knowledge of the demographics of the individuals in each group.

But there’s a Catch-22. In financial services and other highly regulated fields, auditors often can’t use “fairness through awareness,” because they may be prevented from collecting sensitive information from the start. The goal of this legal constraint was to prevent lenders from discrimination. In a cruel twist of fate, this gives cover to algorithmic discrimination, giving us our third example of legal insufficiency.

The fact that we can’t collect this information hamstrings our ability to find out how models treat underserved groups. Without it, we might never prove what we know to be true in practice — full-time moms, for instance, will reliably have thinner credit files, because they don’t execute every credit-based purchase under both spousal names. Minority groups may be far more likely to be gig workers, tipped employees or participate in cash-based industries, leading to commonalities among their income profiles that prove less common for the majority.

Importantly, these differences on the applicants’ credit files do not necessarily translate to true financial responsibility or creditworthiness. If it’s your goal to predict creditworthiness accurately, you’d want to know where the method (e.g., a credit score) breaks down.

What this means for businesses using AI

In Apple’s example, it’s worth mentioning a hopeful epilogue to the story where Apple made a consequential update to their credit policy to combat the discrimination that is protected by our antiquated laws. In Apple CEO Tim Cook’s announcement, he was quick to highlight a “lack of fairness in the way the industry [calculates] credit scores.”

Their new policy allows spouses or parents to combine credit files such that the weaker credit file can benefit from the stronger. It’s a great example of a company thinking ahead to steps that may actually reduce the discrimination that exists structurally in our world. In updating their policies, Apple got ahead of the regulation that may come as a result of this inquiry.

This is a strategic advantage for Apple, because NY DFS made exhaustive mention of the insufficiency of current laws governing this space, meaning updates to regulation may be nearer than many think. To quote Superintendent of Financial Services Linda A. Lacewell: “The use of credit scoring in its current form and laws and regulations barring discrimination in lending are in need of strengthening and modernization.” In my own experience working with regulators, this is something today’s authorities are very keen to explore.

I have no doubt that American regulators are working to improve the laws that govern AI, taking advantage of this robust vocabulary for equality in automation and math. The Federal Reserve, OCC, CFPB, FTC and Congress are all eager to address algorithmic discrimination, even if their pace is slow.

In the meantime, we have every reason to believe that algorithmic discrimination is rampant, largely because the industry has also been slow to adopt the language of academia that the last few years have brought. Little excuse remains for enterprises failing to take advantage of this new field of fairness, and to root out the predictive discrimination that is in some ways guaranteed. And the EU agrees, with draft laws that apply specifically to AI and are set to be adopted sometime in the next two years.

The field of machine learning fairness has matured quickly, with new techniques discovered every year and myriad tools to help. The field is only now reaching a point where this can be prescribed with some degree of automation. Standards bodies have stepped in to provide guidance to lower the frequency and severity of these issues, even if American law is slow to adopt.

Because whether or not discrimination by algorithm is intentional, it is illegal. So anyone using advanced analytics for applications relating to healthcare, housing, hiring, financial services, education or government is likely breaking these laws without knowing it.

Until clearer regulatory guidance becomes available for the myriad applications of AI in sensitive situations, the industry is on its own to figure out which definitions of fairness are best.

Founded by a Freshworks alum, sales commission platform Everstage gets $1.7M seed funding

By Catherine Shu


Everstage founding team Vivek Suriyamoorthy and Siva Rajamani

For sales representatives, commissions are a source of motivation—and frustration, too. Commission structures are often complex, and become even more labyrinthine as companies grow. “There is a standard fee, then there’s an implementation fee, there could be a multiplier bonus, there could be accelerators when you reach your quota and get higher quota numbers,” explained Siva Rajamani, the co-founder and chief executive officer of no-code sales commission automation platform Everstage. A lot of this is calculated on spreadsheets by finance teams, so salespeople have limited visibility into how much they are earning.
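The kind of plan Rajamani describes can be sketched as a simple tiered calculation (the rates and thresholds below are invented for illustration; they are not Everstage’s defaults):

```python
# Toy tiered commission plan: a base rate, plus an accelerator once quota is hit.
def commission(sales: float, quota: float,
               base_rate: float = 0.08, accelerator: float = 0.12) -> float:
    """Pay base_rate on sales up to quota, accelerator on everything above it."""
    if sales <= quota:
        return sales * base_rate
    return quota * base_rate + (sales - quota) * accelerator

# A rep with a $100k quota who closes $150k:
print(commission(150_000, 100_000))  # 8000 + 6000 = 14000.0
```

Even this two-tier toy shows why spreadsheets break down: add implementation fees, multipliers and per-product rates, and the formula sprawls quickly.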

Everstage was created to provide more transparency about sales commissions. Sales representatives and other customer-facing employees get real-time data about their performance. They can also use Everstage to forecast the potential commissions from their deals pipeline, giving them an incentive to close more sales. The platform uses a modular system, so as companies grow, they can add more automated commission calculations without having to code anything.

Founded in 2020, the platform’s early customers include notable SaaS companies like Chargebee, Postman and LambdaTest. Everstage announced today that it has raised a $1.7 million seed round led by 3One4 Capital. Angel investors included Rippling co-founder Prasanna Sankar; Chargebee co-founders Krish Subramanian and Rajaraman Santhanam; Freshworks chief revenue officer Sidharth Malik; Conga chief technology officer Koti Reddy; Ally.io CEO Vetri Vellore; and RFPIO CEO Ganesh Shankar. The company is headquartered in Wilmington, Delaware, with an office in Chennai, India.

Before starting Everstage with Vivek Suriyamoorthy, the startup’s chief technology officer, Rajamani served as the head of business SaaS provider Freshworks’ global revenue operations team, working closely with sales representatives. During his tenure, Freshworks’ annual recurring revenue grew from $30 million a year to $300 million. 

Rajamani told TechCrunch that a lot of early-stage startups put more than 10% of their budget toward commissions. 

“Commissions obviously motivate and drive performance from these teams, but what started as a way to motivate gradually became a point of distrust because none of these folks had visibility into how they’re getting paid,” he said. “Plans got complex, and pretty much any salesperson you speak to at these companies will say they have their own shadow accounting.” 

“Shadow accounting” means salespeople keep their own records of the deals they close and calculate commissions by themselves. But the process can be tedious, especially if their calculations don’t match the finance team’s. This problem has been exacerbated by the COVID-19 pandemic, because salespeople can’t walk over to the finance department and ask about the status of their commissions.

Sometimes this results in high turnover as frustrated salespeople leave for companies that offer them more clarity about their earnings potential. 

“People want to choose companies that are very transparent about their commissions process and how their quotas are set, because that’s the only way they can assure themselves they can make money,” Rajamani said. “Otherwise promises made on paper are just on paper.”


Everstage’s dashboard for sales representatives

Having visibility also motivates people. Everstage’s gamification features allows salespeople to look at their current quotas and deals pipeline, and forecast how much they can potentially earn, broken down by their commission plans, including deal attainment, implementation fees and multiplier bonuses. 

“At the end of the day, incentives and particular criteria, like multiplier bonuses, drive salespeople to close more contracts, or multi-year contracts,” Rajamani said. “If they can see how much more commission they can make if they close a deal as a multi-year contract, then it’s an added incentive.” 

Everstage currently works primarily with SaaS companies, but also sees inbound interest from insurance, real estate, pharmaceuticals and biotech. The platform is sector-agnostic and can be customized for different types of commission plans.

Rajamani notes that sales commissions management software is not a new area. For example, players like SAP CallidusCloud and Xactly have been around for years. 

Over the last year, newer sales commissions platforms have also raised, including CaptivateIQ and Spiff. This is in part because many high-growth companies have adjusted incentives for their sales team as they work remotely during the pandemic. 

There is some overlap between Everstage’s features and its competitors, but its main differentiators are its modular approach and emphasis on gamification. “We want to move away from automating busywork for revenue operations and finance, and move towards the gamification aspect, so the gamification is an additional module,” he said. “Other indicators—quota setting, target setting and overall company targets broken down by percentage, managers, regions—all those are also there.” 

In a statement about the funding, 3One4 Capital’s Anurag Ramdasan said, “As customer acquisition and retention have increased in complexity with more roles and workflows than ever, Rev Ops teams have become mandatory to align incentives and drive revenue. With their considerable experience, the Everstage founding team is well-positioned to help Rev Ops teams succeed, starting with real-time commission planning and visibility, and we’re excited to partner with them.”

 

Feds dismantle Blue Origin and Dynetics protests of NASA’s SpaceX lunar lander award

By Devin Coldewey

Blue Origin and Dynetics are still steaming over NASA’s decision to award only one contract — to SpaceX — to build a Human Landing System for the Artemis program. Their protest of the decision was recently rejected, and now the Government Accountability Office’s arguments, which Blue Origin publicly questioned, are available for all to read. Here are a few highlights from the point-by-point takedown of the losing companies’ complaints.

In case you can’t quite remember (2020 was a long year), NASA originally selected the three companies mentioned for early funding to conceptualize and propose a lunar landing system that could put boots on the moon in 2024. They suggested the next step would be, if possible, to pick two proposals to move forward with. But when the time for awards rolled around, only SpaceX walked away with a contract.

Dynetics and Blue Origin protested the decision separately, but on similar grounds: First, NASA should have awarded two companies as promised, and not doing so is risky and anti-competition. Second, it should have adjusted the terms of the award process when it learned it didn’t have much budget to set aside for it. Third, NASA didn’t evaluate the proposals fairly, showing a bias to SpaceX and against the others in various ways.

The GAO puts all of these concerns to bed in its report — and in the process makes Blue Origin’s follow-up complaint, that the agency’s “limited jurisdiction” meant it couldn’t adequately address the protests, look like the sour grapes it is.

One and done

Image Credits: SpaceX

As to awarding one rather than two companies a contract, the answer is right there in black and white. The announcement clearly stated multiple times that the whole thing was contingent on having enough money in the first place. NASA may have preferred, hoped, even expected to award two contracts, but it was very clear that it would be awarding “up to two” or “one or more” of them. After all, what if only one met the requirements and the others didn’t? Would NASA be obligated to throw money at an unsuitable applicant? No, and that’s more or less what happened.

From the report:

Even where a solicitation contains an intention to make multiple awards, we have recognized that an agency is not required to do so if the outcome of proposal evaluation dictates that only one contract should be awarded. For example, regardless of an agency’s intention, it cannot, in making contract awards, exceed the funds available.

The GAO explains that the decision-making process at NASA weighted the technical approach the highest, then price, then management (i.e. organization, scheduling, etc.). Each company’s proposal was evaluated independently on each of these characteristics, and the final results were compared. Here’s a top-level summary of the ratings assigned:

Chart showing evaluations for SpaceX, Blue Origin, and Dynetics. SpaceX comes out on top and with the lowest price.

Image Credits: GAO / NASA

And the report again:

The technical approach factor was to be more important than the total evaluated price factor, which in turn was to be more important than the management approach factor; the non-price factors, when combined, were significantly more important than price.

…Contrary to the protesters’ arguments, even assuming a comparative analysis was required, SpaceX’s proposal appeared to be the highest-rated under each of the three enumerated evaluation criteria as well as the lowest priced.

When the budget for NASA was finalized, it left less for the HLS program than expected, and the agency was forced to make some tough choices. Luckily they had a proposal that was as good or better than the others technically (the most important factor), considerably better than the others organizationally and came in at a very reasonable cost. It was a clear choice to award a contract to SpaceX.

But having done so, NASA found that the cupboard was bare. Even so, Blue Origin argued that it deserved to be contacted about somehow making it work. Perhaps, they suggested, if NASA had come to them to negotiate, they could have put together a proposal that might have looked even better than SpaceX’s. (Jeff Bezos’s brazen after-the-fact $2 billion offer suggests they had some wiggle room.)

NASA, however, had already concluded otherwise, as the GAO confirms:

…The agency concluded that it was not “insurmountable” to negotiate with SpaceX to shift approximately $[DELETED] in FY2021 proposed milestone payments (or approximately [DELETED] percent of the $2.941 billion total proposed price) to later years to meet NASA’s FY2021 funding limitations. In contrast, the SSA concluded that it was implausible for Blue Origin ($5.995 billion) and Dynetics ($9.082 billion) to materially reduce their significantly higher total proposed prices without material revisions to their respective technical and management approaches…

Redactions notwithstanding, it’s not difficult to see the issue here. While it was conceivable, even reasonable, for SpaceX to shift a few hundred million or so around to make the fiscal math work (already a stretch at the $3 billion mark), it was not conceivable that Blue Origin or Dynetics could shave half or more off their costs to make those same fiscal milestones happen.

As NASA’s selection group explained at the time:

After accounting for a contract award to SpaceX, the amount of remaining available funding is so insubstantial that, in my opinion, NASA cannot reasonably ask Blue Origin to lower its price for the scope of work it has proposed to a figure that would potentially enable NASA to afford making a contract award to Blue Origin.

Blue Origin complained that NASA should have warned them that the budget might lead to restrictions in the selection process, but the GAO simply notes that not only is the federal budget hardly secret, but that the companies waited until after the award was made to raise an issue. Such complaints need to be timely in order to be taken seriously, it wrote, and furthermore there is nothing in the complaints that suggests that even had NASA warned them, anything would have turned out differently.

There is also the question of whether choosing a single provider is “anticompetitive and unduly risky,” as the protests put it. While the GAO admits that “these important questions of policy may merit further public debate,” the complaint is moot since NASA didn’t have the money to do more than one in the first place. As voters and advocates of a generous budget for space exploration, we may say it’s a shame that NASA didn’t have $6 billion more to play with, but that doesn’t mean the agency’s decision to put the money it had to the best purpose possible was incorrect.

In space, no one can hear you cry

Image Credits: Joe Raedle / Getty Images

Blue Origin and Dynetics alleged that the process was biased in favor of SpaceX in that the various companies were unfairly evaluated for strengths and weaknesses. But the GAO sees these complaints for the fluff that they are.

In one case, Blue Origin complains that the announcement did not specifically require the vehicles to be able to land in the dark. Well, first of all, it does, and second of all, space is dark. If your design doesn’t take that into account, you’re gonna have a bad time out there.

In another instance, the communications systems proposed by Blue Origin and SpaceX both were flagged for not meeting certain requirements — but Blue Origin got a “significant weakness” listed for its system and SpaceX only got a “weakness.” Evidence of preferential treatment, they suggest.

Not so much, says the GAO: “Even a cursory review of the evaluation record demonstrates material differences between the proposals that support NASA’s different evaluation findings.” In this case four of Blue Origin’s communications links didn’t work as required, and a fifth was a maybe. SpaceX only had two not work right. This sort of substantial difference appeared in each of the objections cited by the complainants.

In fact, the report goes on to say:

We note that Blue Origin fails to rebut any of the analysis presented by the contracting officer with respect to Blue Origin’s or SpaceX’s proposals. In fact, Blue Origin initially challenged the agency’s evaluation of its own proposal, but then affirmatively withdrew that protest ground after receipt of the agency report.

Blue Origin groused that SpaceX got extra points for a design that focused on the crew’s safety, health and comfort, despite many of the design choices not being explicitly required. The GAO says NASA is well within its discretion as an expert agency to consider these as meritorious — in fact, they call it a “representative example of why discretion is due” in such cases — and really, if you’re objecting on the grounds that the competition’s capsule was too nice, it may be advisable to reconsider your priorities.

Image Credits: Blue Origin

Even had several of the decisions been successfully challenged, it wouldn’t have changed the outcome, the report explains.

SpaceX received the following evaluation totals:

  • Technical: 3 significant strengths; 10 strengths; 6 weaknesses; and 1 significant weakness
  • Management: 2 significant strengths; 3 strengths; and 2 weaknesses

While Blue Origin received the following:

  • Technical: 13 strengths; 14 weaknesses; and 2 significant weaknesses
  • Management: 1 significant strength; 2 strengths; and 6 weaknesses

It’s never a pleasant occasion to find one has been thoroughly beaten on practically every factor that counts, but that really seems to be the case here. Dynetics and its complaint meet the same fate, by the way, but with a bit rougher treatment.

…Even allowing for the possibility that the protesters could prevail on some small subset of their challenges to NASA’s evaluation, the record reflects that NASA’s evaluation was largely reasonable, and the relative competitive standing of the offerors under the non-price factors would not materially change…

The protests are denied.

It’s a pretty brutal documentation of the shortcomings of Blue Origin and Dynetics, and one that would not have been necessary had the companies taken their lumps and accepted that NASA isn’t out to get them. They lost fair and square, and now they look like whiny also-rans instead of ambitious could-bes.

44.01 secures $5M to turn billions of tons of carbon dioxide to stone

By Devin Coldewey

Reducing global greenhouse gas emissions is an important goal, but another challenge awaits: lowering the levels of CO2 and other substances already in the atmosphere. One promising approach turns the gas into an ordinary mineral through entirely natural processes; 44.01 hopes to perform this process at scale using vast deposits of precursor materials and a $5 million seed round to get the ball rolling.

The process of mineralizing CO2 is well known among geologists and climate scientists. A naturally occurring stone called peridotite reacts with the gas and water to produce calcite, another common and harmless mineral. In fact, this has occurred at enormous scales throughout history, as witnessed by large streaks of calcite piercing peridotite deposits.

Peridotite is normally found miles below the Earth’s surface, but on the easternmost tip of the Arabian Peninsula, specifically the northern coast of Oman, tectonic action has raised hundreds of square miles of the stuff to the surface.

Talal Hasan was working in Oman’s sovereign investment arm when he read about the country’s coast having the largest “dead zone” in the world, a major contributor to which was CO2 emissions being absorbed by the sea and gathering there. Hasan, born into a family of environmentalists, looked into it and found that, amazingly, the problem and the solution were literally right next to each other: the country’s mountains of peridotite, which theoretically could hold billions of tons of CO2.

Around that time, in fact, The New York Times ran a photo essay about Oman’s potential miracle mineral, highlighting the research of Peter Kelemen and Juerg Matter into its potential. As the Times’ Henry Fountain wrote at the time:

If this natural process, called carbon mineralization, could be harnessed, accelerated and applied inexpensively on a huge scale — admittedly some very big “ifs” — it could help fight climate change.

That’s broadly speaking the plan proposed by Hasan and, actually, both Kelemen and Matter, who make up the startup’s “scientific committee.” 44.01 (the molecular weight of carbon dioxide, if you were wondering) aims to accomplish mineralization economically and safely with a few novel ideas.
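For a sense of the scale that molecular weight implies, here is an illustrative back-of-the-envelope calculation, assuming the idealized carbonation of forsterite (the magnesium olivine that dominates peridotite). The simplified reaction and the resulting ratio are textbook chemistry used for illustration, not figures from 44.01:

```python
# Illustrative back-of-the-envelope math, not 44.01's own figures.
# Idealized carbonation of forsterite (the magnesium olivine in peridotite):
#   Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2
MG, SI, O, C = 24.305, 28.086, 15.999, 12.011  # atomic masses (g/mol)

co2 = C + 2 * O                   # ~44.01 g/mol, the startup's namesake
forsterite = 2 * MG + SI + 4 * O  # Mg2SiO4, ~140.69 g/mol

# Tonnes of CO2 locked up per tonne of fully reacted rock
co2_per_tonne_rock = (2 * co2) / forsterite
print(f"CO2 molar mass: {co2:.2f} g/mol")
print(f"CO2 fixed per tonne of forsterite: {co2_per_tonne_rock:.2f} t")  # ~0.63
```

At that idealized ratio, locking away a billion tons of CO2 would mean reacting on the order of 1.6 billion tons of rock, which is why hundreds of square miles of surfaced peridotite matter.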

First is the basic process of accelerating the natural reaction of the materials. It normally occurs over years as CO2 and water vapor interact with the rock — no energy needs to be applied to make the change, since the reaction actually results in a lower energy state.

“We’re speeding it up by injecting a higher CO2 content than you would get in the atmosphere,” said Hasan. “We have to drill an engineered borehole that’s targeted for mineralization and injection.”

Diagram showing how carbon can be sequestered as a mineral.

Image Credits: 44.01

The holes would maximize surface area, and highly carbonated water would be pumped in cyclically until the drilled peridotite is saturated. Importantly, there’s no catalyst or toxic additive involved; it’s just fizzy water, and if some were to leak or escape, the result is just a puff of CO2, like what you get when you open a bottle of soda.

Second is achieving this without negating the entire endeavor by having giant trucks and heavy machinery pumping out new CO2 as fast as they can pump in the old stuff. To that end, Hasan said the company is working hard on the logistics side to create a biodiesel-based supply line (with Wakud) to truck in the raw material and power the machines at night, while solar would offset that fuel use during the day.

It sounds like a lot to build up, but Hasan points out that a lot of this is already done by the oil industry, which as you might guess is fairly ubiquitous in the region. “It’s similar to how they drill and explore, so there’s a lot of existing infrastructure for this,” he said, “but rather than pulling the hydrocarbon out, we’re pumping it back in.” Other mineralization efforts have broken ground on the concept, so to speak, such as a basalt-injection scheme up in Iceland, so it isn’t without precedent.

Third is sourcing the CO2 itself. The atmosphere is full of it, sure, but it’s not trivial to capture and compress enough to mineralize at industrial scales. So 44.01 is partnering with Climeworks and other carbon capture companies to provide an end point for their CO2 sequestration efforts.

Plenty of companies are working on direct capture of emissions, be they at the point of emission or elsewhere, but once they have a couple million tons of CO2, it’s not obvious what to do next. “We want to facilitate carbon capture companies, so we’re building the CO2 sinks here and operating a plug and play model. They come to our site, plug in, and using power on site, we can start taking it,” said Hasan.

How it would be paid for is a bit of an open question in the exact particulars, but what’s clear is a global corporate appetite for carbon offsetting. Beyond the traditional and rather outdated carbon credits, there’s a large voluntary market for offsets. 44.01 can sell large quantities of verified carbon removal, which is a step up from temporary sequestration or capture — though the financial instruments to do so are still being worked out. (DroneSeed is another company offering a service beyond offsets that hopes to take advantage of a new generation of emissions futures and other systems. It’s an evolving and highly complex overlapping area of international regulations, taxes and corporate policy.)

For now, however, the goal is simply to prove that the system works as expected at the scales hoped for. The seed money is nowhere near what would be needed to build the operation necessary, just a step in that direction to get the permits, studies and equipment necessary to properly perform demonstrations.

“We tried to get like-minded investors on board, people genuinely doing this for climate change,” said Hasan. “It makes things a lot easier on us when we’re measured on impact rather than financials.” (No doubt all startups hope for such understanding backers.)

Apollo Projects, an early-stage investment fund from Max and Sam Altman, led the round, and Breakthrough Energy Ventures participated. (Not listed in the press release but important to note, Hasan said, were small investments from families in Oman and environmental organizations in Europe.)

Oman may be the starting point, but Hasan hinted that another location would host the first commercial operations. While he declined to be specific, one glance at a map shows that the peridotite deposits spill over the northern border of Oman and into the eastern tip of the UAE, which no doubt is also interested in this budding industry and, of course, has more than enough money to finance it. We’ll know more once 44.01 completes its pilot work.

Tech leaders can be the secret weapon for supercharging ESG goals

By Ram Iyer
Jeff Sternberg Contributor
Jeff Sternberg is a technical director in the Office of the CTO (OCTO) at Google Cloud, a team of technologists and industry experts that helps Google Cloud's customers solve challenging problems and disrupt their industries.

Environmental, social and governance (ESG) factors should be key considerations for CTOs and technology leaders scaling next generation companies from day one. Investors are increasingly prioritizing startups that focus on ESG, with the growth of sustainable investing skyrocketing.

What’s driving this shift in mentality across every industry? It’s simple: Consumers are no longer willing to support companies that don’t prioritize sustainability. According to a survey conducted by IBM, the COVID-19 pandemic has elevated consumers’ focus on sustainability and their willingness to pay out of their own pockets for a sustainable future. In tandem, federal action on climate change is increasing, with the U.S. rejoining the Paris Climate Agreement and a recent executive order on climate commitments.

Over the past few years, we have seen an uptick in organizations setting long-term sustainability goals. However, CEOs and chief sustainability officers typically forecast these goals, and they are often long term and aspirational — leaving the near and midterm implementation of ESG programs to operations and technology teams.


CTOs are a crucial part of the planning process and, in fact, can be the secret weapon that helps their organizations supercharge their ESG targets. Below are a few immediate steps that CTOs and technology leaders can take to achieve sustainability goals and make an ethical impact.

Reducing environmental impact

As more businesses digitize and more consumers use devices and cloud services, the energy needed by data centers continues to rise. In fact, data centers account for an estimated 1% of worldwide electricity usage. However, a forecast from IDC shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide from 2021 through 2024.

Make compute workloads more efficient: First, it’s important to understand the links between computing, power consumption and greenhouse gas emissions from fossil fuels. Making your app and compute workloads more efficient will reduce costs and energy requirements, thus reducing the carbon footprint of those workloads. In the cloud, tools like compute instance auto scaling and sizing recommendations help ensure you’re not running more VMs, or larger ones, than demand requires. You can also move to serverless computing, which does much of this scaling work automatically.

Deploy compute workloads in regions with lower carbon intensity: Until recently, choosing cloud regions meant considering factors like cost and latency to end users. But carbon is another factor worth considering. While the compute capabilities of regions are similar, their carbon intensities typically vary. Some regions have access to more carbon-free energy production than others, and consequently the carbon intensity for each region is different.

So, choosing a cloud region with lower carbon intensity is often the simplest and most impactful step you can take. Alistair Scott, co-founder and CTO of cloud infrastructure startup Infracost, underscores this sentiment: “Engineers want to do the right thing and reduce waste, and I think cloud providers can help with that. The key is to provide information in workflow, so the people who are responsible for infra provisioning can weigh the CO2 impact versus other factors such as cost and data residency before they deploy.”
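The tradeoff Scott describes comes down to simple arithmetic. The region names and carbon-intensity figures below are invented placeholders for illustration, not any cloud provider’s published data:

```python
# Hypothetical grid carbon intensities (gCO2e/kWh). These are invented
# placeholders for illustration, not any provider's published region data.
REGION_INTENSITY = {
    "region-hydro-heavy": 80,
    "region-mixed-grid": 350,
    "region-coal-heavy": 700,
}

def monthly_emissions_kg(avg_kw, intensity_g_per_kwh, hours=730):
    """Emissions for a steady workload drawing avg_kw for one month."""
    return avg_kw * hours * intensity_g_per_kwh / 1000  # grams -> kilograms

for region, intensity in REGION_INTENSITY.items():
    kg = monthly_emissions_kg(5, intensity)  # a steady 5 kW workload
    print(f"{region}: {kg:,.0f} kg CO2e/month")
```

Under these assumed figures, the same 5 kW workload emits roughly 292 kg CO2e per month on the low-carbon grid versus about 2,555 kg on the high-carbon one, nearly a ninefold difference from a deployment choice alone.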

Another step is to estimate your specific workload’s carbon footprint using open-source software like Cloud Carbon Footprint, a project sponsored by ThoughtWorks. Etsy has open-sourced a similar tool called Cloud Jewels that estimates energy consumption based on cloud usage information, which is helping the company track progress toward its target of reducing energy intensity by 25% by 2025.
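A minimal sketch of the estimation approach such tools take: convert billed usage into energy, then apply a grid carbon intensity. The wattage and PUE coefficients below are illustrative assumptions, not values from Cloud Carbon Footprint or Cloud Jewels:

```python
# A minimal Cloud Jewels-style estimator: billed usage -> energy -> emissions.
# The coefficients are illustrative assumptions, not Etsy's or ThoughtWorks'.
WATTS_PER_VCPU = 2.0  # assumed average draw per vCPU at typical utilization
PUE = 1.2             # assumed data center power usage effectiveness

def estimated_kwh(vcpu_hours):
    """Rough energy estimate for a compute workload from billed vCPU-hours."""
    return vcpu_hours * WATTS_PER_VCPU * PUE / 1000  # watt-hours -> kWh

def estimated_kg_co2e(vcpu_hours, grid_g_per_kwh):
    """Apply a grid carbon intensity (gCO2e/kWh) to the energy estimate."""
    return estimated_kwh(vcpu_hours) * grid_g_per_kwh / 1000

# A service that billed 100,000 vCPU-hours last month on a 400 gCO2e/kWh grid:
print(f"{estimated_kwh(100_000):.0f} kWh")
print(f"{estimated_kg_co2e(100_000, 400):.0f} kg CO2e")
```

The real tools refine each coefficient per machine type and per region, but the pipeline is the same: usage, times power, times grid intensity.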

Make social impact

Beyond reducing environmental impact, CTOs and technology leaders can have significant, direct and meaningful social impact.

Include societal benefits in the design of your products: As a CTO or technology founder, you can help ensure that societal benefits are prioritized in your product roadmaps. For example, if you’re a fintech CTO, you can add product features to expand access to credit in underserved populations. Startups like LoanWell are on a mission to increase access to capital for those typically left out of the financial system and make the loan origination process more efficient and equitable.

When thinking about product design, a product needs to be as useful and effective as it is sustainable. By thinking about sustainability and societal impact as a core element of product innovation, there is an opportunity to differentiate yourself in socially beneficial ways. For example, Lush has been a pioneer of package-free solutions, and launched Lush Lens — a virtual package app leveraging cameras on mobile phones and AI to overlay product information. The company hit 2 million scans in its efforts to tackle the beauty industry’s excessive use of (plastic) packaging.

Responsible AI practices should be ingrained in the culture to avoid social harms: Machine learning and artificial intelligence have become central to the advanced, personalized digital experiences everyone is accustomed to — from product and content recommendations to spam filtering, trend forecasting and other “smart” behaviors.

It is therefore critical to incorporate responsible AI practices, so benefits from AI and ML can be realized by your entire user base and that inadvertent harm can be avoided. Start by establishing clear principles for working with AI responsibly, and translate those principles into processes and procedures. Think about AI responsibility reviews the same way you think about code reviews, automated testing and UX design. As a technical leader or founder, you get to establish what the process is.

Impact governance

Promoting governance does not stop with the board and CEO; CTOs play an important role, too.

Create a diverse and inclusive technology team: Compared to individual decision-makers, diverse teams make better decisions 87% of the time. Additionally, Gartner research found that in a diverse workforce, performance improves by 12% and intent to stay by 20%.

It is important to reinforce and demonstrate why diversity, equity and inclusion is important within a technology team. One way you can do this is by using data to inform your DEI efforts. You can establish a voluntary internal program to collect demographics, including gender, race and ethnicity, and this data will provide a baseline for identifying diversity gaps and measuring improvements. Consider going further by baking these improvements into your employee performance process, such as objectives and key results (OKRs). Make everyone accountable from the start, not just HR.

These are just a few of the ways CTOs and technology leaders can contribute to ESG progress in their companies. The first step, however, is to recognize the many ways you as a technology leader can make an impact from day one.

EU hits Amazon with record-breaking $887M GDPR fine over data misuse

By Carly Page

Luxembourg’s National Commission for Data Protection (CNPD) has hit Amazon with a record-breaking €746 million ($887M) GDPR fine over the way it uses customer data for targeted advertising purposes.

Amazon disclosed the ruling in an SEC filing on Friday in which it slammed the decision as baseless and added that it intended to defend itself “vigorously in this matter.”

“Maintaining the security of our customers’ information and their trust are top priorities,” an Amazon spokesperson said in a statement. “There has been no data breach, and no customer data has been exposed to any third party. These facts are undisputed.

“We strongly disagree with the CNPD’s ruling, and we intend to appeal. The decision relating to how we show customers relevant advertising relies on subjective and untested interpretations of European privacy law, and the proposed fine is entirely out of proportion with even that interpretation.”

The penalty is the result of a 2018 complaint by French privacy rights group La Quadrature du Net, a group that claims to represent the interests of thousands of Europeans to ensure their data isn’t used by big tech companies to manipulate their behavior for political or commercial purposes. The complaint, which also targets Apple, Facebook, Google and LinkedIn and was filed on behalf of more than 10,000 customers, alleges that Amazon manipulates customers for commercial means by choosing what advertising and information they receive.

La Quadrature du Net welcomed the fine issued by the CNPD, which “comes after three years of silence that made us fear the worst.”

“The model of economic domination based on the exploitation of our privacy and free will is profoundly illegitimate and contrary to all the values that our democratic societies claim to defend,” the group added in a blog post published on Friday.

The CNPD has also ruled that Amazon must commit to changing its business practices. However, the regulator has not publicly commented on its decision, and Amazon didn’t specify what revised business practices it is proposing.

The record penalty, which trumps the €50 million GDPR penalty levied against Google in 2019, comes amid heightened scrutiny of Amazon’s business in Europe. In November last year, the European Commission announced formal antitrust charges against the company, saying the retailer has misused its position to compete against third-party businesses using its platform. At the same time, the Commission opened a second investigation into its alleged preferential treatment of its own products on its site and those of its partners.

True ‘shift left and extend right’ security requires empowered developers

By Ram Iyer
Idan Plotnik Contributor
Idan Plotnik is the CEO and founder of Apiiro, a code risk platform.

DevOps is fundamentally about collaboration and agility. Unfortunately, when we add security and compliance to the picture, the message gets distorted.

The term “DevSecOps” has come into fashion the past few years with the intention of seamlessly integrating security and compliance into the DevOps framework. However, the reality is far from the ideal: Security tools have been bolted onto the existing DevOps process along with new layers of automation, and everyone’s calling it “DevSecOps.” This is a misguided approach that fails to embrace the principles of collaboration and agility.

Integrating security into DevOps to deliver DevSecOps demands changed mindsets, processes and technologies. Security and risk management leaders must adhere to the collaborative, agile nature of DevOps for security testing to be seamless in development, making the “Sec” in DevSecOps transparent. — Neil MacDonald, Gartner

In an ideal world, all developers would be trained and experienced in secure coding practices from front end to back end and be skilled in preventing everything from SQL injection to authorization framework exploits. Developers would also have all the information they need to make security-related decisions early in the design phase.

If a developer is working on a type of security control they haven’t worked on before, an organization should provide the appropriate training before there is a security issue.

Once again, the reality falls short of the ideal. While CI/CD automation has given developers ownership over the deployment of their code, those developers are still hampered by a lack of visibility into relevant information that would help them make better decisions before even sitting down to write code.

The entire concept of discovering and remediating vulnerabilities earlier in the development process is already, in some ways, out of date. A better approach is to provide developers with the information and training they need to prevent potential risks from becoming vulnerabilities in the first place.

Consider a developer who is assigned to add PII fields to an internet-facing API. The authorization controls in the cloud API gateway are critical to the security of the new feature. “Shifting left and extending right” doesn’t mean that a scanning tool or security architect should detect a security risk earlier in the process — it means that a developer should have all the context needed to prevent the vulnerability before it even happens. Continuous feedback is key to up-leveling the security knowledge of developers by orders of magnitude.

European Investment Fund puts $30M in Fabric Ventures’ new $130M digital assets fund

By Mike Butcher

Despite the region’s rich engineering talent, blockchain entrepreneurs in the EU often struggle to find backing due to the dearth of large funds and investment expertise in the space. But a big move takes place at an EU level today, as the European Investment Fund makes a significant investment into a blockchain and digital assets venture fund.

Fabric Ventures, a Luxembourg-based VC billed as backing the “Open Economy,” has closed $130 million for its 2021 fund, $30 million of which is coming from the European Investment Fund (EIF). Other backers of the new fund include 33 founders, partners, and executives from Ethereum, (Transfer)Wise, PayPal, Square, Google, PayU, Ledger, Raisin, Ebury, PPRO, NEAR, Felix Capital, LocalGlobe, Earlybird, Accelerator Ventures, Aztec Protocol, Aragon, Orchid, MySQL, Verifone, OpenOcean, Claret Capital, and more.

This makes it the first EIF-backed fund mandated to invest in digital assets and blockchain technology.

EIF Chief Executive Alain Godard said:  “We are very pleased to be partnering with Fabric Ventures to bring to the European market this fund specializing in Blockchain technologies… This partnership seeks to address the need [in Europe] and unlock financing opportunities for entrepreneurs active in the field of blockchain technologies – a field of particular strategic importance for the EU and our competitiveness on the global stage.”

The subtext here is that the EIF wants some exposure to these new, decentralized platforms, potentially as a bulwark against the centralized platforms coming out of the US and China.

And yes, while the price of Bitcoin has yo-yo’d, there is now $100 billion invested in the decentralized finance sector and a $1.5 billion NFT market. This technology isn’t going anywhere.

Fabric hasn’t come out of nowhere, either. Various Fabric Ventures team members have been involved in Orchestream, the Honeycomb Project at Sun Microsystems, Tideway, RPX, Automic, Yoyo Wallet, and Orchid.

Richard Muirhead is Managing Partner, and is joined by partners Max Mersch and Anil Hansjee. Hansjee becomes General Partner after leaving PayPal’s Venture Fund, which he led for EMEA. The team has experience in token design, market infrastructure, and community governance.

The same team started the Firestartr fund in 2012, backing Tray.io, Verse, Railsbank, Wagestream, Bitstamp, and others.

Muirhead said: “It is now well acknowledged that there is a need for a web that is user-owned and, consequently, more human-centric. There are astonishing people crafting this digital fabric for the benefit of all. We are excited to support those people with our latest fund.”

On a call with TechCrunch Muirhead added: “The thing to note here is that there’s a recognition at European Commission level, that this area is one of geopolitical significance for the EU bloc. On the one hand, you have the ‘wild west’ approach of North America, and, arguably, on the other is the surveillance state of the Chinese Communist Party.”

He said: “The European Commission, I think, believes that there is a third way for the individual, and to use this new wave of technology for the individual. Also for businesses. So we can have networks and marketplaces of individuals sharing their data for their own benefit, and businesses in supply chains sharing data for their own mutual benefits. So that’s the driving view.”

Democratic bill would suspend Section 230 protections when social networks boost anti-vax conspiracies

By Taylor Hatmaker

Two Democratic senators introduced a bill Thursday that would strip away the liability shield that social media platforms hold dear when those companies boost anti-vaccine conspiracies and other kinds of health misinformation.

The Health Misinformation Act, introduced by Senators Amy Klobuchar (D-MN) and Ben Ray Luján (D-NM), would create a new carve-out in Section 230 of the Communications Decency Act to hold platforms liable for algorithmically-promoted health misinformation and conspiracies. Platforms rely on Section 230 to protect them from legal liability for the vast amount of user-created content they host.

“For far too long, online platforms have not done enough to protect the health of Americans,” Klobuchar said. “These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”

The bill would specifically alter Section 230’s language to revoke liability protections in the case of “health misinformation that is created or developed through the interactive computer service” if that misinformation is amplified through an algorithm. The proposed exception would only kick in during a declared national public health crisis, like the advent of Covid-19, and wouldn’t apply in normal times. The bill would task the Secretary of the Department of Health and Human Services (HHS) with defining health misinformation.

“Features that are built into technology platforms have contributed to the spread of misinformation and disinformation, with social media platforms incentivizing individuals to share content to get likes, comments, and other positive signals of engagement, which rewards engagement rather than accuracy,” the bill reads.

The bill also makes mention of the “disinformation dozen” — just twelve people, including anti-vaccine activist Robert F. Kennedy Jr. and a grab bag of other conspiracy theorists, who account for a massive swath of the anti-vax misinformation ecosystem. Many of the individuals on the list still openly spread their messaging through social media accounts on Twitter, Facebook and other platforms.

Section 230’s defenders generally view the idea of new carve-outs to the law as dangerous. Because Section 230 is such a foundational piece of the modern internet, enabling everything from Yelp and Reddit to the comment section below this post, they argue that the potential for unforeseen second order effects means the law should be left intact.

But some members of Congress — both Democrats and Republicans — see Section 230 as a valuable lever in their quest to regulate major social media companies. While the White House is pursuing its own path to craft consequences for overgrown tech companies through the Justice Department and the FTC, Biden’s office said earlier this week that the president is “reviewing” Section 230 as well. But as Trump also discovered, weakening Section 230 is a task that only Congress is positioned to accomplish — and even that is still a long shot.

While the new Democratic bill is narrowly targeted as far as proposed changes to Section 230 go, it’s also unlikely to attract bipartisan support.

Republicans are also interested in stripping away some of Big Tech’s liability protections, but generally hold the view that platforms remove too much content rather than too little. They are also more likely to sow misinformation about the Covid-19 vaccines themselves, framing vaccination as a partisan issue. Whether the bill goes anywhere or not, it’s clear that an alarming portion of Americans have no intention of getting vaccinated — even with a much more contagious variant on the rise and colder months on the horizon.

“As COVID-19 cases rise among the unvaccinated, so has the amount of misinformation surrounding vaccines on social media,” Luján said of the proposed changes to Section 230. “Lives are at stake.”
