FreshRSS


How to lead a digital transformation — ethically

By Annie Siebert
Angela Love Contributor
Angela Love is the founder of The Daymark Group, a leadership development consulting firm — she helps create clarity and success for leaders and teams in startups to Fortune 50 companies.

The fact that COVID-19 accelerated the need for digital transformation across virtually all sectors is old news. What companies are doing to propel success under the circumstances has been under the spotlight. However, how they do it has managed to find a place in the shadows.

Simply put, the explosive increase in innovation and adoption of digital solutions shouldn’t be allowed to take place at the expense of ethical considerations.

This is about morals — but it’s also about the bottom line. Stakeholders, both internal and external, are increasingly intolerant of companies that blur (or ignore) ethical lines. These realities add up to a need for leaders to embrace an all-new learning curve: How to engage in digital transformation that includes ethics by design.

Ethics as an afterthought is asking for problems

It’s easy to rail against the evils of the executive lifestyle or golden parachuting, but more often than not, a pattern of ethics violations arises from companywide culture, not leadership alone. Ideally, employees act ethically because it aligns with their personal values. However, at a minimum, they should understand the risk that an ethical breach represents to the organization.

In my experience, those conversations are not being held. Call it poor communication or lack of vision, but most companies rarely model potential ethical risks — at least not openly. If those discussions take place, they’re typically between members of upper management, behind closed doors.

Why don’t ethical concerns get more of a “town hall” treatment? The answer may come down to an unwillingness to let go of traditional thinking about business hierarchies. It could also be related to the strong (and ironically, toxic) cultural message that positivity rules. Case in point: I’ve listened to leaders say they want to create a culture of disruptive thinking — only to promptly tell an employee who speaks up that they “lack a growth mindset.”

What’s the answer, then? There are three solutions I’ve found to be effective:

  1. Making ethics a core value of the organization.
  2. Embracing transparency.
  3. Proactively developing strategies to contend with ethical challenges and violations.

These simple solutions are a great starting point to solve ethics issues regarding digital transformation and beyond. They cause leaders to look into the heart of the company and make decisions that will impact the organization for years to come.

Interpersonal dynamics are a concern in the digital transformation arena

Making digital shifts is, by nature, a technical operation. It requires personnel with advanced and varied expertise in areas such as AI and data operations. Leaders in the digital transformation space are expected to possess enough cross-domain competency to tackle tough problems.

That’s a big ask — bringing a host of technically minded people together can easily lead to a culture of expertise arrogance that leaves people who don’t know the lingo intimidated and reluctant to ask questions.

Digital transformation isn’t simply about infrastructure or tools. It is, at its heart, about change management, and a multifunctional approach is needed to ensure a healthy transition. The biggest mistake companies can make is assuming that only technical experts should be at the table. The silos that are built as a result inevitably turn into echo chambers — the last place you want to hold a conversation about ethics.

In the rush to go digital, regardless of how technical the problem, the solution will still be a fundamentally human-centric one.

Ethical digital transformation needs a starting point

Not all ethical imperatives related to digital transformation are as debatable as the suggestion that it should be people-first; some are much more black and white, like the fact that you have to start somewhere to get anywhere.

Luckily, “somewhere” doesn’t have to be from scratch. Governance, risk and compliance (GRC) standards can be used to create a highly structured framework that’s mostly closed to interpretation and provides a solid foundation for building out and adopting digital solutions.

The utility of GRC models applies equally to startups and multinationals, and offers more than just a playbook; thoughtful application of GRC standards can also help with leadership evaluation, progress reports and risk analysis. Think of it like using bowling bumpers — they won’t guarantee you roll a strike, but they’ll definitely keep the ball out of the gutter.

Of course, a given company might not know how to create a GRC-based framework (just like most of us would be at a loss if tasked with building a set of bowling bumpers). This is why many turn to platforms and frameworks like IBM OpenPages, COBIT and ITIL for prefab foundations. These “starter kits” all share a single goal: Identify policies and controls that are relevant to your industry or organization and draw lines from those to pivotal compliance points.
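
As a rough sketch of what that last step can look like, here is a minimal, hypothetical control-to-compliance mapping. The control names, framework references and structure are invented for illustration and do not reflect OpenPages, COBIT or ITIL specifically:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """An internal control, e.g. 'encrypt customer data at rest'."""
    control_id: str
    description: str
    owner: str  # team or role accountable for the control

@dataclass
class CompliancePoint:
    """A requirement from a regulation or standard that controls map to."""
    framework: str                       # e.g. "GDPR" or "SOC 2" (illustrative)
    reference: str                       # clause or criterion identifier
    controls: list = field(default_factory=list)

# Hypothetical mapping: one compliance point backed by two internal controls.
encryption = Control("CTL-01", "Encrypt customer data at rest", "Platform team")
access_review = Control("CTL-02", "Quarterly access review of production systems", "Security team")
art_32 = CompliancePoint("GDPR", "Art. 32 (security of processing)",
                         controls=[encryption, access_review])

def gaps(points):
    """Compliance points with no mapped control: the places that need attention."""
    return [p.reference for p in points if not p.controls]

print(gaps([art_32, CompliancePoint("SOC 2", "CC6.1")]))  # -> ['CC6.1']
```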

Although GRC tooling is typically cloud-based and the process at least partially automated, getting started requires organizationwide input and transparency. It can’t be effectively run by specific departments, or in a strictly top-down fashion. In fact, the single most important thing to understand about implementing GRC standards is that it will almost certainly fail unless both an organization’s leadership and broader culture fully support the direction in which it points.

An ethics-first mindset protects employees and the bottom line

Today’s leaders — executives, entrepreneurs, influencers and more — can’t be solely concerned with “winning” the digital race. Arguably, transformation is more of a marathon than a sprint, but either way, technique matters. In pursuing the end goal of competitive advantage, the how and why matter just as much as the what.

This is true for all arms of an organization. Internal stakeholders such as owners and employees risk their careers and reputations by tolerating a peripheral approach to ethics. External stakeholders like customers, investors and suppliers have just as much to lose. Their mutual understanding of this fact is what’s behind the collective, cross-industry push for transparency.

We’ve all seen the massive blowback against individuals and brands in the public eye who allow ethical lapses on their watch. It’s impossible to fully eliminate the risk of experiencing something similar, but it is a risk that can be managed. The danger is in letting the “tech blinders” of digital transformation interfere with your view of the big picture.

Companies that want to mitigate that risk and rise to the challenges of the digital era in a truly ethical way need to start by simply having conversations about what ethics, transparency and inclusivity mean — both in and around the organization. They need to follow up those conversations with action where necessary, and with open-mindedness across the board.

It’s smart to be worried about innovation lag in a time when enterprise is moving and shifting faster than ever, but there is time to make all the proper ethical considerations. Failing to do so will only derail you down the line.

Europe charges Apple with antitrust breach, citing Spotify App Store complaint

By Natasha Lomas

The European Commission has announced that it has issued formal antitrust charges against Apple, saying today that its preliminary view is that Apple’s App Store rules distort competition in the market for music streaming services by raising the costs of competing music streaming app developers.

The Commission began investigating competition concerns related to the iOS App Store (and also Apple Pay) last summer.

“The Commission takes issue with the mandatory use of Apple’s own in-app purchase mechanism imposed on music streaming app developers to distribute their apps via Apple’s App Store,” it wrote today. “The Commission is also concerned that Apple applies certain restrictions on app developers preventing them from informing iPhone and iPad users of alternative, cheaper purchasing possibilities.”

Commenting in a statement, EVP and competition chief Margrethe Vestager, said: “App stores play a central role in today’s digital economy. We can now do our shopping, access news, music or movies via apps instead of visiting websites. Our preliminary finding is that Apple is a gatekeeper to users of iPhones and iPads via the App Store. With Apple Music, Apple also competes with music streaming providers. By setting strict rules on the App store that disadvantage competing music streaming services, Apple deprives users of cheaper music streaming choices and distorts competition. This is done by charging high commission fees on each transaction in the App store for rivals and by forbidding them from informing their customers of alternative subscription options.”

Apple sent us this statement in response:

“Spotify has become the largest music subscription service in the world, and we’re proud for the role we played in that. Spotify does not pay Apple any commission on over 99% of their subscribers, and only pays a 15% commission on those remaining subscribers that they acquired through the App Store. At the core of this case is Spotify’s demand they should be able to advertise alternative deals on their iOS app, a practice that no store in the world allows. Once again, they want all the benefits of the App Store but don’t think they should have to pay anything for that. The Commission’s argument on Spotify’s behalf is the opposite of fair competition.”

Vestager is due to hold a press conference shortly — so stay tuned for updates.

This story is developing… 

A number of complaints against Apple’s practices have been lodged with the EU’s competition division in recent years — including by music streaming service Spotify; video games maker Epic Games; and messaging platform Telegram, to name a few of the complainants who have gone public (and been among the most vocal).

The main objection is over the (up to 30%) cut Apple takes on sales made through third parties’ apps — which critics rail against as an ‘Apple tax’ — as well as how it can mandate that developers do not inform users how to circumvent its in-app payment infrastructure, i.e. by signing up for subscriptions via their own website instead of through the App Store. Other complaints include that Apple does not allow third party app stores on iOS.
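
To see why developers frame the cut as a cost problem, the arithmetic is worth spelling out. A minimal sketch with made-up prices (not any particular developer's actual pricing): to net a given amount per subscription after a 30% commission, the in-app price has to rise by roughly 43%.

```python
def in_app_price_needed(net_target: float, commission_rate: float) -> float:
    """Price a developer must charge in-app to net `net_target` after the store's cut."""
    return net_target / (1 - commission_rate)

# Illustrative numbers only: netting $9.99 after a 30% cut means charging ~$14.27
# in-app, while the same $9.99 bought directly on the developer's own site nets $9.99.
print(round(in_app_price_needed(9.99, 0.30), 2))  # 14.27
```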

Apple, meanwhile, has argued that its App Store does not constitute a monopoly. iOS’ global market share of mobile devices is a little over 10% vs Google’s rival Android OS — which is running on the lion’s share of the world’s mobile hardware. But monopoly status depends on how a market is defined by regulators (and if you’re looking at the market for iOS apps then Apple has no competitors).

The iPhone maker also likes to point out that the vast majority of third party apps pay it no commission (as they don’t monetize via in-app payments), while arguing that restrictions on native apps are necessary to protect iOS users from threats to their security and privacy.

Last summer the European Commission said its App Store probe was focused on Apple’s mandatory requirement that app developers use its proprietary in-app purchase system, as well as restrictions applied on the ability of developers to inform iPhone and iPad users of alternative cheaper purchasing possibilities outside of apps.

It also said it was investigating Apple Pay: Looking at the T&Cs and other conditions Apple imposes for integrating its payment solution into others’ apps and websites on iPhones and iPads, and also on limitations it imposes on others’ access to the NFC (contactless payment) functionality on iPhones for payments in stores.

The EU’s antitrust regulator also said then that it was probing allegations of “refusals of access” to Apple Pay.

In March this year the UK also joined the Apple App Store antitrust investigation fray — announcing a formal investigation into whether it has a dominant position and if it imposes unfair or anti-competitive terms on developers using its app store.

US lawmakers have, meanwhile, also been dialling up attention on app stores, plural — and on competition in digital markets more generally — calling in both Apple and Google for questioning over how they operate their respective mobile app marketplaces in recent years.

Last month, for example, the two tech giants’ representatives were pressed on whether their app stores share data with their product development teams — with lawmakers digging into complaints against Apple especially that Cupertino frequently copies others’ apps, ‘sherlocking’ their businesses by releasing native copycats (as the practice has been nicknamed).

Back in July 2020 the House Antitrust Subcommittee took testimony from Apple CEO Tim Cook himself — and went on, in a hefty report on competition in digital markets, to accuse Apple of leveraging its control of iOS and the App Store to “create and enforce barriers to competition and discriminate against and exclude rivals while preferencing its own offerings”.

“Apple also uses its power to exploit app developers through misappropriation of competitively sensitive information and to charge app developers supra-competitive prices within the App Store,” the report went on. “Apple has maintained its dominance due to the presence of network effects, high barriers to entry, and high switching costs in the mobile operating system market.”

The report did not single Apple out — also blasting Google-owner Alphabet, Amazon and Facebook for abusing their market power. And the Justice Department went on to file suit against Google later the same month. So, over in the U.S., the stage is being set for further actions against big tech. Although what, if any, federal charges Apple could face remains to be seen.

At the same time, a number of state-level tech regulation efforts are brewing around big tech and antitrust — including a push in Arizona to relieve developers from Apple and Google’s hefty cut of app store profits.

An antitrust bill introduced by Republican Josh Hawley earlier this month, meanwhile, takes aim at acquisitions, proposing an outright block on big tech’s ability to carry out mergers and acquisitions.

Although that bill looks unlikely to succeed, a flurry of antitrust reform bills are set to be introduced as U.S. lawmakers on both sides of the aisle grapple with how to cut big tech down to a competition-friendly size.

In Europe lawmakers are already putting down draft laws with the same overarching goal.

In the EU the Commission has proposed an ex ante regime to prevent big tech from abusing its market power, with the Digital Markets Act set to impose conditions on intermediating platforms that are considered ‘gatekeepers’ to others’ market access.

In the UK, which now sits outside the bloc, the government is also drafting new laws in response to tech giants’ market power — saying it will create a ‘pro-competition’ regime that will apply to platforms with so-called  ‘strategic market status’ — but instead of a set list of requirements it wants to target specific measures per platform.

Click Studios asks customers to stop tweeting about its Passwordstate data breach

By Zack Whittaker

Australian security software house Click Studios has told customers not to post emails sent by the company about its data breach, which allowed hackers to push a malicious update to its flagship enterprise password manager, Passwordstate, in order to steal customer passwords.

Last week, the company told customers to “commence resetting all passwords” stored in its flagship password manager after the hackers pushed the malicious update to customers over a 28-hour window between April 20-22. The update was designed to contact the attackers’ servers and retrieve malware that would steal the password manager’s contents and send them back to the attackers.

In an email to customers, Click Studios did not say how the attackers compromised the password manager’s update feature, but included a link to a security fix.

But news of the breach only became public after Danish cybersecurity firm CSIS Group published a blog post with details of the attack hours after Click Studios emailed its customers.

Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.

In an update posted to its website on Wednesday, Click Studios said customers are “requested not to post Click Studios correspondence on Social Media.” The email adds: “It is expected that the bad actor is actively monitoring Social Media, looking for information they can use to their advantage, for related attacks.”

“It is expected the bad actor is actively monitoring social media for information on the compromise and exploit. It is important customers do not post information on Social Media that can be used by the bad actor. This has happened with phishing emails being sent that replicate Click Studios email content,” the company said.

Besides a handful of advisories published by the company since the breach was discovered, the company has refused to comment or respond to questions.

It’s also not clear if the company has disclosed the breach to U.S. and EU authorities where the company has customers, but where data breach notification rules obligate companies to disclose incidents. Companies can be fined up to 4% of their annual global revenue for falling foul of Europe’s GDPR rules.

Click Studios chief executive Mark Sandford has not responded to repeated requests (from TechCrunch) for comment. Instead, TechCrunch received the same canned autoresponse from the company’s support email saying that the company’s staff are “focused only on assisting customers technically.”

TechCrunch emailed Sandford again on Thursday for comment on the latest advisory, but did not hear back.

EU adopts rules on one-hour takedowns for terrorist content

By Natasha Lomas

The European Parliament approved a new law on terrorist content takedowns yesterday, paving the way for one-hour removals to become the legal standard across the EU.

The regulation “addressing the dissemination of terrorist content online” will come into force shortly after publication in the EU’s Official Journal — and start applying 12 months after that.

The incoming regime means providers serving users in the region must act on terrorist content removal notices from Member State authorities within one hour of receipt, or else provide an explanation why they have been unable to do so.

There are exceptions for educational, research, artistic and journalistic work — with lawmakers aiming to target terrorism propaganda being spread on online platforms like social media sites.

The types of content lawmakers want speedily removed under this regime include material that incites, solicits or contributes to terrorist offences; provides instructions for such offences; or solicits people to participate in a terrorist group.

Material posted online that provides guidance on how to make and use explosives, firearms or other weapons for terrorist purposes is also in scope.

However, concerns have been raised over the impact on online freedom of expression, particularly if platforms use content filters to shrink their risk, given the tight turnaround times required for removals.

The law does not put a general obligation on platforms to monitor or filter content but it does push service providers to prevent the spread of proscribed content — saying they must take steps to prevent propagation.

It is left up to service providers how exactly they do that, and while there’s no legal obligation to use automated tools, it seems likely filters will be what larger providers reach for, with the risk of unjustified, speech-chilling takedowns following close behind.

Another concern is how exactly terrorist content is being defined under the law — with civil rights groups warning that authoritarian governments within Europe might seek to use it to go after critics based elsewhere in the region.

The law does include transparency obligations — meaning providers must publicly report information about content identification and takedown actions annually.

On the sanctions side, Member States are responsible for adopting rules on penalties but the regulation sets a top level of fines for repeatedly failing to comply with provisions at up to 4% of global annual turnover.

EU lawmakers proposed the new rules back in 2018, when concern was riding high over the spread of ISIS content online.

Platforms were pressed to abide by an informal one-hour takedown rule in March of the same year. But within months the Commission came out with a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

Negotiations over the proposal have seen MEPs and Member States (via the Council) tweaking provisions — with the former, for example, pushing for a provision that requires the competent authority to contact companies that have never received a removal order a little in advance of issuing the first order to remove content — to provide them with information on procedures and deadlines — so they’re not caught entirely on the hop.

The impact on smaller content providers has continued to be a concern for critics, though.

The Council adopted its final position in March. The approval by the Parliament yesterday concludes the co-legislative process.

Commenting in a statement, MEP Patryk JAKI, the rapporteur for the legislation, said: “Terrorists recruit, share propaganda and coordinate attacks on the internet. Today we have established effective mechanisms allowing member states to remove terrorist content within a maximum of one hour all around the European Union. I strongly believe that what we achieved is a good outcome, which balances security and freedom of speech and expression on the internet, protects legal content and access to information for every citizen in the EU, while fighting terrorism through cooperation and trust between states.”

EU-based digital assets platform Finoa inks $22M Series A funding led by Balderton Capital

By Mike Butcher

Institutions need to keep their crypto assets somewhere, and they aren’t going to keep them on some random, consumer-grade crypto operation. This requires more sophisticated technology. Furthermore, being in the EU is going to be a key barrier to entry for many US or Asia-based operations.

Thus it is that Berlin-based digital asset custody and financial services platform Finoa has closed a $22 million Series A funding round to do just that.

The round was led by Balderton Capital, alongside existing investors Coparion, Venture Stars and Signature Ventures, as well as an undisclosed investor.

Crucially, the Berlin-based startup works with Dapper Labs’ FLOW protocol, NEAR, and Mina, which are fast becoming standards for crypto assets. It is going up against large players such as Anchorage, Coinbase Custody, Bitgo, exchanges like Binance and Kraken, and self-custody solutions like Ledger.

Finoa says it now has over 250 customers, including T-Systems, DeFi-natives like CoinList and financial institutions like Bankhaus Scheich.

The company says its plan is to become a regulated platform for institutional investors and corporations to manage their digital assets. It has received a preliminary crypto custody license and is supervised by the German Federal Financial Supervisory Authority (BaFin).

The company was founded in 2018 by Christopher May and Henrik Ebbing, who had previously worked together at McKinsey and started working in blockchain in 2017.

May commented: “We are proud to have established Finoa as Europe’s leading gateway for institutional participation and incredibly excited to accelerate our growth even further. We look forward to supporting new exciting protocols and projects, empowering innovative corporate use cases, and adding additional (decentralized) financial products and services to our platform.”

Colin Hanna, Principal at Balderton Capital, who leads most of Balderton’s Crypto investments, said: “Chris, Henrik, and the entire Finoa team have built a deeply impressive business which bridges the highest levels of professionalism with radical innovation. As custodians of digital asset private keys, Finoa needs to be trusted both with the secure management of those keys and with the products and services that allow their clients to fully leverage the power of native digital assets. The team they have assembled is uniquely positioned to do just that.” 

May added: “We identified a lack of sophisticated custody and asset servicing solutions for safeguarding and managing blockchain-based digital assets that successfully cover the needs of institutional investors. Finoa is bridging this gap by providing seamless, safe, and regulated access to the world of digital assets.”

“Being in the European Union requires a fundamentally different organizational setup, and poses a very high entry to new incumbents and other players overseas. There are few that have managed to do what Finoa has done in a European context and hence why we now see ourselves in a leading position.”

Vaccine Registries Are Good, Vaccine Apps Are Invasive

By Albert Fox Cahn, Mahima Arya
Registries are essential to reopening schools. Apps act as invasive bouncers that could block many communities out of essential spaces.

TikTok to open a ‘Transparency’ Center in Europe to take content and security questions

By Natasha Lomas

TikTok will open a center in Europe where outside experts will be shown information on how it approaches content moderation and recommendation, as well as platform security and user privacy, it announced today.

The European Transparency and Accountability Centre (TAC) follows the opening of a U.S. center last year — and is similarly being billed as part of its “commitment to transparency”.

Soon after announcing its U.S. TAC, TikTok also created a content advisory council in the market — and went on to replicate the advisory body structure in Europe this March, with a different mix of experts.

It’s now fully replicating the U.S. approach with a dedicated European TAC.

To date, TikTok said more than 70 experts and policymakers have taken part in a virtual U.S. tour, where they’ve been able to learn operational details and pose questions about its safety and security practices.

The short-form video social media site has faced growing scrutiny over its content policies and ownership structure in recent years, as its popularity has surged.

Concerns in the U.S. have largely centered on the risk of censorship and the security of user data, given the platform is owned by a Chinese tech giant and subject to Internet data laws defined by the Chinese Communist Party.

In Europe, meanwhile, lawmakers, regulators and civil society have been raising a broader mix of concerns — including around issues of child safety and data privacy.

In one notable development earlier this year, the Italian data protection regulator made an emergency intervention after the death of a local girl who had reportedly been taking part in a content challenge on the platform. TikTok agreed to recheck the age of all users on its platform in Italy as a result.

TikTok said the European TAC will start operating virtually, owing to the ongoing COVID-19 pandemic. But the plan is to open a physical center in Ireland — where it bases its regional HQ — in 2022.

EU lawmakers have recently proposed a swathe of updates to digital legislation that look set to dial up emphasis on the accountability of AI systems — including content recommendation engines.

A draft AI regulation presented by the Commission last week also proposes an outright ban on subliminal uses of AI technology to manipulate people’s behavior in a way that could be harmful to them or others. So content recommender engines that, for example, nudge users into harming themselves by suggestively promoting pro-suicide content or risky challenges may fall under the prohibition. (The draft law suggests fines of up to 6% of global annual turnover for breaching prohibitions.)

It’s certainly interesting to note TikTok also specifies that its European TAC will offer detailed insight into its recommendation technology.

“The Centre will provide an opportunity for experts, academics and policymakers to see first-hand the work TikTok teams put into making the platform a positive and secure experience for the TikTok community,” the company writes in a press release, adding that visiting experts will also get insights into how it uses technology “to keep TikTok’s community safe”; how trained content review teams make decisions about content based on its Community Guidelines; and “the way human reviewers supplement moderation efforts using technology to help catch potential violations of our policies”.

Another component of the EU’s draft AI regulation sets a requirement for human oversight of high risk applications of artificial intelligence, although it’s not clear whether a social media platform would fall under that specific obligation, given the current set of categories in the draft regulation.

However the AI regulation is just one piece of the Commission’s platform-focused rule-making.

Late last year it also proposed broader updates to rules for digital services, under the DSA and DMA, which will place due diligence obligations on platforms — and also require larger platforms to explain any algorithmic rankings and hierarchies they generate. And TikTok is very likely to fall under that requirement.

The UK — which is now outside the bloc, post-Brexit — is also working on its own Online Safety regulation, due to be presented this year. So in the coming years there will be multiple content-focused regulatory regimes for platforms like TikTok to comply with in Europe. And opening your algorithms to outside experts may be a hard requirement, not soft PR.

Commenting on the launch of its European TAC in a statement, Cormac Keenan, TikTok’s head of trust and safety, said: “With more than 100 million users across Europe, we recognise our responsibility to gain the trust of our community and the broader public. Our Transparency and Accountability Centre is the next step in our journey to help people better understand the teams, processes, and technology we have to help keep TikTok a place for joy, creativity, and fun. We know there’s lots more to do and we’re excited about proactively addressing the challenges that lie ahead. I’m looking forward to welcoming experts from around Europe and hearing their candid feedback on ways we can further improve our systems.”

 

The Plane Paradox: More Automation Should Mean More Training

By Shem Malmquist, Roger Rapoport
Today's highly automated planes create surprises pilots aren't familiar with. The humans in the cockpit need to be better prepared for the machine's quirks.

The SEC should do more to make startup equity compensation transparent

By Danny Crichton
Yifat Aran Contributor
Dr. Yifat Aran is a visiting scholar at the Technion, Israel Institute of Technology, and an incoming Assistant Professor in Haifa University Faculty of Law. She earned her JSD from Stanford Law School where her dissertation focused on equity-based compensation in Silicon Valley startups.

Imagine that you get a job offer at your dream company. You start to negotiate the contract and everything sounds great except for one detail — your future employer refuses to say in what currency your salary would be paid. It could be U.S. dollars, euros, or perhaps Japanese yen, and you are expected to take a leap of faith and hope for fair pay. It sounds absurd, but this is exactly how the startup equity compensation market currently operates.

The typical scenario is that employers offer a number of stock options or restricted stock units (RSUs) as part of an offer letter, but do not mention the company’s total number of shares. Without this piece of information, employees cannot know whether their grants represent a 0.1% ownership stake, 0.01%, or any other percentage. Employees can ask for this information, but the employer is not required to provide it, and many startups simply don’t.
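
The arithmetic employees are missing is trivial once the denominator is known. A minimal sketch with invented share counts shows how the same grant translates into very different stakes depending on the fully diluted share count the employer withholds:

```python
def ownership_pct(options_granted: int, fully_diluted_shares: int) -> float:
    """Ownership stake implied by an option grant, as a percentage of fully diluted shares."""
    return 100 * options_granted / fully_diluted_shares

# Hypothetical numbers: the identical 10,000-option grant is 0.1% of a company
# with 10 million fully diluted shares, but only 0.01% of one with 100 million.
print(ownership_pct(10_000, 10_000_000))   # 0.1
print(ownership_pct(10_000, 100_000_000))  # 0.01
```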

But that’s not the end of it. Due to lack of proper disclosure requirements, employees are completely oblivious to the most salient form of startup valuation information — data describing the firm’s capitalization table and aggregate liquidation preferences (which determine, in case the company is sold, how much money will be paid to investors before employees receive any payout). By not accounting for the debt-like properties of venture capital financing, employees tend to overestimate the value of their equity grants. This is especially relevant to employees of unicorn companies because the type of terms that are common in late-stage financing have a dramatic and often misleading impact on the value of the company’s common stock.

What have regulators done to fix this? Not much. Under the current regulation, the vast majority of startups are exempted from providing any information to their employees other than a copy of the options plan itself. A small percentage of startups that issue their employees more than $10 million worth of securities over a 12-month period are required to provide additional disclosures, including updated financial statements (two years of consolidated balance sheets, income statements, cash flows, and changes in stockholders’ equity). These disclosures are likely to contain sensitive information about the startup but are only remotely related to the question of valuation that employees want answered. The company’s most recent fair market valuation and a description of the employee’s anticipated payout across various exit scenarios would convey far more useful information.

The problem with the current regulation is not merely that it provides employees with either too much or too little information—it is both and more. As the lyrics of Johnny Mathis and Deniece Williams’ song go, it is “too much, too little, too late.” The regulation mandates the disclosure of too much irrelevant and potentially harmful information, too little material information, and the disclosure is delivered in a timeframe that does not permit efficient decision-making by employees (only after the employee has joined the company).

This situation is unhealthy not only for employees themselves but also for the high-tech labor market as a whole. Talent is a scarce resource that companies of all sizes depend on. Lack of information impedes competition and slows down the flow of employees to better, more promising, opportunities. In the long run, employees’ informational disadvantage can erode the value of equity incentives and make it all the more difficult for startups to compete for talent.

In an article I published in the Columbia Business Law Review, titled “Making Disclosure Work for Startup Employees,” I argue that these problems have a relatively easy fix. Startups that issue over 10% of any class of shares to at least 100 employees should be required to disclose employees’ individual payouts according to an exit waterfall analysis.

Waterfall analysis describes the breakdown of cash flow distribution arrangements. In the case of startup finance, this analysis assumes that the company’s equity is sold and the proceeds are allocated in a “waterfall” down the different classes of shares, according to their respective liquidation preferences, until the common stockholders finally receive the residual claim, if any exists. While the information the model contains can be extremely complicated, the output is not. A waterfall model can render a graph where for each possible “exit valuation” plotted on the x-axis, the employee’s individualized “payout” is indicated on the y-axis. With the help of a cap table management platform, it is as simple as a few mouse clicks.
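
To make the mechanics concrete, here is a deliberately simplified waterfall sketch. It assumes a single series of preferred stock with a 1x non-participating liquidation preference and uses invented numbers, which is far simpler than a real late-stage cap table, but it shows how an employee's payout can be computed for any assumed exit value:

```python
def common_payout_per_share(exit_value: float, preference: float,
                            preferred_shares: float, common_shares: float) -> float:
    """Toy waterfall: preferred takes the greater of its 1x preference or its
    as-converted share of the exit; common stockholders split whatever remains."""
    total_shares = preferred_shares + common_shares
    as_converted = exit_value * preferred_shares / total_shares
    if as_converted >= preference:
        # Preferred converts to common, so everyone shares pro rata.
        return exit_value / total_shares
    # Preferred takes its preference off the top; common gets the residual, if any.
    return max(exit_value - preference, 0) / common_shares

# Invented cap table: $50M of 1x preferences, 5M preferred shares, 5M common shares.
for exit_value in (40e6, 80e6, 200e6):
    per_share = common_payout_per_share(exit_value, 50e6, 5e6, 5e6)
    print(f"Exit ${exit_value/1e6:.0f}M -> 10,000 employee shares are worth ${10_000 * per_share:,.0f}")
```

A real model would layer multiple preferred series, participation rights and caps, but the output has the same shape: a payout curve over exit values, which is exactly the graph employees should be shown.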

This visual representation will allow employees to understand how much they stand to gain across a range of exit values even if they don’t understand the math and legal jargon that operate in the background. Armed with this information, employees would not need the traditional forms of disclosures now mandated by Rule 701, and startups could be relieved of the risk that the information contained in their financial statements would fall into the wrong hands. Critically, I also argue that employees should receive this information as part of the offer letter – before they choose whether to accept a job opportunity that includes an equity compensation component. 

Earlier this year, the SEC released proposed revisions to Rule 701. The proposal includes many developments – among them the introduction of an alternative to the disclosure of financial statements. For startups that hit the threshold of issuing employees over $10 million worth of securities, the proposal allows choosing between disclosing financial statements and providing an independent valuation report of the securities’ fair market value. According to the proposal, the latter should be determined by an independent appraisal consistent with the rules and regulations under Internal Revenue Code Section 409A.

This is a step in the right direction — fair market valuation is far more useful to employees than the firm’s financial statements. However, the disclosure of a 409A valuation in and of itself is just not enough. It is a well-known secret in Silicon Valley that 409A valuations are highly inaccurate. Because the appraisal firm wishes to maintain a long-lasting business relationship with the company, and given that the valuation is based on information provided by the management team and is subject to board approval, the startup maintains nearly full control over the result. Therefore, the company’s 409A valuation has informational value only when it includes the waterfall analysis that was used to generate the outcome. Moreover, the SEC’s proposal still allows the vast majority of startups (as long as they avoid the $10 million threshold) to offer equity grants without providing any meaningful disclosures.  

For over 30 years, the SEC has almost completely deregulated startup equity compensation in order to accommodate the ever-growing need of startups to rely on equity in the war for talent. However, the SEC has paid, and still pays, little attention to the other side of the employment equation—employees’ need for information regarding the value of their equity compensation. The time is ripe to revisit the protection of employees in their investor capacity under the securities regulatory regime.

Solving the security challenges of public cloud

By Ram Iyer
Nick Lippis Contributor
Nick Lippis is an authority on advanced IP networks and their benefits to business objectives. He is the co-founder and co-chair of ONUG, which sponsors biannual meetings of nearly 1,000 IT business leaders of large enterprises.

Experts believe the data-lake market will hit a massive $31.5 billion in the next six years, a prediction that has led to much concern among large enterprises. Why? Well, an increase in data lakes equals an increase in public cloud consumption — which leads to a soaring amount of notifications, alerts and security events.

Around 56% of enterprise organizations handle more than 1,000 security alerts every day and 70% of IT professionals have seen the volume of alerts double in the past five years, according to a 2020 Dark Reading report that cited research by Sumo Logic. In fact, many in the ONUG community see on the order of 1 million events per second. Yes, per second, which works out to tens of trillions of events per year.
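
That yearly figure is just unit conversion, assuming the quoted rate of 1 million events per second is sustained around the clock:

```python
EVENTS_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # roughly 31.5 million seconds

events_per_year = EVENTS_PER_SECOND * SECONDS_PER_YEAR
print(f"{events_per_year:,}")  # 31,536,000,000,000 -- on the order of tens of trillions
```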

Now that we are operating in a digitally transformed world, that number only continues to rise, leaving many enterprise IT leaders scrambling to handle these events and asking themselves if there’s a better way.

Compounding matters is the lack of a unified framework for dealing with public cloud security. End users and cloud consumers are forced to deal with increased spend on security infrastructure such as SIEMs, SOAR, security data lakes, tools, maintenance and staff — if they can find them — to operate with an “adequate” security posture.

Public cloud isn’t going away, and neither is the increase in data and security concerns. But enterprise leaders shouldn’t have to continue scrambling to solve these problems. We live in a highly standardized world. Standard operating processes exist for the simplest of tasks, such as elementary school student drop-offs and checking out a company car. But why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?

The ONUG Collaborative had the same question. Security leaders from organizations such as FedEx, Raytheon Technologies, Fidelity, Cigna, Goldman Sachs and others came together to establish the Cloud Security Notification Framework. The goal is to create consistency in how cloud providers report security events, alerts and alarms, so end users receive improved visibility and governance of their data.

Here’s a closer look at the security challenges with public cloud and how CSNF aims to address the issues through a unified framework.

The root of the problem

A few key challenges are sparking the increased number of security alerts in the public cloud:

  1. Rapid digital transformation sparked by COVID-19.
  2. An expanded network edge created by the modern, work-from-home environment.
  3. An increase in the type of security attacks.

The first two challenges go hand in hand. In March of last year, when companies were forced to shut down their offices and shift operations and employees to a remote environment, the wall between cyber threats and safety came crashing down. This wasn’t a huge issue for organizations already operating remotely, but for major enterprises the pain points quickly boiled to the surface.

Numerous leaders have shared with me how security was outweighed by speed. Keeping everything up and running was prioritized over governance. Each employee effectively held a piece of the company’s network edge in their home office. Without basic governance controls in place or training to teach employees how to spot phishing or other threats, the door was left wide open for attacks.

In 2020, the FBI reported its cyber division was receiving nearly 4,000 complaints per day about security incidents, a 400% increase from pre-pandemic figures.

Another security issue is the growing intelligence of cybercriminals. The Dark Reading report said 67% of IT leaders claim a core challenge is a constant change in the type of security threats that must be managed. Cybercriminals are smarter than ever. Phishing emails, entrance through IoT devices and various other avenues have been exploited to tap into an organization’s network. IT teams are constantly forced to adapt and spend valuable hours focused on deciphering what is a concern and what’s not.

Without a unified framework in place, the volume of incidents will spiral out of control.

Where CSNF comes into play

CSNF will prove beneficial for cloud providers and IT consumers alike. Security platforms often require integration timelines to wrap in all data from siloed sources, including asset inventory, vulnerability assessments, IDS products and past security notifications. These timelines can be expensive and inefficient.

But with a standardized framework like CSNF, the integration process for past notifications is pared down and contextual processes are improved for the entire ecosystem, efficiently reducing spend and saving SecOps and DevSecOps teams time to focus on more strategic tasks like security posture assessment, developing new products and improving existing solutions.

Here’s a closer look at the benefits a standardized approach can create for all parties:

  • End users: CSNF can streamline operations for enterprise cloud consumers, like IT teams, and allow improved visibility and greater control over the security posture of their data. This enhanced sense of protection from improved cloud governance benefits all individuals.
  • Cloud providers: CSNF can eliminate the barrier to entry currently prohibiting an enterprise consumer from using additional services from a specific cloud provider by freeing up added security resources. Also, improved end-user cloud governance encourages more cloud consumption from businesses, increasing provider revenue and providing confidence that their data will be secure.
  • Cloud vendors: Cloud vendors that provide SaaS solutions are spending more on engineering resources to deal with increased security notifications. But with a standardized framework in place, these additional resources would no longer be necessary. Instead of spending money on such specific needs along with labor, vendors could refocus core staff on improving operations and products such as user dashboards and applications.

Working together, all groups can effectively reduce friction from security alerts and create a controlled cloud environment for years to come.

What’s next?

CSNF is in the building phase. Cloud consumers have banded together to compile requirements and continue to provide guidance as a prototype is established. The cloud providers are now in the process of building the key component of CSNF, its Decorator, which provides an open-source multicloud security reporting translation service.
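
CSNF's schema is still being defined, so nothing below should be read as the actual specification; it is only a hypothetical sketch of what a "reporting translation service" does: take differently shaped, provider-specific alerts and decorate them into one common envelope. The provider labels and field names are invented for illustration:

```python
from datetime import datetime, timezone

# Hypothetical field mappings: each provider names the same concepts differently.
FIELD_MAP = {
    "provider_a": {"severity": "sev", "resource": "resourceId", "summary": "title"},
    "provider_b": {"severity": "level", "resource": "arn", "summary": "description"},
}

def decorate(provider: str, raw_alert: dict) -> dict:
    """Translate a provider-specific alert into a common (illustrative) envelope."""
    m = FIELD_MAP[provider]
    return {
        "source": provider,
        "severity": raw_alert[m["severity"]],
        "resource": raw_alert[m["resource"]],
        "summary": raw_alert[m["summary"]],
        "normalized_at": datetime.now(timezone.utc).isoformat(),
    }

# Two differently shaped alerts end up in one envelope a downstream SIEM can consume.
print(decorate("provider_a", {"sev": "high", "resourceId": "vm-123", "title": "Port 22 open to internet"}))
print(decorate("provider_b", {"level": "high", "arn": "bucket-9", "description": "Storage bucket is public"}))
```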

The pandemic created many changes in our world, including new security challenges in the public cloud. Reducing IT noise must be a priority to continue operating with solid governance and efficiency, as it enhances a sense of security, eliminates the need for increased resources and allows for more cloud consumption. ONUG is working to ensure that the industry stays a step ahead of security events in an era of rapid digital transformation.

Since colleges are failing to prepare students for tech jobs, it’s time to bring back apprenticeships

By Annie Siebert
Sophie Ruddock Contributor
Sophie Ruddock is VP, GM North America at Multiverse, a tech startup focused on high-quality education and training through professional apprenticeships.
Ryan Craig Contributor
Ryan Craig is managing director of Achieve Partners, an investment firm focused on the intersection of education and employment, and author of “A New U: Faster + Cheaper Alternatives to College.”

You can always tell a system is broken when you change the inputs and the outcomes don’t improve. Any software engineer will tell you that.

Using this metric, it’s clear the United States’ antiquated higher education system is truly broken. Overpriced and underperforming, the system is failing on two key fronts: addressing racial inequalities and closing our country’s growing tech skills gaps.

For all the changes made to the system to welcome people of color into the classroom, the outcomes in terms of wealth, equity distribution and representation are worse than ever.

On average, Black college graduates owe $25,000 more in student debt than their white peers. Worse still, four years after throwing their caps in the air, 48% of Black graduates owe an average of 12.5% more than they borrowed in the first place.

A labor market built on degree requirements has little hope of correcting course.

Looking past the college experience, the unemployment rate for Black Americans stands at nearly 10%, compared with 5.5% for white Americans, while the typical Black American family has one-eighth the wealth of a white family. This is coupled with the fact that Black people make up just 4.1% of Russell 3000 board members — compared to 13.4% of the population.

This isn’t just a matter of grave injustice. The racial wealth gap costs the U.S. economy $1 trillion to $1.5 trillion in GDP output each year. There is a financial and moral imperative to do something about it.

Then there are the skills gaps: For all the belated changes made to academic programs and curricula, and even though colleges and universities do as good a job as they’ve ever done at preparing students with the cognitive and critical thinking skills they’ll need to be successful in the long run, the college system just isn’t providing the right training for jobs in 2021. Ten years ago, 56% of CEOs were “extremely” or “somewhat” concerned by the lack of talent for digital roles. By 2019, this had jumped to 79%. This is why well over 50% of new and recent graduates are underemployed in their first jobs out of college (two-thirds of whom will still be underemployed five years later, and half still a decade later).

There must be a better way. A way that empowers young people to achieve in-demand skills while avoiding the decades-long burden of student loans. A way that doesn’t discriminate based on socioeconomic background while exposing talent-hungry employers to a new pool of qualified, driven individuals.

In the explosion of edtech businesses with new approaches, we are in danger of overlooking an established model that can be adapted to solve these challenges. That model is apprenticeships.

The apprenticeship movement

There’s a lingering perception in America that apprenticeships are the province of construction and building trades, or even medieval guilds like smithing and glass-blowing. Well, not anymore. While we’ve been focused on edtech, or despairing over the widening skills gap, apprenticeships have been rebooted. Modern, tech-driven apprenticeships are emerging as a faster, cheaper and more impactful alternative to higher education.

In Europe, tech companies — and nontech companies increasingly hiring entry-level workers with discrete tech skills — are already leveraging apprenticeships to provide a direct route into the labor market for diverse talent. From software engineers to data analysts, the apprentice of the 21st century is as likely to wield a keyboard as a wrench.

Fully employed from day one, apprentices earn a wage while they learn on a program that is entirely free to the individual. Their training is delivered alongside their role, with this applied learning approach ensuring relevant skills are tested and embedded right away.

Part of the challenge presented by the existing system is that college provides a single shot of learning at the start of a career, with a focus on knowledge rather than skills. Instead of time-consuming traditional education models, we should be encouraging companies to focus on training individuals for highly skilled jobs and adapting training as roles shift through a lifelong learning journey.

Against a college system churning out graduates armed with knowledge of limited applicability in the workplace, apprentices have real-work experience and transferable skill sets in the tech and digital spaces.

As we write this, tech apprenticeships represent less than 1% of American apprenticeships, while 78% of apprentices are white. But change is in the air. In recent weeks, the Biden administration has gone out of its way to highlight tech as a growth area for apprenticeships.

The president also announced his commitment to raising apprenticeship standards, starting with casting off industry-recognized apprenticeship programs lacking in quality and training rigor.

These aren’t just words, either. The apprenticeship reboot will be powered by a new National Apprenticeship Act. This proposed legislation commits $3 billion over the next five years to expanding registered apprenticeship programs across a range of industries. If it’s done right, tech will be front and center.

The benefit to businesses

All this is welcome good news for businesses desperate to close skills gaps. As roles evolve at an ever-faster pace, it’s becoming more and more difficult to know what a college degree actually says about an individual’s ability. Yes, they went to a “good” school. But when half of Americans say their degree is irrelevant to their current role, how does prestige translate to jobs, let alone ability to perform in the workplace?

Increasingly operating in the dark, tech businesses and nontech managers hiring for tech roles are competing with each other to poach experienced talent into senior roles. That means continuing to fish in a very limited, homogeneous pool, and it’s an expensive short-term solution.

Professional apprenticeships allow business leaders to be more strategic and proactive in their hiring practices. They can mold apprentices to the roles they actually need to fill while focusing on their organization’s specific requirements. It beats relying on uniform, outdated education models.

Better still, by training apprentices from the start of their career, companies inspire loyalty and eliminate the tricky transition phase recent grads and external hires usually need. Once converted to full-time employees, apprentices tend to persist for twice as long as traditional direct hires.

While skills gaps are created by the future racing toward us, racial inequalities are rooted in our past. Professional apprenticeships help break down entrenched structural barriers to careers in industries like tech.

Most important, they look beyond the degree requirements that screen out 67% of Black and 79% of Hispanic Americans. Because apprenticeships are paid pathways to economic opportunity, they truly level the playing field and allow companies to make genuine advances toward racial equality — beyond a few neatly crafted Instagram posts. Meanwhile, by tapping into diverse talent pools early, businesses can develop individuals and build real, recognizable routes to the boardroom.

They would be right, too. A 2020 report by McKinsey found companies with the highest diversity earned 35% more than their industry average. Similarly, the share returns of the most diverse companies in the S&P 500 outperformed the least diverse by a staggering 240%.

The time for change is now. According to the National Center for Education Statistics, 41% of grads end up in roles that don’t require a degree. With COVID-19 hitting young workers particularly hard, this figure is set to rise unless we embrace new approaches, including professional apprenticeships. In creating a direct and meaningful career pathway for young adults, they can help businesses close skills gaps and hit their much-vaunted diversity targets.

There’s no single solution to these challenges. But the professional apprenticeship can be education’s biggest contribution.

 

EU’s top data protection supervisor urges ban on facial recognition in public

By Natasha Lomas

The European Union’s lead data protection supervisor has called for remote biometric surveillance in public places to be banned outright under incoming AI legislation.

The European Data Protection Supervisor’s (EDPS) intervention follows a proposal, put out by EU lawmakers on Wednesday, for a risk-based approach to regulating applications of artificial intelligence.

The Commission’s legislative proposal includes a partial ban on law enforcement’s use of remote biometric surveillance technologies (such as facial recognition) in public places. But the text includes wide-ranging exceptions, and digital and human rights groups were quick to warn over loopholes they argue will lead to a drastic erosion of EU citizens’ fundamental rights. And last week a cross-party group of MEPs urged the Commission to screw its courage to the sticking place and outlaw the rights-hostile tech.

The EDPS, whose role includes issuing recommendations and guidance for the Commission, tends to agree. In a press release today Wojciech Wiewiórowski urged a rethink.

“The EDPS regrets to see that our earlier calls for a moratorium on the use of remote biometric identification systems — including facial recognition — in publicly accessible spaces have not been addressed by the Commission,” he wrote.

“The EDPS will continue to advocate for a stricter approach to automated recognition in public spaces of human features — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — whether these are used in a commercial or administrative context, or for law enforcement purposes.

“A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Wiewiórowski had some warm words for the legislative proposal too, saying he welcomed the horizontal approach and the broad scope set out by the Commission. He also agreed there are merits to a risk-based approach to regulating applications of AI.

But the EDPS has made it clear that the red lines devised by EU lawmakers are a lot pinker in hue than he had hoped for, adding a high-profile voice to the critique that the Commission hasn’t lived up to its much-trumpeted claim to have devised a framework that will ensure ‘trustworthy’ and ‘human-centric’ AI.

The coming debate over the final shape of the regulation is sure to include plenty of discussion over where exactly Europe’s AI red lines should be. A final version of the text isn’t expected to be agreed until next year at the earliest.

“The EDPS will undertake a meticulous and comprehensive analysis of the Commission’s proposal to support the EU co-legislators in strengthening the protection of individuals and society at large. In this context, the EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy,” Wiewiórowski added.

 

Fraud prevention platform Sift raises $50M at over $1B valuation, eyes acquisitions

By Mary Ann Azevedo

With the increase of digital transacting over the past year, cybercriminals have been having a field day.

In 2020, complaints of suspected internet crime surged by 61%, to 791,790, according to the FBI’s 2020 Internet Crime Report. Those crimes — ranging from personal and corporate data breaches to credit card fraud, phishing and identity theft — cost victims more than $4.2 billion.

For companies like Sift — which aims to predict and prevent fraud online even more quickly than cybercriminals adopt new tactics — that increase in crime also led to an increase in business.

Last year, the San Francisco-based company assessed risk on more than $250 billion in transactions, double what it did in 2019. The company has several hundred customers, including Twitter, Airbnb, Twilio, DoorDash, Wayfair and McDonald’s, as well as a global data network of 70 billion events per month.

To meet the surge in demand, Sift said today it has raised $50 million in a funding round that values the company at over $1 billion. Insight Partners led the financing, which included participation from Union Square Ventures and Stripes.

While the company would not reveal hard revenue figures, President and CEO Marc Olesen said that business has tripled since he joined the company in June 2018. Sift was founded out of Y Combinator in 2011, and has raised a total of $157 million over its lifetime.

The company’s “Digital Trust & Safety” platform aims to help merchants not only fight all types of internet fraud and abuse, but also “reduce friction” for legitimate customers. There is, apparently, a fine line between protecting a merchant and upsetting a customer who is legitimately trying to complete a transaction.

Sift uses machine learning and artificial intelligence to automatically surmise whether an attempted transaction or interaction with a business online is authentic or potentially problematic.
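Sift hasn’t published its model internals, but the general pattern described here (score each attempted transaction, then route it by risk) can be sketched in a few lines. Everything in the snippet below, from the feature names to the thresholds and training data, is an invented illustration rather than Sift’s actual system.

```python
# Illustrative only: a toy risk-scoring model for online transactions.
# Features, labels and thresholds are invented, not Sift's real pipeline.
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

# Hypothetical historical transactions:
# [amount_usd, account_age_days, orders_last_24h, ip_country_mismatch]
X_train = np.array([
    [25.0,  900, 1, 0],
    [15.0,  400, 2, 0],
    [980.0,   2, 9, 1],
    [640.0,   1, 7, 1],
    [55.0, 1200, 1, 0],
    [720.0,   3, 6, 1],
])
y_train = np.array([0, 0, 1, 1, 0, 1])  # 1 = confirmed fraud

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new attempted transaction and route it based on risk.
new_txn = np.array([[850.0, 2, 8, 1]])
risk = model.predict_proba(new_txn)[0, 1]

if risk > 0.8:
    decision = "block"
elif risk > 0.4:
    decision = "review"   # escalate to a human fraud analyst
else:
    decision = "allow"    # keep friction low for legitimate buyers

print(f"risk={risk:.2f} -> {decision}")
```

The interesting design problem is less the classifier than the thresholds: set them too aggressively and legitimate customers get blocked, too loosely and fraud slips through.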


One of the things the company has discovered is that fraudsters are often not working alone.

“Fraud vectors are no longer siloed. They are highly innovative and often working in concert,” Olesen said. “We’ve uncovered a number of fraud rings.”

Olesen shared a couple of examples of how the company thwarted fraud incidents last year. One recent scheme involved money laundering in which fraudsters tested stolen debit and credit cards by making guest-checkout donations through fake donation sites.

“By making small donations to themselves, they laundered that money and at the same time tested the validity of the stolen cards so they could use it on another site with significantly higher purchases,” he said.

In another case, the company uncovered fraudsters using Telegram, the messaging app, to offer services such as food delivery fulfilled with stolen credentials.

The data that Sift has accumulated since its inception helps the company “act as the central nervous system for fraud teams.” Sift says that its models become more intelligent with every customer that it integrates.

Insight Partners Managing Director Jeff Lieberman, who is a Sift board member, said his firm initially invested in Sift in 2016 because even at that time, it was clear that online fraud was “rapidly growing.” It was growing not just in dollar amounts, he said, but in the number of methods cybercriminals used to steal from consumers and businesses.

“Sift has a novel approach to fighting fraud that combines massive data sets with machine learning, and it has a track record of proving its value for hundreds of online businesses,” he wrote via email.

When Olesen and the Sift team started the recent process of fundraising, Insight actually approached them before they started talking to outside investors “because both the product and business fundamentals are so strong, and the growth opportunity is massive,” Lieberman added.

“With more businesses heavily investing in online channels, nearly every one of them needs a solution that can intelligently weed out fraud while ensuring a seamless experience for the 99% of transactions or actions that are legitimate,” he wrote. 

The company plans to use its new capital primarily to expand its product portfolio and to scale its product, engineering and sales teams.

Sift also recently tapped Eu-Gene Sung — who has worked in financial leadership roles at Integral Ad Science, BSE Global and McCann — to serve as its CFO.

As to whether that means an IPO is in Sift’s future, Olesen said Sung’s experience taking companies through a growth phase like the one Sift is now experiencing would be valuable. The company is also, for the first time, looking to potentially do some M&A.

“When we think about expanding our portfolio, it’s really a buy/build partner approach,” Olesen said.

To ensure inclusivity, the Biden administration must double down on AI development initiatives

By Ram Iyer
Miriam Vogel Contributor
Miriam Vogel is the president and CEO of EqualAI, a nonprofit organization focused on reducing unconscious bias in artificial intelligence.

The National Security Commission on Artificial Intelligence (NSCAI) issued a report last month delivering an uncomfortable public message: America is not prepared to defend or compete in the AI era. It leads to two key questions that demand our immediate response: Will the U.S. continue to be a global superpower if it falls behind in AI development and deployment? And what can we do to change this trajectory?

Left unchecked, seemingly neutral artificial intelligence (AI) tools can and will perpetuate inequalities and, in effect, automate discrimination. Tech-enabled harms have already surfaced in credit decisions, health care services, and advertising.

To prevent these harms from recurring and growing at scale, the Biden administration must clarify current laws pertaining to AI and machine learning models — both in terms of how we will evaluate use by private actors and how we will govern AI usage within our government systems.

The administration has put a strong foot forward, from key appointments in the tech space to issuing an Executive Order on its first day in office that established an Equitable Data Working Group. This has comforted skeptics concerned about both the U.S. commitment to AI development and its commitment to ensuring equity in the digital space.

But that comfort will be fleeting unless the administration shows strong resolve in making AI funding a reality and establishing the leaders and structures necessary to safeguard its development and use.

Need for clarity on priorities

There has been a seismic shift at the federal level in AI policy and in stated commitments to equality in tech. A number of high-profile appointments by the Biden administration — from Dr. Alondra Nelson as deputy director of the OSTP, to Tim Wu at the NEC, to (our former senior advisor) Kurt Campbell at the NSC — signal that significant attention will be paid to inclusive AI development by experts on the inside.

The NSCAI final report includes recommendations that could prove critical to enabling better foundations for inclusive AI development, such as creating new talent pipelines through a U.S. Digital Service Academy to train current and future employees.

The report also recommends establishing a new Technology Competitiveness Council led by the Vice President. This could prove essential in ensuring that the nation’s commitment to AI leadership remains a priority at the highest levels. It makes good sense to have the administration’s leadership on AI spearheaded by VP Harris in light of her strategic partnership with the President, her tech policy savvy and her focus on civil rights.

The U.S. needs to lead by example

We know AI is powerful in its ability to create efficiencies, such as plowing through thousands of resumes to identify potentially suitable candidates. But it can also scale discrimination, such as the Amazon hiring tool that prioritized male candidates or “digital redlining” of credit based on race.

The Biden administration should issue an Executive Order (EO) to agencies inviting ideation on ways AI can improve government operations. The EO should also mandate checks on AI used by the USG to ensure it is not unintentionally producing discriminatory outcomes.

For instance, AI systems should be evaluated on a set schedule to ensure that embedded, harmful biases are not producing recommendations that are discriminatory or inconsistent with our democratic, inclusive values — and reevaluated routinely, given that AI is constantly iterating and learning new patterns.
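What such a routine evaluation might look for can be shown with a minimal sketch: compare the system’s approval rates across demographic groups and flag large gaps for human review. The decision data, group labels and the four-fifths threshold below are assumptions made for illustration, not a prescribed federal methodology.

```python
# A minimal sketch of one recurring bias check on an AI system's outputs:
# compare approval rates across groups and flag outliers for review.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs, approved in {0, 1}."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical decisions produced by an automated system.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

rates = approval_rates(decisions)
baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: approval {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A flag from a check like this is a prompt for investigation, not proof of discrimination, which is why the human oversight and explainability requirements described below matter.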

Putting a responsible AI governance system in place is particularly critical in the U.S. Government, which is required to offer due process protection when denying certain benefits. For instance, when AI is used to determine allocation of Medicaid benefits, and such benefits are modified or denied based on an algorithm, the government must be able to explain that outcome, aptly termed technological due process.

If decisions are delegated to automated systems without explainability, guidelines and human oversight, we find ourselves in the untenable situation where this basic constitutional right is being denied.

Likewise, the administration has immense power to ensure that AI safeguards by key corporate players are in place through its procurement power. Federal contract spending was expected to exceed $600 billion in fiscal 2020, even before including pandemic economic stimulus funds. The USG could effectuate tremendous impact by issuing a checklist for federal procurement of AI systems — this would ensure the government’s process is both rigorous and universally applied, including relevant civil rights considerations.

Protection from discrimination stemming from AI systems

The government holds another powerful lever to protect us from AI harms: its investigative and prosecutorial authority. An Executive Order instructing agencies to clarify applicability of current laws and regulations (e.g., ADA, Fair Housing, Fair Lending, Civil Rights Act, etc.) when determinations are reliant on AI-powered systems could result in a global reckoning. Companies operating in the U.S. would have unquestionable motivation to check their AI systems for harms against protected classes.

Low-income individuals are disproportionately vulnerable to many of the negative effects of AI. This is especially apparent with regard to credit and loan creation, because they are less likely to have access to traditional financial products or the ability to obtain high scores based on traditional frameworks. This then becomes the data used to create AI systems that automate such decisions.

The Consumer Financial Protection Bureau (CFPB) can play a pivotal role in holding financial institutions accountable for discriminatory lending processes that result from reliance on discriminatory AI systems. An EO mandate would be a forcing function for statements on how AI-enabled systems will be evaluated, putting companies on notice and better protecting the public with clear expectations on AI use.

There is a clear path to liability when an individual acts in a discriminatory way and a due process violation when a public benefit is denied arbitrarily, without explanation. Theoretically, these liabilities and rights would transfer with ease when an AI system is involved, but a review of agency action and legal precedent (or rather, the lack thereof) indicates otherwise.

The administration is off to a good start, such as rolling back a proposed HUD rule that would have made legal challenges against discriminatory AI essentially unattainable. Next, federal agencies with investigative or prosecutorial authority should clarify which AI practices fall under their review and which current laws apply — for instance, HUD for illegal housing discrimination; the CFPB for AI used in credit lending; and the Department of Labor for AI used in determinations made in hiring, evaluations and terminations.

Such action would have the added benefit of establishing a useful precedent for plaintiff actions in complaints.

The Biden administration has taken encouraging first steps signaling its intent to ensure inclusive, less discriminatory AI. However, it must put its own house in order by directing federal agencies to ensure that the development, acquisition and use of AI — internally and by those they do business with — is done in a manner that protects privacy, civil rights, civil liberties and American values.

Bosch sees a place for renewable fuels, challenging proposed European Union engine ban

By Aria Alamalhodaei

Bosch executives on Thursday criticized proposed EU regulations that would ban the internal combustion engine by 2025, saying that lawmakers “shy away” from discussing the consequences of such a ban on employment.

Although the company reported it is creating jobs through its new businesses, particularly its fuel cell business, and said it was filling more than 90% of these positions internally, it also said an all- or mostly-electric transportation revolution would likely affect jobs. As a case in point, the company told reporters that ten Bosch employees are needed to build a diesel powertrain system, three for a gasoline system — but only one for an electrical powertrain.

Instead, Bosch sees a place for renewable synthetic fuels and hydrogen fuel cells alongside electrification. Renewable synthetic fuels made from hydrogen are a different technology from hydrogen fuel cells. Fuel cells use hydrogen to generate electricity, while hydrogen-derived fuels can be combusted in a modified internal combustion engine (ICE).

“An opportunity is being missed if renewable synthetic fuel derived from hydrogen and CO2 remains off-limits in road transport,” Bosch CEO Volkmar Denner said.

“Climate action is not about the end of the internal-combustion engine,” he continued. “It’s about the end of fossil fuels. And while electromobility and green charging power make road transport carbon neutral, so do renewable fuels.”

Electric solutions have limits, Denner said, particularly in powering heavy-duty vehicles. The company earlier this month established a joint venture with Chinese automaker Qingling Motors to build fuel cell powertrains in a test fleet of 70 trucks.

Bosch’s confidence in hydrogen fuel cells and synthetic fuels isn’t to the exclusion of battery-electric mobility. The company, which is one of the world’s largest suppliers of automotive and industrial components, said its electromobility business is growing by almost 40 percent, and the company projects annual sales of electrical powertrain components to increase to around €5 billion ($6 billion) by 2025, a fivefold increase.

However, the German company said it was “keeping its options open” by also investing €600 million ($721.7 million) in fuel cell powertrains in the next three years.

“Ultimately Europe won’t be able to achieve climate neutrality without a hydrogen economy,” Denner said.

Bosch has not been immune from the effects of the global semiconductor shortage, which continues to drag into 2021. Board member Stefan Asenkerschbaumer warned that there is a risk the shortage “will stifle the recovery that was forecast” for this year. Taiwan Semiconductor Manufacturing Company executives told investors earlier this month that the situation may persist into 2022.

Hackers Used to Be Humans. Soon, AIs Will Hack Humanity

By BRUCE SCHNEIER
Like crafty genies, AIs will grant our wishes, and then hack them, exploiting our social, political, and economic systems like never before.

Nations Need Ambassadors to Big Tech

By Alexis Wichowski
Governments see that companies have country-like powers, but they can’t figure out how to deal with their un-country-like structures. Diplomats could help.

The Pandemic Proved That Our Toilets Are Crap

By Chelsea Wald
The core technologies for sewage systems were developed over a hundred years ago. It's time to get better, healthier updates in the pipeline.

Everyone On Facebook’s Oversight Board Should Resign

By Jessica J. González, Carmen Scurato
The committee's coming decision on banning Donald Trump from the platform is meaningless. Its existence only gets in the way of actually fixing Facebook.

Soona raises $10.2M to make remote photo and video shoots easy

By Anthony Ha

Soona, a startup aiming to satisfy the growing content needs of the e-commerce ecosystem, is announcing that it has raised $10.2 million in Series A funding led by Union Square Ventures.

When I wrote about Soona in 2019, the model focused on staging shoots that can deliver videos and photos in 24 hours or less. The startup still operates studios in Austin, Denver and Minneapolis, but co-founder and CEO Liz Giorgi told me that during the pandemic, Soona shifted to a fully virtual/remote model — customers ship their products to Soona, then watch the shoot remotely and offer immediate feedback, and only pay for the photos ($39 each) and video clips ($93 each) that they actually want.

In some cases, the studio isn’t even necessary — Giorgi said that 30% of Soona’s photographers and crew members are working from home.

Soona has now worked with more than 4,000 customers, including Lola Tampons, The Sill, and Wild Earth, with revenue growing 400% last year. Giorgi said that even as larger in-person shoots become possible again, this approach still makes sense for many clients.

“There’s nothing we sell online that does not require a visual, but not every single visual requires a massive full day shoot,” she said.


Image Credits: Soona

Giorgi also suggested that Soona’s approach has unlocked a “new level of scalability,” adding, “Internally at Soona, we really believe in the remote shoot experience. It’s not only more efficient, it’s a lot more fun not having to fly a brand manager from Miami and have them spend a full day at a warehouse in New York. That’s not only cost-prohibitive, it’s also a time-consuming and exhausting process for everyone.”

The new funding follows a $1.2 million seed round. Giorgi said the Series A will allow Soona to develop a subscription product with more collaboration tools and more data about what kinds of visual content are most effective.

“There’s an opportunity to own the visual ecosystem of e-commerce from beginning to end,” she said.

Giorgi also noted that Soona continues to employ its “candor clause” requiring investors to disclose whether they’ve ever faced complaints of sexual harassment or discrimination. In fact, the clause has been expanded to cover complaints around racism, ableism or anti-LGBTQ discrimination.

“In some ways it’s a gate that prevents bad actors from being involved […] but it really drives a deeper connection with the investor and the founder,” Giorgi said. “We can have a conversation about our values and how we see the world. We get to have a conversation about equality and justice at a time when we’re talking a lot about equity and the cap table.”
