
Google’s Gradient Ventures leads $8.2M Series A for Vault Platform’s misconduct reporting SaaS

By Natasha Lomas

Fixing workplace misconduct reporting is a mission that’s snagged London-based Vault Platform backing from Google’s AI focused fund, Gradient Ventures, which is the lead investor in an $8.2 million Series A that’s being announced today.

Other investors joining the round include Illuminate Financial, along with existing backers Kindred Capital and Angular Ventures. The company closed a $4.2M seed round back in 2019.

Vault sells a suite of SaaS tools to enterprises and large scale-up companies to help them proactively manage internal ethics and integrity issues. Alongside tools for staff to report issues, data and analytics are baked into the platform, so it can support customers’ wider audit and compliance requirements.

In an interview with TechCrunch, co-founder and CEO Neta Meidav said that, as well as being wholly on board with the overarching mission of upgrading legacy reporting tools such as staff hotlines meant to surface conduct-related workplace risks (be that bullying and harassment; racism and sexism; or bribery, corruption and fraud), Gradient Ventures was, as you might expect, interested in the potential for applying AI to further enhance Vault’s SaaS-based reporting tool.

A feature of its current platform, called ‘GoTogether’, consists of an escrow system that allows users to submit misconduct reports to the relevant internal bodies, but only if they are not the first or only person to have made a report about the same person — the idea being that this can help encourage staff (or outsiders, where open reporting is enabled) to report concerns they may otherwise hesitate to, for various reasons.
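A minimal sketch of how such an escrow-matching scheme might work, keyed on the alleged perpetrator’s identity as Meidav describes below. This is purely illustrative; Vault’s actual implementation and data model are not public:

```python
from collections import defaultdict

class ReportEscrow:
    """Holds misconduct reports in escrow until a second report names
    the same alleged perpetrator, then releases all held reports."""

    def __init__(self):
        self._held = defaultdict(list)  # perpetrator id -> held reports
        self.released = []              # reports forwarded for review

    def submit(self, perpetrator_id, report):
        self._held[perpetrator_id].append(report)
        # Release everything for this person once there is corroboration.
        if len(self._held[perpetrator_id]) >= 2:
            self.released.extend(self._held.pop(perpetrator_id))

escrow = ReportEscrow()
escrow.submit("mgr-17", "inappropriate comments in standup")
assert escrow.released == []      # a lone report stays in escrow
escrow.submit("mgr-17", "similar behaviour at the offsite")
assert len(escrow.released) == 2  # corroborated, both released
```

The key design point is that a lone report never reaches reviewers, which is what lowers the perceived risk of being first to speak up.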

Vault now wants to expand the feature’s capabilities so it can be used to proactively surface problematic conduct that may not just relate to a particular individual but may even affect a whole team or division — by using natural language processing to help spot patterns and potential linkages in the kind of activity being reported.

“Our algorithms today match on an alleged perpetrator’s identity. However, many events that people might report on are not related to a specific person — they can be more descriptive,” explains Meidav. “For example if you are experiencing some irregularities in accounting in your department, and you’re suspecting that there is some sort of corruption or fraudulent activity happening.”

“If you think about the greatest [workplace misconduct] disasters and crises that happened in recent years — the Dieselgate story at Volkswagen, what happened in Boeing — the common denominator in all these cases is that there’s been some sort of a serious ethical breach or failure which was observed by several people within the organization in remote parts of the organization. And the dots weren’t connected,” she goes on. “So the capacity we’re currently building and increasing — building upon what we already have with GoTogether — is the ability to connect on these repeated events and be able to connect and understand and read the human input. And connect the dots when repeated events are happening — alerting companies’ boards that there is a certain ‘hot pocket’ that they need to go and investigate.

“That would save companies from great risk, great cost, and essentially could prevent huge loss. Not only financial but reputational, sometimes it’s even loss to human lives… That’s where we’re getting to and what we’re aiming to achieve.”

There is the question of how defensible Vault’s GoTogether feature is — how easily it could be copied — given you can’t patent an idea. So baking in AI smarts may be a way to layer in added sophistication and help maintain a competitive edge.

“There’s some very sophisticated, unique technology there in the backend so we are continuing to invest in this side of our technology. And Gradient’s investment and the specific [support] we’re receiving from Google now will only increase that element and that side of our business,” says Meidav when we ask about defensibility.

Commenting on the funding in a statement, Gradient Ventures founder and managing partner, Anna Patterson, added: “Vault tackles an important space with an innovative and timely solution. Vault’s application provides organizations with a data-driven approach to tackling challenges like occupational fraud, bribery or corruption incidents, safety failures and misconduct. Given their impressive team, technology, and customer traction, they are poised to improve the modern workplace.”

The London-based startup was only founded in 2018 — and while it’s most keen to talk about disrupting legacy hotline systems, which offer only a linear and passive conduit for misconduct reporting, there are a number of other startups playing in the same space. Examples include the likes of LA-based AllVoices, YC-backed Whispli, Hootsworth and Spot, to name a few.

Competition seems likely to continue to increase as regulatory requirements around workplace reporting keep stepping up.

The incoming EU Whistleblower Protection Directive is one piece of regulation Vault expects will increase demand for smarter compliance solutions — aka “TrustTech”, as it seeks to badge it — as it will require companies of more than 250 employees to have a reporting solution in place by the end of December 2021, encouraging European businesses to cast around for tools to help shrink their misconduct-related risk.

She also suggests a platform solution can help bridge gaps between different internal teams that may need to be involved in addressing complaints, as well as helping to speed up internal investigations by offering the ability to chat anonymously with the original reporter.

Meidav also flags the rising attention US regulators are giving to workplace misconduct reporting — noting some recent massive awards by the SEC to external whistleblowers, such as the $28M paid out to a single whistleblower earlier this year (in relation to the Panasonic Avionics consultant corruption case).

She also argues that growing numbers of companies going public (such as via the SPAC trend, where there will have been reduced regulatory scrutiny ahead of the ‘blank check’ IPO) raises reporting requirements generally — meaning, again, more companies will need to have in place a system operated by a third party which allows anonymous and non-anonymous reporting. (And, well, we can only speculate whether companies going public by SPAC may be in greater need of misconduct reporting services vs companies that choose to take a more traditional and scrutinized route to market… )

“Just a few years back I had to convince investors that this category really is a category — and fast forward to 2021, congratulations! We have a market here. It’s a growing category and there is competition in this space,” says Meidav.

“What truly differentiates Vault is that we did not just focus on digitizing an old legacy process. We focused on leveraging technology to truly empower more misconduct to surface internally and for employees to speak up in ways that weren’t available for them before. GoTogether is truly unique as well as the things that we’re doing on the operational side for a company — such as collaboration.”

She gives an example of how a customer in the oil and gas sector configured the platform to make use of an anonymous chat feature in Vault’s app so they could provide employees with a secure direct-line to company leadership.

“They’re utilizing the anonymous chat that the app enables for people to have a direct line to leadership,” she says. “That’s incredible. That is such a progressive, forward-looking way to be utilizing this tool.”

Vault Platform’s suite of tools include an employee app and a Resolution Hub for compliance, HR, risk and legal teams (Image credits: Vault Platform)

Meidav says Vault has around 30 customers at this stage, split between the US and EU — its core regions of focus.

And while its platform is geared towards enterprises, its early customer base includes a fair number of scale-ups — with familiar names like Lemonade, Airbnb, Kavak, G2 and OVO Energy on the list.

Scale-ups may be natural customers for this sort of product, given the huge pressures that can be brought to bear on company culture as a startup switches to expanding headcount very rapidly, per Meidav.

“They are the early adopters and they are also very much sensitive to events such as these kind of [workplace] scandals as it can impact them greatly… as well as the fact that when a company goes through a hyper growth — and usually you see hyper growth happening in tech companies more than in any other type of sector — hyper growth is a time when you really, as management, as leadership, it’s really important to safeguard your culture,” she suggests.

“Because it changes very, very quickly and these changes can lead to all sorts of things — and it’s really important that leadership is on top of it. So when a company goes through hyper growth it’s an excellent time for them to incorporate a tool such as Vault. As well as the fact that every company that even thinks of an IPO in the coming months or years will do very well to put a tool like Vault in place.”

Expanding Vault’s own team is also on the cards after this Series A close, as it guns for the next phase of growth for its own business. Presumably, though, it’s not short of a misconduct reporting solution.

EU to review TikTok’s ToS after child safety complaints

By Natasha Lomas

TikTok has a month to respond to concerns raised by European consumer protection agencies earlier this year, EU lawmakers said today.

The Commission has launched what it described as “a formal dialogue” with the video sharing platform over its commercial practices and policy.

Areas of specific concern include hidden marketing, aggressive advertising techniques targeted at children, and certain contractual terms in TikTok’s policies that could be considered misleading and confusing for consumers, per the Commission.

Commenting in a statement, justice commissioner Didier Reynders added: “The current pandemic has further accelerated digitalisation. This has brought new opportunities but it has also created new risks, in particular for vulnerable consumers. In the European Union, it is prohibited to target children and minors with disguised advertising such as banners in videos. The dialogue we are launching today should support TikTok in complying with EU rules to protect consumers.”

The background to this is that back in February the European Consumer Organisation (BEUC) sent the Commission a report calling out a number of TikTok’s policies and practices — including what it said were unfair terms and copyright practices. It also flagged the risk of children being exposed to inappropriate content on the platform, and accused TikTok of misleading data processing and privacy practices.

Complaints were filed around the same time by consumer organisations in 15 EU countries — urging those national authorities to investigate the social media giant’s conduct.

The multi-pronged EU action means TikTok not only has the Commission looking at the detail of its small print but is also facing questions from a network of national consumer protection authorities — which is being co-led by the Swedish Consumer Agency and the Irish Competition and Consumer Protection Commission (which handles privacy issues related to the platform).

Nonetheless, the BEUC queried why the Commission hasn’t yet launched a formal enforcement procedure.

“We hope that the authorities will stick to their guns in this ‘dialogue’, which we understand is not yet a formal launch of an enforcement procedure. It must lead to good results for consumers, tackling all the points that BEUC raised. BEUC also hopes to be consulted before an agreement is reached,” a spokesperson for the organization told us.

Also reached for comment, TikTok sent us this statement on the Commission’s action, attributed to its director of public policy, Caroline Greer: 

“As part of our ongoing engagement with regulators and other external stakeholders over issues such as consumer protection and transparency, we are engaging in a dialogue with the Irish Consumer Protection Commission and the Swedish Consumer Agency and look forward to discussing the measures we’ve already introduced. In addition, we have taken a number of steps to protect our younger users, including making all under-16 accounts private-by-default, and disabling their access to direct messaging. Further, users under 18 cannot buy, send or receive virtual gifts, and we have strict policies prohibiting advertising directly appealing to those under the age of digital consent.”

The company told us it uses age verification for personalized ads — saying users must have verified that they are 13+, must be over the age of digital consent in their respective EU country, and must have consented to receive targeted ads.
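The three-condition gate TikTok describes can be sketched as a simple eligibility check. This is illustrative only, not TikTok’s actual code; the per-country consent ages below are assumptions, since member states set the GDPR age of digital consent anywhere between 13 and 16:

```python
# Assumed ages of digital consent per EU country (illustrative values;
# GDPR lets member states pick anywhere from 13 to 16).
DIGITAL_CONSENT_AGE = {"IE": 16, "SE": 13, "FR": 15, "DE": 16}

def eligible_for_personalised_ads(age, country, has_consented):
    """The gate described in the article: 13+, over the country's age
    of digital consent, and has opted in to targeted ads."""
    consent_age = DIGITAL_CONSENT_AGE.get(country, 16)  # default to strictest
    return age >= 13 and age >= consent_age and has_consented

assert eligible_for_personalised_ads(17, "SE", True)
assert not eligible_for_personalised_ads(14, "IE", True)   # under consent age
assert not eligible_for_personalised_ads(20, "FR", False)  # no opt-in
```

Of course, the check is only as strong as the age verification feeding it, which is exactly the weakness critics point to below.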

However TikTok’s age verification technology has been criticized as weak before now — and recent emergency child-safety-focused enforcement action by the Italian national data protection agency has led to TikTok having to pledge to strengthen its age verification processes in the country.

The Italian enforcement action also resulted in TikTok removing more than 500,000 accounts suspected of belonging to users aged under 13 earlier this month — raising further questions about whether it can really claim that under-13s aren’t routinely exposed to targeted ads on its platform.

In further background remarks it sent us, TikTok claimed it has clear labelling of sponsored content. But it also noted it’s made some recent changes — such as switching the label it applies on video advertising from ‘sponsored’ to ‘ad’ to make it clearer.

It also said it’s working on a toggle that aims to make it clearer to users when they may be exposed to advertising by other users, by letting those users prominently disclose that their content contains advertising.

TikTok said the tool is currently in beta testing in Europe but it said it expects to move to general availability this summer and will also amend its ToS to require users to use this toggle whenever their content contains advertising. (But without adequate enforcement that may just end up as another overlooked and easily abused setting.)

The company recently announced a transparency center in Europe in a move that looks intended to counter some of the concerns being raised about its business in the region, as well as to prepare it for the increased oversight that’s coming down the pipe for all digital platforms operating in the EU — as the bloc works to update its digital rulebook.

 

SpotOn raises $125M in a16z-led Series D, triples valuation to $1.875B

By Mary Ann Azevedo

Certain industries were hit harder by the COVID-19 pandemic than others, especially in its early days.

Small businesses, including retailers and restaurants, were negatively impacted by lockdowns and the resulting closures. They had to adapt quickly to survive. If they didn’t use much technology before, they were suddenly being forced to, as so many things shifted to digital last year. For companies like SpotOn, it was a pivotal moment.

The startup, which provides software and payments for restaurants and SMBs, had to step up to help the businesses it serves. Not only for their sake, but its own.

“We really took a hard look at what was happening to our clients. And we realized we needed to pivot, just to be able to support them,” co-CEO and co-founder Matt Hyman recalls. “We had to make a decision because our revenues also were taking a big hit, just like our clients were. Rather than lay off staff or require salary deductions, we stayed true to our core values and just kept plugging away.”

All that “plugging away” has paid off. Today, SpotOn announced it has achieved unicorn status with a $125 million Series D funding round led by Andreessen Horowitz (a16z).

Existing backers DST Global, 01 Advisors, Dragoneer Investment Group and Franklin Templeton also participated in the financing, in addition to new investor Mubadala Investment Company. 

Notably, the round triples the company’s valuation to $1.875 billion compared to its $625 million valuation at the time of its Series C raise last September. It also marks San Francisco-based SpotOn’s third funding event since March 2020, and brings the startup’s total funding to $328 million since its 2017 inception.

Its efforts have also led to impressive growth for the company, which has seen its revenue triple since February 2020, according to Hyman.

Put simply, SpotOn is taking on the likes of Square in the payments space. But the company says its offering extends beyond traditional payment processing and point-of-sale (POS) software. Its platform aims to give SMBs the ability to run their businesses “from building a brand to taking payments and everything in-between.” SpotOn’s goal is to be a “one-stop shop” by incorporating tools that include things such as custom website development, scheduling software, marketing, appointment scheduling, review management, analytics and digital loyalty.

When the pandemic hit, SpotOn ramped up and rolled out 400 “new product innovations,” Hyman said. It also did things like waive $1.5 million in fees (it’s a SaaS business, so for several months it waived its monthly fee, for example, for its integrated restaurant management system). It also acquired a company, SeatNinja, so that it could expand its offering.

“Because a lot of these businesses had to go digital literally overnight, we built a free website for them all,” Hyman said. SpotOn also did things like offer commission-free online ordering for restaurants and helped retail merchants update their websites for e-commerce. “Obviously these businesses were resilient,” Hyman said. “But such efforts also created a lot of loyalty.” 

Today, more than 30,000 businesses use SpotOn’s platform, according to Hyman, with nearly 8,000 of those signing on this year. The company expects that number to triple by year’s end.

Currently, its customers are split about 60% retail and 40% restaurants, but the restaurant side of its business is growing rapidly, according to Hyman.

The reason for that, the company believes, is that while restaurants initially rushed to add online ordering for delivery or curbside pickup, they soon realized they “wanted a more affordable and more integrated solution.”

Image Credits: SpotOn co-founders Zach Hyman, Doron Friedman and Matt Hyman / SpotOn

What makes SpotOn so appealing to its customers, Hyman said, is the fact that it offers an integrated platform so that businesses that use it can save “thousands of dollars” in payments and software fees to multiple, “à la carte” vendors. But it also can integrate with other platforms if needed.

In addition to growing its customer base and revenue, SpotOn has also boosted its headcount to about 1,250 employees (from 850 in March of 2020). Those employees are spread across its offices in San Francisco, Chicago, Detroit, Denver, Mexico City, Mexico and Krakow, Poland.

SpotOn is not currently profitable, which Hyman says is “by choice.”

“We could be cash flow positive technically whenever we choose to be. Right now we’re just so focused on product innovation and talent to exceed the needs of our clients,” he said. “We chose the capital plan so that we could really just double down on what’s working so well.”

The new capital will go toward further accelerating product development and expanding its market presence.

“We’re doubling down on our single integrated restaurant management system,” Hyman said. 

The raise marks the first time that a16z has put money in the startup, although General Partner David George told TechCrunch he was familiar with co-founders Matt Hyman and Zach Hyman through mutual friends.

George estimates that about 80% of restaurants and SMBs use legacy solutions “that are clunky and outdated, and not very customer friendly.” The COVID-19 pandemic has led to more of these businesses seeking digital options.

“We think we’re in the very early days in the transition [to digital], and the opportunity is massive,” he told TechCrunch. “We believe we’re at the tipping point of a big tech replacement cycle for restaurant and small business software, and at the very early stages of this transition to modern cloud-native solutions.”

George was also effusive in his praise for how SpotOn has executed over the past 14 months.

“There are companies that build great products, and companies that can build great sales teams. And there are companies that offer really great customer service,” he said. “It’s rare that you find two of those and extremely rare to find all three of those as we have in SpotOn.”

AI Can Write Disinformation Now—and Dupe Human Readers

By Will Knight
Georgetown researchers used text generator GPT-3 to write misleading tweets about climate change and foreign affairs. People found the posts persuasive.

Google’s LaMDA makes conversations with AIs more conversational

By Devin Coldewey

As far as AI systems have come in their ability to recognize what you’re saying and respond, they’re still very easily confused unless you speak carefully and literally. Google has been working on a new language model called LaMDA that’s much better at following conversations in a natural way, rather than as a series of badly formed search queries.

LaMDA is meant to be able to converse normally about just about anything without any kind of prior training. This was demonstrated in a pair of rather bizarre conversations with an AI first pretending to be Pluto and then a paper airplane.

While the utility of having a machine learning model that can pretend to be a planet (or dwarf planet, a term it clearly resents) is somewhat limited, the point of the demonstration was to show that LaMDA could carry on a conversation naturally even on this random topic, and in the arbitrary fashion of the first person.

Image Credits: Google

The advance here is basically preventing the AI system from being led off track and losing the thread when attempting to respond to a series of loosely associated questions.

Normal conversations between humans jump between topics and call back to earlier ideas constantly, a practice that confuses language models to no end. But LaMDA can at least hold its own and not crash out with a “Sorry, I don’t understand” or a non-sequitur answer.

While most people are unlikely to want to have a full, natural conversation with their phones, there are plenty of situations where this sort of thing makes perfect sense. Groups like kids and older folks who don’t know or don’t care about the formalized language we use to speak to AI assistants will be able to interact more naturally with technology, for instance. And identity will be important if this sort of conversational intelligence is built into a car or appliance. No one wants to ask “Google” how much milk is left in the fridge, but they might ask “Whirly” or “Fridgadore,” the refrigerator speaking for itself.

Even CEO Sundar Pichai seemed unsure as to what exactly this new conversational AI would be used for, and emphasized that it’s still a work in development. But you can probably expect Google’s AIs to be a little more natural in their interactions going forward. And you can finally have that long, philosophical conversation with a random item you’ve always wanted.

Image Credits: Google

NLPCloud.io helps devs add language processing smarts to their apps

By Natasha Lomas

While visual ‘no code‘ tools are helping businesses get more out of computing without the need for armies of in-house techies to configure software on behalf of other staff, access to the most powerful tech tools — at the ‘deep tech’ AI coal face — still requires some expert help (and/or costly in-house expertise).

This is where bootstrapping French startup NLPCloud.io is plying a trade in MLOps/AIOps — or ‘compute platform as a service’ (since it runs the queries on its own servers) — with a focus on natural language processing (NLP), as its name suggests.

Developments in artificial intelligence have, in recent years, led to impressive advances in the field of NLP — a technology that can help businesses scale their capacity to intelligently grapple with all sorts of communications by automating tasks like named entity recognition, sentiment analysis, text classification, summarization, question answering, and part-of-speech tagging, freeing up (human) staff to focus on more complex/nuanced work. (Although it’s worth emphasizing that the bulk of NLP research has focused on the English language — meaning that’s where this tech is most mature; so associated AI advances are not universally distributed.)
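To make one of those tasks concrete, named entity recognition boils down to labelling spans of text with types like PERSON or ORG. The dictionary-lookup stand-in below is a toy; production systems use trained statistical models (such as spaCy’s), and the entity list here is invented for illustration:

```python
import re

# Toy gazetteer; real NER models generalize to names they have never seen.
KNOWN_ENTITIES = {"Google": "ORG", "London": "GPE", "Neta Meidav": "PERSON"}

def toy_ner(text):
    """Return (span, label) pairs for gazetteer hits, in order of appearance."""
    hits = []
    for name, label in KNOWN_ENTITIES.items():
        for m in re.finditer(re.escape(name), text):
            hits.append((m.start(), m.group(), label))
    return [(span, label) for _, span, label in sorted(hits)]

print(toy_ner("Neta Meidav met Google executives in London."))
# [('Neta Meidav', 'PERSON'), ('Google', 'ORG'), ('London', 'GPE')]
```

The gap between this toy and a real model (handling unseen names, ambiguity, context) is precisely the data-science work the article says most businesses would rather not take on.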

Production-ready (pre-trained) NLP models for English are readily available ‘out of the box’. There are also dedicated open source frameworks offering help with training models. But businesses wanting to tap into NLP still need to have the DevOps resources and chops to implement NLP models.

NLPCloud.io is catering to businesses that don’t feel up to the implementation challenge themselves — offering a “production-ready NLP API” with the promise of “no DevOps required”.

Its API is based on Hugging Face and spaCy open-source models. Customers can either choose to use ready-to-use pre-trained models (it selects the “best” open source models; it does not build its own); or they can upload custom models developed internally by their own data scientists — which it says is a point of differentiation vs SaaS services such as Google Natural Language (which uses Google’s ML models) or Amazon Comprehend and Monkey Learn.

NLPCloud.io says it wants to democratize NLP by helping developers and data scientists deliver these projects “in no time and at a fair price”. (It has a tiered pricing model based on requests per minute, which starts at $39 per month and ranges up to $1,199 per month, at the enterprise end, for one custom model running on a GPU. It also offers a free tier so users can test models at low request velocity without incurring a charge.)

“The idea came from the fact that, as a software engineer, I saw many AI projects fail because of the deployment to production phase,” says sole founder and CTO Julien Salinas. “Companies often focus on building accurate and fast AI models but today more and more excellent open-source models are available and are doing an excellent job… so the toughest challenge now is being able to efficiently use these models in production. It takes AI skills, DevOps skills, programming skill… which is why it’s a challenge for so many companies, and which is why I decided to launch NLPCloud.io.”

The platform launched in January 2021 and now has around 500 users, including 30 who are paying for the service. The startup, which is based in Grenoble, in the French Alps, is a team of three for now, plus a couple of independent contractors. (Salinas says he plans to hire five people by the end of the year.)

“Most of our users are tech startups but we also start having a couple of bigger companies,” he tells TechCrunch. “The biggest demand I’m seeing is both from software engineers and data scientists. Sometimes it’s from teams who have data science skills but don’t have DevOps skills (or don’t want to spend time on this). Sometimes it’s from tech teams who want to leverage NLP out-of-the-box without hiring a whole data science team.”

“We have very diverse customers, from solo startup founders to bigger companies like BBVA, Mintel, Senuto… in all sorts of sectors (banking, public relations, market research),” he adds.

Use cases of its customers include lead generation from unstructured text (such as web pages), via named entities extraction; and sorting support tickets based on urgency by conducting sentiment analysis.

Content marketers are also using its platform for headline generation (via summarization). While text classification capabilities are being used for economic intelligence and financial data extraction, per Salinas.

He says his own experience as a CTO and software engineer working on NLP projects at a number of tech companies led him to spot an opportunity in the challenge of AI implementation.

“I realized that it was quite easy to build acceptable NLP models thanks to great open-source frameworks like spaCy and Hugging Face Transformers but then I found it quite hard to use these models in production,” he explains. “It takes programming skills in order to develop an API, strong DevOps skills in order to build a robust and fast infrastructure to serve NLP models (AI models in general consume a lot of resources), and also data science skills of course.

“I tried to look for ready-to-use cloud solutions in order to save weeks of work but I couldn’t find anything satisfactory. My intuition was that such a platform would help tech teams save a lot of time, sometimes months of work for the teams who don’t have strong DevOps profiles.”

“NLP has been around for decades but until recently it took whole teams of data scientists to build acceptable NLP models. For a couple of years, we’ve made amazing progress in terms of accuracy and speed of the NLP models. More and more experts who have been working in the NLP field for decades agree that NLP is becoming a ‘commodity’,” he goes on. “Frameworks like spaCy make it extremely simple for developers to leverage NLP models without having advanced data science knowledge. And Hugging Face’s open-source repository for NLP models is also a great step in this direction.

“But having these models run in production is still hard, and maybe even harder than before as these brand new models are very demanding in terms of resources.”

The models NLPCloud.io offers are picked for performance, where “best” means “the best compromise between accuracy and speed”. Salinas also says they are mindful of context, given NLP can be used for diverse use cases — hence proposing a number of models so as to be able to adapt to a given use.

“Initially we started with models dedicated to entities extraction only but most of our first customers also asked for other use cases too, so we started adding other models,” he notes, adding that they will continue to add more models from the two chosen frameworks — “in order to cover more use cases, and more languages”.

SpaCy and Hugging Face, meanwhile, were chosen as the sources for the models offered via its API based on their track records as companies, the NLP libraries they offer and their focus on production-ready frameworks — with the combination allowing NLPCloud.io to offer a selection of models that are fast and accurate, working within the bounds of their respective trade-offs, according to Salinas.

“SpaCy is developed by a solid company in Germany called Explosion.ai. This library has become one of the most used NLP libraries among companies who want to leverage NLP in production ‘for real’ (as opposed to academic research only). The reason is that it is very fast, has great accuracy in most scenarios, and is an ‘opinionated’ framework, which makes it very simple to use by non-data scientists (the tradeoff is that it gives fewer customization possibilities),” he says.

“Hugging Face is an even more solid company that recently raised $40M for a good reason: they created a disruptive NLP library called ‘transformers’ that improves a lot the accuracy of NLP models (the tradeoff is that it is very resource intensive though). It gives the opportunity to cover more use cases like sentiment analysis, classification, summarization… In addition to that, they created an open-source repository where it is easy to select the best model you need for your use case.”

While AI is advancing at a clip within certain tracks — such as NLP for English — there are still caveats and potential pitfalls attached to automating language processing and analysis, with the risk of getting stuff wrong or worse. AI models trained on human-generated data have, for example, been shown to reflect the embedded biases and prejudices of the people who produced the underlying data.

Salinas agrees NLP can sometimes face “concerning bias issues”, such as racism and misogyny. But he expresses confidence in the models they’ve selected.

“Most of the time it seems [bias in NLP] is due to the underlying data used to train the models. It shows we should be more careful about the origin of this data,” he says. “In my opinion the best solution in order to mitigate this is that the community of NLP users should actively report something inappropriate when using a specific model so that this model can be paused and fixed.”

“Even if we doubt that such a bias exists in the models we’re proposing, we do encourage our users to report such problems to us so we can take measures,” he adds.


Pipe, which aims to be the ‘Nasdaq for revenue,’ raises more money at a $2B valuation

By Mary Ann Azevedo

Fast-growing fintech Pipe has raised another round of funding at a $2 billion valuation, just weeks after raising $50M in growth funding, according to sources familiar with the deal.

Although the round is still ongoing, Pipe has reportedly raised $150 million in a “massively oversubscribed” round led by Baltimore, Md.-based Greenspring Associates. While the company has signed a term sheet, more money could still come in, according to the source. Both new and existing investors have participated in the fundraise.

The increase in valuation is “a significant step up” from the company’s last raise. Pipe declined to comment on the deal.

A little over one year ago, Pipe raised a $6 million seed round led by Craft Ventures to help it pursue its mission of giving SaaS companies a funding alternative outside of equity or venture debt.

The buzzy startup’s goal with the money was to give SaaS companies a way to get their revenue upfront, by pairing them with investors on a marketplace that pays a discounted rate for the annual value of those contracts. (Pipe describes its buy-side participants as “a vetted group of financial institutions and banks.”)

Just a few weeks ago, Miami-based Pipe announced a new raise — $50 million in “strategic equity funding” from a slew of high-profile investors. Siemens’ Next47 and Jim Pallotta’s Raptor Group co-led the round, which also included participation from Shopify, Slack, HubSpot, Okta, Social Capital’s Chamath Palihapitiya, Marc Benioff, Michael Dell’s MSD Capital, Republic, Alexis Ohanian’s Seven Seven Six and Joe Lonsdale.

At that time, Pipe co-CEO and co-founder Harry Hurst said the company was also broadening the scope of its platform beyond strictly SaaS companies to “any company with a recurring revenue stream.” This could include D2C subscription companies, ISPs, streaming services or telecommunications companies. Even VC fund admin and management are being piped on its platform, for example, according to Hurst.

“When we first went to market, we were very focused on SaaS, our first vertical,” he told TC at the time. “Since then, over 3,000 companies have signed up to use our platform.” Those companies range from early-stage and bootstrapped with $200,000 in revenue, to publicly-traded companies.

Pipe’s platform assesses a customer’s key metrics by integrating with its accounting, payment processing and banking systems. It then instantly rates the performance of the business and qualifies it for a trading limit. Trading limits currently range from $50,000 for smaller early-stage and bootstrapped companies, to over $100 million for late-stage and publicly traded companies, although there is no cap on how large a trading limit can be.
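Pipe has not published how it scores businesses, but the shape of the process — ingest revenue metrics, rate the business, emit a limit — can be sketched. Everything below is invented for illustration: the function, the metric names and the weighting are hypothetical; only the $50,000 floor comes from the article.

```python
# Purely illustrative sketch, NOT Pipe's actual model: map a few
# recurring-revenue metrics onto a trading limit. The $50K floor is
# from the article; the scoring formula is invented.
def trading_limit(arr: float, churn_rate: float, growth_rate: float) -> float:
    """Hypothetical scoring: healthier metrics (low churn, high
    growth) earn a larger multiple of annual recurring revenue."""
    score = max(0.0, 1.0 - churn_rate) * (1.0 + growth_rate)
    limit = arr * min(score, 2.0)   # cap the multiple, not the limit
    return max(limit, 50_000.0)     # $50K floor cited in the article

# A bootstrapped company with $200K ARR, 5% churn, 30% growth:
print(trading_limit(arr=200_000, churn_rate=0.05, growth_rate=0.3))
```

The point of the sketch is only that the rating is mechanical: once the accounting and banking integrations supply the inputs, the limit can be produced “instantly,” as the article says.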

In the first quarter of 2021, tens of millions of dollars were traded across the Pipe platform. Between its launch in late June 2020 through year’s end, the company also saw “tens of millions” in trades take place via its marketplace. Tradable ARR on the platform is currently in excess of $1 billion.

Facebook gets a C – Startup rates the ‘ethics’ of social media platforms, targets asset managers

By Mike Butcher

By now you’ve probably heard of ESG (Environmental, Social, Governance) ratings for companies, or ratings for their carbon footprint. Well, now a UK company has come up with a way of rating the ‘ethics’ of social media companies.
EthicsGrade is an ESG ratings agency, focusing on AI governance. Headed up by Charles Radclyffe, the former head of AI at Fidelity, it uses AI-driven models to create a more complete picture of the ESG of organizations, harnessing Natural Language Processing to automate the analysis of huge data sets. This includes tracking controversial topics and public statements.

Frustrated with the green-washing of some ‘environmental’ stocks, Radclyffe realized that the AI governance of social media companies was not being properly considered, despite presenting an enormous risk to investors in the wake of such scandals as the manipulation of Facebook by companies such as Cambridge Analytica during the US Election and the UK’s Brexit referendum.

EthicsGrade Industry Summary Scorecard – Social Media

The idea is that these ratings are used by companies to better see where they should improve. But the twist is that asset managers can also see where the risks of AI might lie.

Speaking to TechCrunch he said: “While at Fidelity I got a reputation within the firm for being the go-to person, for my colleagues in the investment team, who wanted to understand the risks within the technology firms that we were investing in. After being asked a number of times about some dodgy facial recognition company or a social media platform, I realized there was actually a massive absence of data around this stuff as opposed to anecdotal evidence.”

He says that when he left Fidelity he decided EthicsGrade would set out to cover not just ESG but also AI ethics for platforms that are driven by algorithms.

He told me: “We’ve built a model to analyze technology governance. We’ve covered 20 industries. So most of what we’ve published so far has been non-tech companies because these are risks that are inherent in many other industries, other than simply social media or big tech. But over the next couple of weeks, we’re going live with our data on things which are directly related to tech, starting with social media.”

Essentially, what they are doing closely parallels what is being done in the ESG space.

“The question we want to be able to answer is how does TikTok compare against Twitter, or WeChat against WhatsApp. And what we’ve essentially found is that things like GDPR have done a lot of good in terms of raising the bar on questions like data privacy and data governance. But a lot of the other areas that we cover, such as ethical risk or a firm’s approach to public policy, are indeed technical questions about risk management,” says Radclyffe.

But, of course, they are effectively rating algorithms. Are the ratings they are giving the social platforms themselves derived from algorithms? EthicsGrade says they are training their own AI through NLP as they go, so that they can automate what is currently very analyst-centric, just as Sustainalytics et al did years ago in the environmental arena.

So how are they coming up with these ratings? EthicsGrade says it evaluates “the extent to which organizations implement transparent and democratic values, ensure informed consent and risk management protocols, and establish a positive environment for error and improvement.” And this is all achieved, they say, through publicly available data – policies, websites, lobbying and so on. In simple terms, they rate the governance of the AI: not necessarily the algorithms themselves, but the checks and balances in place to ensure that the outcomes and inputs are ethical and managed.
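EthicsGrade has not disclosed its scoring model, but the aggregation step it describes — many governance criteria collapsing into one letter grade — can be illustrated. The criteria names below echo the quoted description; the scores, weights and grade cutoffs are entirely invented:

```python
# Hypothetical illustration, NOT EthicsGrade's methodology: collapse
# per-criterion governance scores (assumed to lie in [0, 1]) into a
# single letter grade. Criteria names echo the article's quote;
# the example scores and cutoffs are invented.
CRITERIA = {
    "transparent_values": 0.7,
    "informed_consent": 0.5,
    "risk_management": 0.6,
    "error_culture": 0.4,
}

def letter_grade(scores: dict) -> str:
    """Average the criterion scores and bucket them into grades."""
    avg = sum(scores.values()) / len(scores)
    for cutoff, grade in [(0.8, "A"), (0.65, "B"), (0.5, "C"), (0.35, "D")]:
        if avg >= cutoff:
            return grade
    return "F"

print(letter_grade(CRITERIA))  # average 0.55 lands in the "C" bucket
```

The NLP work described above would sit upstream of a function like this, turning policies, websites and lobbying records into the per-criterion scores.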

“Our goal really is to target asset owners and asset managers,” says Radclyffe. “So if you look at any of these firms like, let’s say Twitter, 29% of Twitter is owned by five organizations: it’s Vanguard, Morgan Stanley, BlackRock, State Street and ClearBridge. If you look at the ownership structure of Facebook or Microsoft, it’s the same firms: Fidelity, Vanguard and BlackRock. And so really we only need to win a couple of hearts and minds, we just need to convince the asset owners and the asset managers that questions like the ones journalists have been asking for years are pertinent and relevant to their portfolios and that’s really how we’re planning to make our impact.”

Asked if they look at the content of things like tweets, he said no: “We don’t look at content. What we concern ourselves with is how they govern their technology, and where we can find evidence of that. So what we do is we write to each firm with our rating, with our assessment of them. We make it very clear that it’s based on publicly available data. And then we invite them to complete a survey. Essentially, that survey helps us validate data of these firms. Microsoft is the only one that’s completed the survey.”

Ideally, firms will “verify the information, that they’ve got a particular process in place to make sure that things are well-managed and their algorithms don’t become discriminatory.”

In an age increasingly driven by algorithms, it will be interesting to see if this idea of rating them for risk takes off, especially amongst asset managers.

Arm announces the next generation of its processor architecture

By Frederic Lardinois

Arm today announced Armv9, the next generation of its chip architecture. Its predecessor, Armv8, launched a decade ago, and while it has seen its fair share of changes and updates, the new architecture brings a number of major updates to the platform that warrant a shift in version numbers. Unsurprisingly, Armv9 builds on v8 and is backward compatible, but it specifically introduces new security, AI, signal processing and performance features.

Over the last five years, more than 100 billion Arm-based chips have shipped. But Arm believes that its partners will ship over 300 billion in the next decade. We will see the first Armv9-based chips in devices later this year.

Ian Smythe, Arm’s VP of Marketing for its client business, told me that he believes this new architecture will change the way we do computing over the next decade. “We’re going to deliver more performance, we will improve the security capabilities […] and we will enhance the workload capabilities because of the shift that we see in compute that’s taking place,” he said. “The reason that we’ve taken these steps is to look at how we provide the best experience out there for handling the explosion of data and the need to process it and the need to move it and the need to protect it.”

That neatly sums up the core philosophy behind these updates. On the security side, Armv9 will introduce Arm’s confidential compute architecture and the concept of Realms. These Realms enable developers to write applications where the data is shielded from the operating system and other apps on the device. Using Realms, a business application could shield sensitive data and code from the rest of the device, for example.

Image Credits: Arm

“What we’re doing with the Arm Confidential Compute Architecture is worrying about the fact that all of our computing is running on the computing infrastructure of operating systems and hypervisors,” Richard Grisenthwaite, the chief architect at Arm, told me. “That code is quite complex and therefore could be penetrated if things go wrong. And it’s in an incredibly trusted position, so we’re moving some of the workloads so that [they are] running on a vastly smaller piece of code. Only the Realm manager is the thing that’s actually capable of seeing your data while it’s in action. And that would be on the order of about a 10th of the size of a normal hypervisor and much smaller still than an operating system.”

As Grisenthwaite noted, it took Arm a few years to work out the details of this security architecture and ensure that it is robust enough — and during that time Spectre and Meltdown appeared, too, and set back some of Arm’s initial work because some of the solutions it was working on would’ve been vulnerable to similar attacks.

Image Credits: Arm

Unsurprisingly, another area the team focused on was enhancing the CPU’s AI capabilities. AI workloads are now ubiquitous. Arm had already introduced its Scalable Vector Extension (SVE) a few years ago, but at the time, this was meant for high-performance computing solutions like the Arm-powered Fugaku supercomputer.

Now, Arm is introducing SVE2 to enable more AI and digital signal processing (DSP) capabilities. Those can be used for image processing workloads, as well as other IoT and smart home solutions, for example. There are, of course, dedicated AI chips on the market now, but Arm believes that the entire computing stack needs to be optimized for these workloads and that there are a lot of use cases where the CPU is the right choice for them, especially for smaller workloads.

“We regard machine learning as appearing in just about everything. It’s going to be done in GPUs, it’s going to be done in dedicated processors, neural processors, and also done in our CPUs. And it’s really important that we make all of these different components better at doing machine learning,” Grisenthwaite said.

As for raw performance, Arm believes its new architecture will allow chip manufacturers to gain more than 30% in compute power over the next two chip generations, not just for mobile CPUs but also for the kind of infrastructure CPUs that large cloud vendors like AWS now offer their users.

“Arm’s next-generation Armv9 architecture offers a substantial improvement in security and machine learning, the two areas that will be further emphasized in tomorrow’s mobile communications devices,” said Min Goo Kim, the executive vice president of SoC development at Samsung Electronics. “As we work together with Arm, we expect to see the new architecture usher in a wider range of innovations to the next generation of Samsung’s Exynos mobile processors.”
