FreshRSS


Google’s Gradient Ventures leads $8.2M Series A for Vault Platform’s misconduct reporting SaaS

By Natasha Lomas

Fixing workplace misconduct reporting is a mission that’s snagged London-based Vault Platform backing from Google’s AI-focused fund, Gradient Ventures, which is the lead investor in an $8.2 million Series A that’s being announced today.

Other investors joining the round are Illuminate Financial, along with existing investors including Kindred Capital and Angular Ventures. Its $4.2M seed round was closed back in 2019.

Vault sells a suite of SaaS tools to enterprise-sized and large scale-up companies to help them proactively manage internal ethics and integrity issues. As well as tools for staff to report issues, data and analytics are baked into the platform — so it can support customers’ wider audit and compliance requirements.

In an interview with TechCrunch, co-founder and CEO Neta Meidav said that, as well as being wholly on board with the overarching mission of upgrading legacy reporting tools such as staff hotlines used to surface conduct-related workplace risks (be that bullying and harassment; racism and sexism; or bribery, corruption and fraud), Gradient Ventures was, as you might expect, interested in the potential for applying AI to further enhance Vault’s SaaS-based reporting tool.

A feature of its current platform, called ‘GoTogether’, consists of an escrow system that holds a user’s misconduct report and only submits it to the relevant internal bodies once they are not the first or only person to have reported the same individual — the idea being that this can help encourage staff (or outsiders, where open reporting is enabled) to report concerns they may otherwise hesitate to, for various reasons.

Vault now wants to expand the feature’s capabilities so it can be used to proactively surface problematic conduct that may not just relate to a particular individual but may even affect a whole team or division — by using natural language processing to help spot patterns and potential linkages in the kind of activity being reported.
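Vault hasn’t described how GoTogether is implemented, but the mechanism reads as an escrow queue plus a matching rule. The sketch below is purely illustrative under that assumption: the class names and the crude token-overlap similarity (standing in for the natural language processing the company says it is building) are inventions, not Vault’s code.

```python
from dataclasses import dataclass


@dataclass
class Report:
    reporter: str
    subject: str   # alleged person's identity, or "" for purely descriptive reports
    text: str


class EscrowInbox:
    """Illustrative escrow-style reporting queue (not Vault's implementation).

    A report is only released to reviewers once it is no longer the first or
    only report matching the same subject, or once another held report looks
    sufficiently similar in its free-text description.
    """

    def __init__(self, similarity_threshold: float = 0.5):
        self.held = []
        self.released = []
        self.similarity_threshold = similarity_threshold

    @staticmethod
    def _similarity(a: str, b: str) -> float:
        # Crude stand-in for NLP matching: token-overlap (Jaccard) similarity.
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def _matches(self, a: Report, b: Report) -> bool:
        if a.subject and a.subject == b.subject:
            return True  # same alleged perpetrator named in both reports
        return self._similarity(a.text, b.text) >= self.similarity_threshold

    def submit(self, report: Report) -> None:
        matches = [r for r in self.held if self._matches(report, r)]
        if matches:
            # A corroborating report exists: release the whole cluster.
            for r in matches:
                self.held.remove(r)
            self.released.extend(matches + [report])
        else:
            self.held.append(report)  # keep it in escrow for now
```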

“Our algorithms today match on an alleged perpetrator’s identity. However, many events that people might report on are not related to a specific person — they can be more descriptive,” explains Meidav. “For example if you are experiencing some irregularities in accounting in your department, for example, and you’re suspecting that there is some sort of corruption or fraudulent activity happening.”

“If you think about the greatest [workplace misconduct] disasters and crises that happened in recent years — the Dieselgate story at Volkswagen, what happened in Boeing — the common denominator in all these cases is that there’s been some sort of a serious ethical breach or failure which was observed by several people within the organization in remote parts of the organization. And the dots weren’t connected,” she goes on. “So the capacity we’re currently building and increasing — building upon what we already have with GoTogether — is the ability to connect on these repeated events and be able to connect and understand and read the human input. And connect the dots when repeated events are happening — alerting companies’ boards that there is a certain ‘hot pocket’ that they need to go and investigate.

“That would save companies from great risk, great cost, and essentially could prevent huge loss. Not only financial but reputational, sometimes it’s even loss to human lives… That’s where we’re getting to and what we’re aiming to achieve.”

There is the question of how defensible Vault’s GoTogether feature is — how easily it could be copied — given you can’t patent an idea. So baking in AI smarts may be a way to layer added sophistication to try to maintain a competitive edge.

“There’s some very sophisticated, unique technology there in the backend so we are continuing to invest in this side of our technology. And Gradient’s investment and the specific [support] we’re receiving from Google now will only increase that element and that side of our business,” says Meidav when we ask about defensibility.

Commenting on the funding in a statement, Gradient Ventures founder and managing partner, Anna Patterson, added: “Vault tackles an important space with an innovative and timely solution. Vault’s application provides organizations with a data-driven approach to tackling challenges like occupational fraud, bribery or corruption incidents, safety failures and misconduct. Given their impressive team, technology, and customer traction, they are poised to improve the modern workplace.”

The London-based startup was only founded in 2018 — and while it’s most keen to talk about disrupting legacy hotline systems, which offer only a linear and passive conduit for misconduct reporting, there are a number of other startups playing in the same space. Examples include the likes of LA-based AllVoices, YC-backed Whispli, Hootsworth and Spot, to name a few.

Competition seems likely to continue to increase as regulatory requirements around workplace reporting keep stepping up.

The incoming EU Whistleblower Protection Directive is one piece of regulation Vault expects will increase demand for smarter compliance solutions — aka “TrustTech”, as it seeks to badge it — as it will require companies of more than 250 employees to have a reporting solution in place by the end of December 2021, encouraging European businesses to cast around for tools to help shrink their misconduct-related risk.

She also suggests a platform solution can help bridge gaps between different internal teams that may need to be involved in addressing complaints, as well as helping to speed up internal investigations by offering the ability to chat anonymously with the original reporter.

Meidav also flags the rising attention US regulators are giving to workplace misconduct reporting — noting some recent massive awards by the SEC to external whistleblowers, such as the $28M paid out to a single whistleblower earlier this year (in relation to the Panasonic Avionics consultant corruption case).

She also argues that growing numbers of companies going public (such as via the SPAC trend, where there will have been reduced regulatory scrutiny ahead of the ‘blank check’ IPO) raises reporting requirements generally — meaning, again, more companies will need to have in place a system operated by a third party which allows anonymous and non-anonymous reporting. (And, well, we can only speculate whether companies going public by SPAC may be in greater need of misconduct reporting services vs companies that choose to take a more traditional and scrutinized route to market… )

“Just a few years back I had to convince investors that this category really is a category — and fast forward to 2021, congratulations! We have a market here. It’s a growing category and there is competition in this space,” says Meidav.

“What truly differentiates Vault is that we did not just focus on digitizing an old legacy process. We focused on leveraging technology to truly empower more misconduct to surface internally and for employees to speak up in ways that weren’t available for them before. GoTogether is truly unique as well as the things that we’re doing on the operational side for a company — such as collaboration.”

She gives an example of how a customer in the oil and gas sector configured the platform to make use of an anonymous chat feature in Vault’s app so they could provide employees with a secure direct-line to company leadership.

“They’re utilizing the anonymous chat that the app enables for people to have a direct line to leadership,” she says. “That’s incredible. That is such a progressive, forward-looking way to be utilizing this tool.”

Vault Platform’s suite of tools includes an employee app and a Resolution Hub for compliance, HR, risk and legal teams (Image credits: Vault Platform)

Meidav says Vault has around 30 customers at this stage, split between the US and EU — its core regions of focus.

And while its platform is geared towards enterprises, its early customer base includes a fair number of scale-ups — with familiar names like Lemonade, Airbnb, Kavak, G2 and OVO Energy on the list.

Scale-ups may be natural customers for this sort of product given the huge pressures that can be brought to bear upon company culture as a startup switches to expanding headcount very rapidly, per Meidav.

“They are the early adopters and they are also very much sensitive to events such as these kind of [workplace] scandals as it can impact them greatly… as well as the fact that when a company goes through hyper growth — and usually you see hyper growth happening in tech companies more than in any other type of sector — hyper growth is a time when you really, as management, as leadership, it’s really important to safeguard your culture,” she suggests.

“Because it changes very, very quickly and these changes can lead to all sorts of things — and it’s really important that leadership is on top of it. So when a company goes through hyper growth it’s an excellent time for them to incorporate a tool such as Vault. As well as the fact that every company that even thinks of an IPO in the coming months or years will do very well to put a tool like Vault in place.”

Expanding Vault’s own team is also on the cards after this Series A close, as it guns for the next phase of growth for its own business. Presumably, though, it’s not short of a misconduct reporting solution.

AI Can Write Disinformation Now—and Dupe Human Readers

By Will Knight
Georgetown researchers used text generator GPT-3 to write misleading tweets about climate change and foreign affairs. People found the posts persuasive.

Google’s LaMDA makes conversations with AIs more conversational

By Devin Coldewey

As far as AI systems have come in their ability to recognize what you’re saying and respond, they’re still very easily confused unless you speak carefully and literally. Google has been working on a new language model called LaMDA that’s much better at following conversations in a natural way, rather than as a series of badly formed search queries.

LaMDA is meant to be able to converse normally about just about anything without any kind of prior training. This was demonstrated in a pair of rather bizarre conversations with an AI first pretending to be Pluto and then a paper airplane.

While the utility of having a machine learning model that can pretend to be a planet (or dwarf planet, a term it clearly resents) is somewhat limited, the point of the demonstration was to show that LaMDA could carry on a conversation naturally even on this random topic, and in the arbitrary fashion of the first person.

Image Credits: Google

The advance here is basically preventing the AI system from being led off track and losing the thread when attempting to respond to a series of loosely associated questions.

Normal conversations between humans jump between topics and call back to earlier ideas constantly, a practice that confuses language models to no end. But LaMDA can at least hold its own and not crash out with a “Sorry, I don’t understand” or a non-sequitur answer.

While most people are unlikely to want to have a full, natural conversation with their phones, there are plenty of situations where this sort of thing makes perfect sense. Groups like kids and older folks who don’t know or don’t care about the formalized language we use to speak to AI assistants will be able to interact more naturally with technology, for instance. And identity will be important if this sort of conversational intelligence is built into a car or appliance. No one wants to ask “Google” how much milk is left in the fridge, but they might ask “Whirly” or “Fridgadore,” the refrigerator speaking for itself.

Even CEO Sundar Pichai seemed unsure as to what exactly this new conversational AI would be used for, and emphasized that it’s still a work in progress. But you can probably expect Google’s AIs to be a little more natural in their interactions going forward. And you can finally have that long, philosophical conversation with a random item you’ve always wanted.

Image Credits: Google

Duolingo can’t teach you how to speak a language, but now it wants to try

By Natasha Mascarenhas

Duolingo has been wildly successful. It has pulled in 500 million total registered learners, 40 million active users, 1.5 million premium subscribers and $190 million in booked revenues in 2020. It has a popular and meme-ified mascot in the form of the owl Duo, a creative and engaging product, and ambitious plans for expansion. There’s just one key question in the midst of all those milestones: Does anyone actually learn a language using Duolingo?

“Language is first and foremost a social, relational phenomenon,” said Sébastien Dubreil, a teaching professor at Carnegie Mellon University. “It is something that allows people to make meaning and talk to each other and conduct the business of living — and when you do this, you use a ton of different kinds of resources that are not packaged in the vocabulary and grammar.”

Duolingo CEO and co-founder Luis von Ahn estimates that Duolingo’s upcoming product developments will get users from zero to a knowledge job in a different language within the next two to three years. But for now, he is honest about the limits of the platform today.

“I won’t say that with Duolingo, you can start from zero and make your English as good as mine,” he said. “That’s not true. But that’s also not true with learning a language in a university, that’s not true with buying books, that’s not true with any other app.”

Luis von Ahn, the co-founder of Duolingo, visiting President Obama in 2015. Image Credits: Duolingo

While Dubreil doesn’t think Duolingo can teach someone to speak a language, he does think it has taught consistency — a hard nut to crack in edtech. “What Duolingo does is to potentially entice students to do things you cannot pay them enough time to actually do, which is to spend time in that textbook and reinforce vocabulary and the grammar,” he said.

That’s been the key focus for the company since the beginning. “I said this when we started Duolingo and I still really strongly believe it: The hardest thing about learning a language is staying motivated,” von Ahn said, comparing it to how people approach exercise: it’s hard to stay motivated, but a little motion a day goes a long way.

With an enviable lead in its category, Duolingo wants to bring the quality and effectiveness of its curriculum on par with the quality of its product and branding. With growth and monetization secured, Duolingo is no longer in survival mode. Instead, it’s in study mode.

In this final part, we will explore how Duolingo is using a variety of strategies, from rewriting its courses to what it dubs Operation Birdbrain, to become a more effective learning tool, all while balancing the need to keep the growth and monetization engines stoked while en route to an IPO.

Duolingo’s office decor. Image Credits: Duolingo

“Just a funny game that is maybe not as bad as Candy Crush.”

Duolingo’s competitors see the app’s heavy gamification and solitary experience as inherently at odds with high-quality language education. Busuu and Babbel, two subscription-based competitors in the market, both focus on users talking in real time to native speakers.

Bernhard Niesner, the co-founder and CEO of Busuu, which was founded in 2008, sees Duolingo as an entry-level tool that can help users migrate to its human-interactive service. “If you want to be fluent, Duolingo needs innovation,” Niesner said. “And that’s where we come in: We all believe that you should not be learning a language just by yourself, but [ … ] together, which is our vision.” Busuu has more than 90 million users worldwide.

Duolingo has been the subject of a number of efficacy studies over the years. One of its most positive reports, from September 2020, showed that its Spanish and French courses teach the equivalent of four U.S. university semesters in half the time.

Babbel, which has sold over 10 million subscriptions to its language-learning service, cast doubt on the power of these findings. Christian Hillemeyer, who heads PR for the startup, pointed out that Duolingo only tested for reading and writing efficacy — not for speaking proficiency, even though that is a key part of language learning. He described Duolingo as “just a funny game that is maybe not as bad as Candy Crush.”

Putting the ed back into edtech

One of the ironic legacies of Duolingo’s evolution is that for years it outsourced much of the creation of its education curriculum to volunteers. It’s a legacy the company is still trying to rectify.

The year after its public launch, Duolingo debuted its Language Incubator in 2013. Similar to its original translation service, the company wanted to leverage crowdsourcing to invent and refine new language courses. Volunteers — at least at first — were seen as a warm-but-scrappy way to bring new material to the growing Duolingo community, and more than 1,000 volunteers have helped bring new language courses to the app.

SLAIT’s real-time sign language translation promises more accessible online communication

By Devin Coldewey

Sign language is used by millions of people around the world, but unlike Spanish, Mandarin or even Latin, there’s no automatic translation available for those who can’t use it. SLAIT claims the first such tool available for general use, which can translate around 200 words and simple sentences to start — using nothing but an ordinary computer and webcam.

People with hearing impairments, or other conditions that make vocal speech difficult, number in the hundreds of millions and rely on the same common tech tools as the hearing population. But while emails and text chat are useful and of course very common now, they aren’t a replacement for face-to-face communication, and unfortunately there’s no easy way for signing to be turned into written or spoken words, so this remains a significant barrier.

We’ve seen attempts at automatic sign language (usually American/ASL) translation for years and years. In 2012 Microsoft awarded its Imagine Cup to a student team that tracked hand movements with gloves; in 2018 I wrote about SignAll, which has been working on a sign language translation booth using multiple cameras to give 3D positioning; and in 2019 I noted that a new hand-tracking algorithm called MediaPipe, from Google’s AI labs, could lead to advances in sign detection. Turns out that’s more or less exactly what happened.

SLAIT is a startup built out of research done at the Aachen University of Applied Sciences in Germany, where co-founder Antonio Domènech built a small ASL recognition engine using MediaPipe and custom neural networks. Having proved the basic notion, Domènech was joined by co-founders Evgeny Fomin and William Vicars to start the company; they then moved on to building a system that could recognize first 100, and now 200 individual ASL gestures and some simple sentences. The translation occurs offline, and in near real time on any relatively recent phone or computer.
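SLAIT hasn’t published its code, but the pipeline described here (MediaPipe hand tracking feeding a custom classifier) can be sketched roughly as follows. The helper function is invented for illustration; only the MediaPipe and OpenCV calls are real APIs, and a trained gesture classifier would consume the landmark vector where the comment indicates.

```python
import cv2
import mediapipe as mp
import numpy as np
from typing import Optional

mp_hands = mp.solutions.hands


def landmark_features(frame_bgr, hands) -> Optional[np.ndarray]:
    """Return a flat (21*3,) feature vector for the first detected hand, or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = hands.process(rgb)
    if not results.multi_hand_landmarks:
        return None
    landmarks = results.multi_hand_landmarks[0].landmark  # 21 normalized (x, y, z) points
    return np.array([[p.x, p.y, p.z] for p in landmarks]).flatten()


def main():
    capture = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.6) as hands:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            features = landmark_features(frame, hands)
            if features is not None:
                # A custom neural network trained on ASL gestures would take
                # this vector (or a short sequence of them) as input here.
                print("feature vector shape:", features.shape)
            cv2.imshow("hands", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    capture.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```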

Animation showing ASL signs being translated to text, and spoken words being transcribed back into text.

Image Credits: SLAIT

They plan to make it available for educational and development work, expanding their dataset so they can improve the model before attempting any more significant consumer applications.

Of course, the development of the current model was not at all simple, though it was achieved in remarkably little time by a small team. MediaPipe offered an effective, open-source method for tracking hand and finger positions, sure, but the crucial component for any strong machine learning model is data, in this case video data (since it would be interpreting video) of ASL in use — and there simply isn’t a lot of that available.

As they recently explained in a presentation for the DeafIT conference, the team first evaluated an older Microsoft database, but found that a newer Australian academic database had more and better quality data, allowing for the creation of a model that is 92 percent accurate at identifying any of 200 signs in real time. They have augmented this with sign language videos from social media (with permission, of course) and government speeches that have sign language interpreters — but they still need more.

Animated image of a woman saying "deaf understand hearing" in ASL.

A GIF showing one of the prototypes in action — the consumer product won’t have a wireframe, obviously. Image Credits: SLAIT

Their intention is to make the platform available to the deaf and ASL learner communities, who hopefully won’t mind their use of the system being turned to its improvement.

And naturally it could prove an invaluable tool in its present state, since the company’s translation model, even as a work in progress, is still potentially transformative for many people. With the amount of video calls going on these days and likely for the rest of eternity, accessibility is being left behind — only some platforms offer automatic captioning, transcription, summaries, and certainly none recognize sign language. But with SLAIT’s tool people could sign normally and participate in a video call naturally rather than using the neglected chat function.

“In the short term, we’ve proven that 200 word models are accessible and our results are getting better every day,” said SLAIT’s Evgeny Fomin. “In the medium term, we plan to release a consumer facing app to track sign language. However, there is a lot of work to do to reach a comprehensive library of all sign language gestures. We are committed to making this future state a reality. Our mission is to radically improve accessibility for the Deaf and hard of hearing communities.”

From left, Evgeny Fomin, Antonio Domènech and Bill Vicars. Image Credits: SLAIT

He cautioned that it will not be totally complete — just as translation and transcription in or to any language is only an approximation, the point is to provide practical results for millions of people, and a few hundred words goes a long way toward doing so. As data pours in, new words can be added to the vocabulary, and new multigesture phrases as well, and performance for the core set will improve.

Right now the company is seeking initial funding to get its prototype out and grow the team beyond the founding crew. Fomin said they have received some interest but want to make sure they connect with an investor who really understands the plan and vision.

When the engine itself has been built up to be more reliable by the addition of more data and the refining of the machine learning models, the team will look into further development and integration of the app with other products and services. For now the product is more of a proof of concept, but what a proof it is — with a bit more work SLAIT will have leapfrogged the industry and provided something that deaf and hearing people both have been wanting for decades.

Yak Tack is a super simple app to boost vocabulary

By Natasha Lomas

Word nerds with a love for linguistic curiosities and novel nomenclature that’s more fulsome than their ability to make interesting new terms stick will be thrilled by Yak Tack: A neat little aide-mémoire (in Android and iOS app form) designed for expanding (English) vocabulary, either as a native speaker or language learner.

Yak Tack uses adaptive spaced repetition to help users remember new words — drawing on a system devised in the 1970s by German scientist Sebastian Leitner.

The app’s core mechanic is a process it calls ‘tacking’. Here’s how it works: A user comes across a new word and inputs it into Yak Tack to look up what it means (definition content for words and concepts is sourced from Oxford, Merriam-Webster, and Wikipedia via their APIs, per the developer).

Now they can choose to ‘tack’ the word to help them remember it.

This means the app will instigate its system of spaced repetition to combat the routine problem of memory decay/forgetting, as new information tends to be jettisoned by our brains unless we make a dedicated effort to remember it (and/or events conspire to make it memorable for other, not necessarily very pleasant reasons).

Tacked words are shown to Yak Tack users via push notification at spaced intervals (after 1 day, then 2, 3, 5, 8 and 13 days, following the Fibonacci sequence).

Tapping on the notification takes the user to their in-app Tack Board where they get to re-read the definition. It also displays all the words they’ve tacked and their progress in the learning sequence for each one.

After the second repeat of a word there’s a gamified twist as the user must select the correct definition or synonym — depending on how far along in the learning sequence they are — from a multiple-choice list.

Picking the right answer means the learning proceeds to the next Fibonacci interval. An incorrect answer moves the user back to the previous interval — meaning they must repeat that step, retightening (instead of expanding) the information-exposure period; hence adaptive spaced repetition.
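In code, the interval logic described above is compact; the class and field names below are invented for illustration rather than taken from Yak Tack itself.

```python
from dataclasses import dataclass

FIBONACCI_DAYS = [1, 2, 3, 5, 8, 13]  # the review intervals described above


@dataclass
class Tack:
    word: str
    step: int = 0  # index into FIBONACCI_DAYS

    @property
    def next_review_in_days(self) -> int:
        return FIBONACCI_DAYS[self.step]

    def answer(self, correct: bool) -> None:
        """Advance one interval on a correct answer, fall back one on a miss."""
        if correct:
            self.step = min(self.step + 1, len(FIBONACCI_DAYS) - 1)
        else:
            self.step = max(self.step - 1, 0)


# Example: a miss at the 5-day step sends the word back to the 3-day step.
tack = Tack("perspicacious", step=3)
tack.answer(correct=False)
assert tack.next_review_in_days == 3
```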

It’s a simple and neat use of digital prompts to help make new words stick.


The app also has a simple and neat user interface. It actually started as an email-only reminder system, says developer Jeremy Thomas, who made the tool for himself, wanting to expand his own vocabulary — and was (intentionally) the sole user for the first six months after it launched in 2019. (He was also behind an earlier (now discontinued) vocabulary app called Ink Paste.)

For now Yak Tack is a side/passion project so he can keep coding (and indulge his “entrepreneurial proclivities”, as he wordily puts it), his day job being head of product engineering at Gusto. But he sees business potential in bootstrapping the learning tool — and has incorporated it as an LLC.

“We have just over 500 users spread across the world (17 different timezones). We’re biggest in Japan, Germany, and the U.S.,” he tells TechCrunch.

“I’m funding it myself and have no plans to take on investment. I’ve learned to appreciate technology companies that have an actual business model underneath them,” he adds. “There’s an elegance to balancing growth and business fundamentals, and given the low cost of starting a SaaS business, I’m surprised more companies don’t bootstrap, frankly.”

The email-only version of Yak Tack still works (you send an email to word@yaktack.com with the word you’d like to learn as the subject and the spaced repeats happen in the same sequence — but over email). But the mobile app is much more popular, per Thomas.

It is also (inevitably) more social, showing users words tacked by other people who tacked the same word as them — so there’s a bit of word discovery serendipity thrown in. However the user who will get the most out of Yak Tack is definitely the voracious and active reader who’s ingesting a lot of text elsewhere and taking the time to look up (and tack) new and unfamiliar words as they find them.

The app itself doesn’t do major lifting on the word discovery front — but it will serve up random encounters by showing you lists of latest tacks, most-tacked this month and words from any other users you follow. (There’s also a ‘last week’s most tacked words’ notification sent weekly.)

Taking a step back, one of the cruel paradoxes of the COVID-19 pandemic is that while it’s made education for kids harder, as schooling has often been forced to go remote, it’s given many stuck-at-home adults more time on their hands than usual to put their mind to learning new stuff — which explains why online language learning has seen an uplift over the past 12 months+.

And with the pandemic remaining the new dystopian ‘normal’ in most parts of the world, market conditions seem pretty conducive for a self-improvement tool like Yak Tack.

“We’ve seen a lot of good user growth during the pandemic, in large part because I think people are investing in themselves. I think that makes the timing right for an app like Yak Tack,” says Thomas.

Yak Tack is freemium, with free usage for five active tacks (and a queue system for any other words you add); or $5 a year for unlimited tacks and no queue.

“I figure the worldwide TAM [total addressable market] of English-learners is really big, and at that low price point Yak Tack is both accessible and is a huge business opportunity,” he adds.

Lingoda, an on-demand online language school with live instructors and Zoom classrooms, raises $68M

By Ingrid Lunden

A startup out of Berlin that’s built and grown a successful online language learning platform based around live teachers and virtual classrooms is announcing some funding today to continue expanding its business.

Lingoda, which connects students who want to learn a language — currently English, Spanish, French or German — with native-speaking teachers who run thousands of 24/7 live, immersion classes across a range of language levels, has picked up $68 million (€57 million). CEO Michael Shangkuan said the funding will be used both to continue enhancing its tech platform — with more tools for teachers and asynchronous supplementary material — and to widen its footprint in markets further afield such as the U.S.

The company currently has some 70,000 students, 1,400 teachers and runs more than 450,000 classes each year covering some 2,000 lessons. Shangkuan said that its revenue run rate is at 10x that of a year ago, and its customer base in that time grew 200% with students across 200 countries, so it is not a stranger to scaling as it doubles down on the model.

“We want the whole world to be learning languages,” Shangkuan said. “That is our vision.”

The funding is being led by Summit Partners, with participation from existing investor Conny Boersch, founder of Mountain Partners. The valuation is not being disclosed.

Founded in 2015 by two brothers — Fabian and Felix Wunderlich (now respectively CFO and head of sales) — Lingoda had only raised around $15 million before now, a mark of the company being pretty capital efficient.

“We only run classes that are profitable,” said Shangkuan (who is from the US, New Jersey specifically) in an interview. That being said, he added, “We can’t answer if we are profitable, but we’re not hugely unprofitable.” The market for language learning globally is around $50 billion so it’s a big opportunity despite the crowds of competition.

A lot of the innovation in edtech in recent years has been focused around automated tools to help people learn better in virtual environments: technology built with scale, better analytics or knowledge acquisition in mind.

So it’s interesting to come across edtech startups that may be using some of these same tools — the whole of Lingoda is based on Zoom, which it uses to run all of its classes online, and it’s keen to bring more analytics and other tech into the equation to improve learning between lessons, to help teachers get a better sense of students’ engagement and progress during class, and more — but are fundamentally also retaining one of the more traditional aspects of learning: humans teaching other humans.

This is very much by design, Shangkuan said. At first, the idea was to disrupt in-person language schools, but to the extent the startup ever considered whether it would pivot to more automated classes and cut the teachers out of the equation, it decided that it wasn’t worth it.

Shangkuan — himself a language enthusiast who moved to Germany specifically to immerse himself in a new country and language, from where he then proceeded to look for a job — noted that feedback from its students showed a strong inclination and preference for human teachers, with 97% saying that language learning in the Lingoda format has been more effective for them than the wave of language apps (which include the likes of Duolingo, Memrise, Busuu, Babbel, Rosetta and many more).

“For me as an entrepreneur trying to provide a great product, that is the bellwether, and why we are focused on delivering on our original vision,” he said, “one in which it does take teachers and real quality experiences and being able to repeat that online.” Indeed, it’s not the only tech startup that’s identified this model: VIPKid out of China and a number of others have also based learning around live teachers.

There are a number of reasons for why human teaching may be more suitable for language acquisition — starting with the fact that language is a living knowledge and so learning to speak it requires a pretty fundamental level of engagement from the learner.

Added to that is the fact that a language is almost never spoken in real life the same way it is in textbooks (or apps), so hearing from a range of people speaking it, as you do with the Lingoda format, which is not focused on matching a student with a single instructor (there is no Peloton-style following around instructors here), works very well.

On the subject of the teachers, it’s an interesting format that taps a little into the concept of the gig economy, although it’s not the same as being employed as a delivery driver or cleaner.

Lingoda notes that teachers set their own schedules and call classes themselves, rather than being ordered into them. Students meanwhile pay for courses along a sliding scale depending on various factors like whether you opt for group or one-to-one classes, how frequently you use the service, and which language you are learning, with per-class prices typically ranging between $6.75 and $14.30 depending on what you choose.

Students can request a teaching level if they want one: there is always a wide selection, but with dozens of levels between basic A1 and advanced C1 proficiency, if you don’t find what you want and order it, it can take between a day and a week for the class to materialise, typically with 1-5 students per class. In any case, a teacher needs to set up the class herself or himself, a format that keeps Lingoda closer to more standardized language-teaching labor models.

“We closely mirror the business model of traditional (brick and mortar) in-person language schools, where teachers work part time in compliance with local laws and have the flexibility to schedule their own classes,” a spokesperson said. “The main difference is that our model brings in-person classes online, but we are still following the same local guidelines.”

After students complete a course, Lingoda provides them with a certification. In English, you can take a recognized Cambridge assessment to verify your proficiency.

Lingoda’s growth is coming at an interesting moment in the world of online education, which has been one of the big juggernauts of the last year. Schools shutting down in-person learning, people spending more time at home, and the need for many of us to feel like we are doing something at a time of so many restrictions have all driven people to spend time learning online, which in turn has pushed edtech companies to expand and the technology that’s being used for the purpose to continue evolving.

To be clear, Lingoda has been around for years and was not hatched out of pandemic conditions: many of the learners that it has attracted are those who might have otherwise attended an in-person language class run by one of the many smaller schools you might come across in a typical city (London has hundreds of them), learning because they are planning to relocate or study abroad, because they have newly arrived in a country and need to learn the language to get by, or because they have to learn it for work.

But what’s been interesting in this last year is how services created for one kind of environment have been taken up in our “new normal.” The classes that Lingoda offers become a promise of a moment when we will be able to visit more places again, and hopefully order coffees, argue about jaywalkers, and chat with strangers here and there a little more easily.

“The language learning market is increasingly shifting to online offerings that provide consumers with a more convenient, flexible and cost-effective way to improve their foreign language skills,” said Matthias Allgaier, MD at Summit Partners, in a statement. “We believe Lingoda has developed one of the most comprehensive and effective online language learning solutions globally and is positioned to benefit from the ongoing and accelerating trend of digitization in education. We are thrilled to partner with the entire Lingoda team, and we are excited about the future for this business.” Allgaier is joining Lingoda’s board with this round.

Updated with an additional investor and a slight change in funding amount due to conversion rates.

Docugami’s new model for understanding documents cuts its teeth on NASA archives

By Devin Coldewey

You hear so much about data these days that you might forget that a huge amount of the world runs on documents: a veritable menagerie of heterogeneous files and formats holding enormous value yet incompatible with the new era of clean, structured databases. Docugami plans to change that with a system that intuitively understands any set of documents and intelligently indexes their contents — and NASA is already on board.

If Docugami’s product works as planned, anyone will be able to take piles of documents accumulated over the years and near-instantly convert them to the kind of data that’s actually useful to people.

Because it turns out that running just about any business ends up producing a ton of documents. Contracts and briefs in legal work, leases and agreements in real estate, proposals and releases in marketing, medical charts, etc, etc. Not to mention the various formats: Word docs, PDFs, scans of paper printouts of PDFs exported from Word docs, and so on.

Over the last decade there’s been an effort to corral this problem, but movement has largely been on the organizational side: put all your documents in one place, share and edit them collaboratively. Understanding the document itself has pretty much been left to the people who handle them, and for good reason — understanding documents is hard!

Think of a rental contract. We humans understand when the renter is named as Jill Jackson, that later on, “the renter” also refers to that person. Furthermore, in any of a hundred other contracts, we understand that the renters in those documents are the same type of person or concept in the context of the document, but not the same actual person. These are surprisingly difficult concepts for machine learning and natural language understanding systems to grasp and apply. Yet if they could be mastered, an enormous amount of useful information could be extracted from the millions of documents squirreled away around the world.

What’s up, .docx?

Docugami founder Jean Paoli says they’ve cracked the problem wide open, and while it’s a major claim, he’s one of few people who could credibly make it. Paoli was a major figure at Microsoft for decades, and among other things helped create the XML format — you know all those files that end in x, like .docx and .xlsx? Paoli is at least partly to thank for them.

“Data and documents aren’t the same thing,” he told me. “There’s a thing you understand, called documents, and there’s something that computers understand, called data. Why are they not the same thing? So my first job [at Microsoft] was to create a format that can represent documents as data. I created XML with friends in the industry, and Bill accepted it.” (Yes, that Bill.)

The formats became ubiquitous, yet 20 years later the same problem persists, having grown in scale with the digitization of industry after industry. But for Paoli the solution is the same. At the core of XML was the idea that a document should be structured almost like a webpage: boxes within boxes, each clearly defined by metadata — a hierarchical model more easily understood by computers.
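To make the “boxes within boxes” idea concrete, here is a tiny, invented lease fragment expressed as XML and read with Python’s standard library. The element names are made up for illustration, but they show how nesting plus metadata turns a document into data a program can query.

```python
import xml.etree.ElementTree as ET

# A made-up lease fragment: every piece of the document is wrapped in an
# element that names what it is, so a program can navigate the hierarchy.
lease_xml = """
<lease>
  <parties>
    <renter>Jill Jackson</renter>
    <landlord>Acme Property LLC</landlord>
  </parties>
  <terms>
    <rent currency="USD">1500</rent>
    <duration unit="months">12</duration>
  </terms>
</lease>
"""

root = ET.fromstring(lease_xml)
renter = root.findtext("parties/renter")
rent = root.find("terms/rent")
print(renter)                            # Jill Jackson
print(rent.text, rent.get("currency"))   # 1500 USD
```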

Illustration showing a document corresponding to pieces of another document.

Image Credits: Docugami

“A few years ago I drank the AI kool-aid, got the idea to transform documents into data. I needed an algorithm that navigates the hierarchical model, and they told me that the algorithm you want does not exist,” he explained. “The XML model, where every piece is inside another, and each has a different name to represent the data it contains — that has not been married to the AI model we have today. That’s just a fact. I hoped the AI people would go and jump on it, but it didn’t happen.” (“I was busy doing something else,” he added, to excuse himself.)

The lack of compatibility with this new model of computing shouldn’t come as a surprise — every emerging technology carries with it certain assumptions and limitations, and AI has focused on a few other, equally crucial areas like speech understanding and computer vision. The approach taken there doesn’t match the needs of systematically understanding a document.

“Many people think that documents are like cats. You train the AI to look for their eyes, for their tails… documents are not like cats,” he said.

It sounds obvious, but it’s a real limitation: advanced AI methods like segmentation, scene understanding, multimodal context, and such are all a sort of hyper-advanced cat detection that has moved beyond cats to detect dogs, car types, facial expressions, locations, etc. Documents are too different from one another, or in other ways too similar, for these approaches to do much more than roughly categorize them.

And as for language understanding, it’s good in some ways but not in the ways Paoli needed. “They’re working sort of at the English language level,” he said. “They look at the text but they disconnect it from the document where they found it. I love NLP people, half my team is NLP people — but NLP people don’t think about business processes. You need to mix them with XML people, people who understand computer vision, then you start looking at the document at a different level.”

Docugami in action

Illustration showing a person interacting with a digital document.

Image Credits: Docugami

Paoli’s goal couldn’t be reached by adapting existing tools (beyond mature primitives like optical character recognition), so he assembled his own private AI lab, where a multi-disciplinary team has been tinkering away for about two years.

“We did core science, self-funded, in stealth mode, and we sent a bunch of patents to the patent office,” he said. “Then we went to see the VCs, and Signalfire basically volunteered to lead the seed round at $10 million.”

Coverage of the round didn’t really get into the actual experience of using Docugami, but Paoli walked me through the platform with some live documents. I wasn’t given access myself and the company wouldn’t provide screenshots or video, saying it is still working on the integrations and UI, so you’ll have to use your imagination… but if you picture pretty much any enterprise SaaS service, you’re 90 percent of the way there.

As the user, you upload any number of documents to Docugami, from a couple dozen to hundreds or thousands. These enter a machine understanding workflow that parses the documents, whether they’re scanned PDFs, Word files, or something else, into an XML-esque hierarchical organization unique to the contents.

“Say you’ve got 500 documents, we try to categorize it in document sets, these 30 look the same, those 20 look the same, those 5 together. We group them with a mix of hints coming from how the document looked, what it’s talking about, what we think people are using it for, etc,” said Paoli. Other services might be able to tell the difference between a lease and an NDA, but documents are too diverse to slot into pre-trained ideas of categories and expect it to work out. Every set of documents is potentially unique, and so Docugami trains itself anew every time, even for a set of one. “Once we group them, we understand the overall structure and hierarchy of that particular set of documents, because that’s how documents become useful: together.”
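Docugami hasn’t said how its grouping step actually works, so treat the following only as a rough mental model: a generic way to cluster look-alike documents with off-the-shelf tools. The toy snippets and the two-cluster choice are invented for illustration, and on such a tiny corpus the split is not guaranteed to be clean.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in for "500 uploaded documents": a few snippets of two loose types.
docs = [
    "This lease agreement is made between the renter and the landlord...",
    "Rental contract: the renter agrees to pay monthly rent to the landlord...",
    "Mutual non-disclosure agreement between the disclosing and receiving party...",
    "This NDA covers confidential information shared by either party...",
]

# Represent each document by word statistics, then group similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # documents with the same label land in the same "document set"
```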

Illustration showing a document being turned into a report and a spreadsheet.

Image Credits: Docugami

That doesn’t just mean it picks up on header text and creates an index, or lets you search for words. The data that is in the document, for example who is paying whom, how much and when, and under what conditions, all that becomes structured and editable within the context of similar documents. (It asks for a little input to double check what it has deduced.)

It can be a little hard to picture, but now just imagine that you want to put together a report on your company’s active loans. All you need to do is highlight the information that’s important to you in an example document — literally, you just click “Jane Roe” and “$20,000” and “5 years” anywhere they occur — and then select the other documents you want to pull corresponding information from. A few seconds later you have an ordered spreadsheet with names, amounts, dates, anything you wanted out of that set of documents.

All this data is meant to be portable too, of course — there are integrations planned with various other common pipes and services in business, allowing for automatic reports, alerts if certain conditions are reached, automated creation of templates and standard documents (no more keeping an old one around with underscores where the principals go).

Remember, this is all half an hour after you uploaded them in the first place, no labeling or pre-processing or cleaning required. And the AI isn’t working from some preconceived notion or format of what a lease document looks like. It’s learned all it needs to know from the actual docs you uploaded — how they’re structured, where things like names and dates figure relative to one another, and so on. And it works across verticals and uses an interface anyone can figure out in a few minutes. Whether you’re in healthcare data entry or construction contract management, the tool should make sense.

The web interface where you ingest and create new documents is one of the main tools, while the other lives inside Word. There Docugami acts as a sort of assistant that’s fully aware of every other document of whatever type you’re in, so you can create new ones, fill in standard information, comply with regulations, and so on.

Okay, so processing legal documents isn’t exactly the most exciting application of machine learning in the world. But I wouldn’t be writing this (at all, let alone at this length) if I didn’t think this was a big deal. This sort of deep understanding of document types can be found here and there among established industries with standard document types (such as police or medical reports), but have fun waiting until someone trains a bespoke model for your kayak rental service. But small businesses have just as much value locked up in documents as large enterprises — and they can’t afford to hire a team of data scientists. And even the big organizations can’t do it all manually.

NASA’s treasure trove

Image Credits: NASA

The problem is extremely difficult, yet to humans seems almost trivial. You or I could glance through 20 similar documents and compile a list of names and amounts easily, perhaps even in less time than it takes for Docugami to crawl them and train itself.

But AI, after all, is meant to imitate and exceed human capacity, and it’s one thing for an account manager to do monthly reports on 20 contracts — quite another to do a daily report on a thousand. Yet Docugami accomplishes the latter and former equally easily — which is where it fits into both the enterprise system, where scaling this kind of operation is crucial, and into NASA, which is buried under a backlog of documentation from which it hopes to glean clean data and insights.

If there’s one thing NASA’s got a lot of, it’s documents. Its reasonably well maintained archives go back to its founding, and many important ones are available by various means — I’ve spent many a pleasant hour perusing its cache of historical documents.

But NASA isn’t looking for new insights into Apollo 11. Through its many past and present programs, solicitations, grant programs, budgets, and of course engineering projects, it generates a huge amount of documents — being, after all, very much a part of the federal bureaucracy. And as with any large organization with its paperwork spread over decades, NASA’s document stash represents untapped potential.

Expert opinions, research precursors, engineering solutions, and a dozen more categories of important information are sitting in files searchable perhaps by basic word matching but otherwise unstructured. Wouldn’t it be nice for someone at JPL to get it in their head to look at the evolution of nozzle design, and within a few minutes have a complete and current list of documents on that topic, organized by type, date, author, and status? What about the patent advisor who needs to provide a NIAC grant recipient information on prior art — shouldn’t they be able to pull those old patents and applications up with more specificity than a keyword search allows?

The NASA SBIR grant, awarded last summer, isn’t for any specific work, like collecting all the documents of such and such a type from Johnson Space Center or something. It’s an exploratory or investigative agreement, as many of these grants are, and Docugami is working with NASA scientists on the best ways to apply the technology to their archives. (One of the best applications may be to the SBIR and other small business funding programs themselves.)

Another SBIR grant with the NSF differs in that, while at NASA the team is looking into better organizing tons of disparate types of documents with some overlapping information, at NSF they’re aiming to better identify “small data.” “We are looking at the tiny things, the tiny details,” said Paoli. “For instance, if you have a name, is it the lender or the borrower? The doctor or the patient name? When you read a patient record, penicillin is mentioned, is it prescribed or prohibited? If there’s a section called allergies and another called prescriptions, we can make that connection.”

“Maybe it’s because I’m French”

When I pointed out the rather small budgets involved with SBIR grants and how his company couldn’t possibly survive on these, he laughed.

“Oh, we’re not running on grants! This isn’t our business. For me, this is a way to work with scientists, with the best labs in the world,” he said, while noting many more grant projects were in the offing. “Science for me is a fuel. The business model is very simple – a service that you subscribe to, like Docusign or Dropbox.”

The company is only just now beginning its real business operations, having made a few connections with integration partners and testers. But over the next year it will expand its private beta and eventually open it up — though there’s no timeline on that just yet.

“We’re very young. A year ago we were like five, six people, now we went and got this $10M seed round and boom,” said Paoli. But he’s certain that this is a business that will be not just lucrative but will represent an important change in how companies work.

“People love documents. Maybe it’s because I’m French,” he said, “but I think text and books and writing are critical — that’s just how humans work. We really think people can help machines think better, and machines can help people think better.”

NLPCloud.io helps devs add language processing smarts to their apps

By Natasha Lomas

While visual ‘no code‘ tools are helping businesses get more out of computing without the need for armies of in-house techies to configure software on behalf of other staff, access to the most powerful tech tools — at the ‘deep tech’ AI coal face — still requires some expert help (and/or costly in-house expertise).

This is where bootstrapping French startup, NLPCloud.io, is plying a trade in MLOps/AIOps — or ‘compute platform as a service’ (being as it runs the queries on its own servers) — with a focus on natural language processing (NLP), as its name suggests.

Developments in artificial intelligence have, in recent years, led to impressive advances in the field of NLP — a technology that can help businesses scale their capacity to intelligently grapple with all sorts of communications by automating tasks like named entity recognition, sentiment analysis, text classification, summarization, question answering, and part-of-speech tagging, freeing up (human) staff to focus on more complex/nuanced work. (Although it’s worth emphasizing that the bulk of NLP research has focused on the English language — meaning that’s where this tech is most mature; so associated AI advances are not universally distributed.)

Production ready (pre-trained) NLP models for English are readily available ‘out of the box’. There are also dedicated open source frameworks offering help with training models. But businesses wanting to tap into NLP still need to have the DevOps resource and chops to implement NLP models.

NLPCloud.io is catering to businesses that don’t feel up to the implementation challenge themselves — offering a “production-ready NLP API” with the promise of “no DevOps required”.

Its API is based on Hugging Face and spaCy open-source models. Customers can either choose to use ready-to-use pre-trained models (it selects the “best” open source models; it does not build its own); or they can upload custom models developed internally by their own data scientists — which it says is a point of differentiation vs SaaS services such as Google Natural Language (which uses Google’s ML models) or Amazon Comprehend and MonkeyLearn.
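For a sense of what those underlying open-source building blocks look like when used directly (this is not NLPCloud.io’s code, and the model choices below are simply common defaults), a few lines of spaCy and Hugging Face cover two of the tasks mentioned above.

```python
import spacy
from transformers import pipeline

# spaCy: named entity recognition with a small English model
# (install first with: python -m spacy download en_core_web_sm)
nlp = spacy.load("en_core_web_sm")
doc = nlp("Vault Platform raised an $8.2M Series A led by Gradient Ventures.")
print([(ent.text, ent.label_) for ent in doc.ents])

# Hugging Face: sentiment analysis with the default pre-trained pipeline
sentiment = pipeline("sentiment-analysis")
print(sentiment("The onboarding was painless and support answered within minutes."))
```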

NLPCloud.io says it wants to democratize NLP by helping developers and data scientists deliver these projects “in no time and at a fair price”. (It has a tiered pricing model based on requests per minute, which starts at $39pm and ranges up to $1,199pm, at the enterprise end, for one custom model running on a GPU. It does also offer a free tier so users can test models at low request velocity without incurring a charge.)

“The idea came from the fact that, as a software engineer, I saw many AI projects fail because of the deployment to production phase,” says sole founder and CTO Julien Salinas. “Companies often focus on building accurate and fast AI models but today more and more excellent open-source models are available and are doing an excellent job… so the toughest challenge now is being able to efficiently use these models in production. It takes AI skills, DevOps skills, programming skill… which is why it’s a challenge for so many companies, and which is why I decided to launch NLPCloud.io.”

The platform launched in January 2021 and now has around 500 users, including 30 who are paying for the service. The startup, which is based in Grenoble, in the French Alps, is a team of three for now, plus a couple of independent contractors. (Salinas says he plans to hire five people by the end of the year.)

“Most of our users are tech startups but we also start having a couple of bigger companies,” he tells TechCrunch. “The biggest demand I’m seeing is both from software engineers and data scientists. Sometimes it’s from teams who have data science skills but don’t have DevOps skills (or don’t want to spend time on this). Sometimes it’s from tech teams who want to leverage NLP out-of-the-box without hiring a whole data science team.”

“We have very diverse customers, from solo startup founders to bigger companies like BBVA, Mintel, Senuto… in all sorts of sectors (banking, public relations, market research),” he adds.

Use cases of its customers include lead generation from unstructured text (such as web pages), via named entities extraction; and sorting support tickets based on urgency by conducting sentiment analysis.

Content marketers are also using its platform for headline generation (via summarization). While text classification capabilities are being used for economic intelligence and financial data extraction, per Salinas.

He says his own experience as a CTO and software engineer working on NLP projects at a number of tech companies led him to spot an opportunity in the challenge of AI implementation.

“I realized that it was quite easy to build acceptable NLP models thanks to great open-source frameworks like spaCy and Hugging Face Transformers but then I found it quite hard to use these models in production,” he explains. “It takes programming skills in order to develop an API, strong DevOps skills in order to build a robust and fast infrastructure to serve NLP models (AI models in general consume a lot of resources), and also data science skills of course.

“I tried to look for ready-to-use cloud solutions in order to save weeks of work but I couldn’t find anything satisfactory. My intuition was that such a platform would help tech teams save a lot of time, sometimes months of work for the teams who don’t have strong DevOps profiles.”

“NLP has been around for decades but until recently it took whole teams of data scientists to build acceptable NLP models. For a couple of years, we’ve made amazing progress in terms of accuracy and speed of the NLP models. More and more experts who have been working in the NLP field for decades agree that NLP is becoming a ‘commodity’,” he goes on. “Frameworks like spaCy make it extremely simple for developers to leverage NLP models without having advanced data science knowledge. And Hugging Face’s open-source repository for NLP models is also a great step in this direction.

“But having these models run in production is still hard, and maybe even harder than before as these brand new models are very demanding in terms of resources.”
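To illustrate roughly what “using a model in production” involves at its simplest, here is a minimal sketch that wraps a spaCy model in an HTTP API with FastAPI. It is illustrative only, not NLPCloud.io’s stack, and it deliberately ignores the scaling, batching and GPU management work that Salinas argues is the hard part.

```python
# Minimal sketch of exposing an NLP model over HTTP.
# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
import spacy
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
nlp = spacy.load("en_core_web_sm")  # loaded once at startup, reused per request


class TextIn(BaseModel):
    text: str


@app.post("/entities")
def entities(req: TextIn):
    doc = nlp(req.text)
    return {"entities": [{"text": e.text, "label": e.label_} for e in doc.ents]}
```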

The models NLPCloud.io offers are picked for performance — where “best” means it has “the best compromise between accuracy and speed”. Salinas also says they are paying mind to context, given NLP can be used for diverse use cases — hence proposing a number of models, so as to be able to adapt to a given use.

“Initially we started with models dedicated to entities extraction only but most of our first customers also asked for other use cases too, so we started adding other models,” he notes, adding that they will continue to add more models from the two chosen frameworks — “in order to cover more use cases, and more languages”.

SpaCy and Hugging Face, meanwhile, were chosen to be the source for the models offered via its API based on their track record as companies, the NLP libraries they offer and their focus on production-ready frameworks — with the combination allowing NLPCloud.io to offer a selection of models that are fast and accurate, working within the bounds of respective trade-offs, according to Salinas.

“SpaCy is developed by a solid company in Germany called Explosion.ai. This library has become one of the most used NLP libraries among companies who want to leverage NLP in production ‘for real’ (as opposed to academic research only). The reason is that it is very fast, has great accuracy in most scenarios, and is an opinionated framework, which makes it very simple to use by non-data scientists (the tradeoff is that it gives fewer customization possibilities),” he says.

“Hugging Face is an even more solid company that recently raised $40M for a good reason: They created a disruptive NLP library called ‘transformers’ that improves a lot the accuracy of NLP models (the tradeoff is that it is very resource intensive though). It gives the opportunity to cover more use cases like sentiment analysis, classification, summarization… In addition to that, they created an open-source repository where it is easy to select the best model you need for your use case.”

While AI is advancing at a clip within certain tracks — such as NLP for English — there are still caveats and potential pitfalls attached to automating language processing and analysis, with the risk of getting stuff wrong or worse. AI models trained on human-generated data have, for example, been shown to reflect embedded biases and prejudices of the people who produced the underlying data.

Salinas agrees NLP can sometimes face “concerning bias issues”, such as racism and misogyny. But he expresses confidence in the models they’ve selected.

“Most of the time it seems [bias in NLP] is due to the underlying data used to train the models. It shows we should be more careful about the origin of this data,” he says. “In my opinion the best solution in order to mitigate this is that the community of NLP users should actively report something inappropriate when using a specific model so that this model can be paused and fixed.”

“Even if we doubt that such a bias exists in the models we’re proposing, we do encourage our users to report such problems to us so we can take measures,” he adds.

 

Discover how Duolingo started with CEO Luis von Ahn at Disrupt 2021

By Natasha Mascarenhas

Before Luis von Ahn co-founded Duolingo, a gamified language-learning app used by hundreds of millions around the world, he was fixated on squiggly letters. The entrepreneur was a co-inventor of CAPTCHA and reCAPTCHA, or those security prompts you get while browsing the web to verify if you are a human or if you are a robot.

And while von Ahn often jokes that his early inventions were considered annoying (it causes friction when consumers have to decipher letters before logging into their email), reCAPTCHA was impressive enough that Google scooped it up. Since then, von Ahn has moved on to creating another iconic company, this time one that consumers are happy to see pop up on their screens: Duolingo.

Von Ahn is joining us at TechCrunch Disrupt 2021 this September 21-23 to talk about the making of a gamified edtech unicorn. The pre-IPO company started as a grad school project, and over the years has become a behemoth enjoyed by more than 500 million users.

We’ll get into how von Ahn leveraged crowdsourced translation to grow the app, its roller coaster route to monetization and, of course, the iconic — and often sassy — green owl, Duo. We’ll also discuss the broader edtech market for language learning, how the pandemic impacted business and why Duolingo sees opportunity in disrupting not just language, but the tests associated with it, as well.

While part of Duolingo fits into the edtech category, some see the startup as it currently stands as a consumer subscription product with a learning hook. Von Ahn can clear the air on what Duolingo is truly solving for — and what’s ahead for the business.

Von Ahn first presented Duolingo on the Disrupt stage nine years ago, with a website and goal to teach 100 million people a new language. Now, nearly a decade later, he’ll be coming back to explain what happened next. He doesn’t hold back — so you don’t want to miss this.

Disrupt 2021 runs September 21-23 and will be 100% virtual this year. Get your front-row seat to see von Ahn and many, many more for less than $100! Secure your seat now.
