
Krisp snags $5M A round as demand grows for its voice-isolating algorithm

By Devin Coldewey

Krisp’s smart noise suppression tech, which silences ambient sounds and isolates your voice for calls, arrived just in time. The company got out in front of the global shift to virtual presence, turning early niche traction into real customers and attracting a shiny new $5 million Series A funding round to expand and diversify its timely offering.

We first met Krisp back in 2018 when it emerged from UC Berkeley’s Skydeck accelerator. The company was an early one in the big surge of AI startups, but with a straightforward use case and obviously effective tech it was hard to be skeptical about.

Krisp applies a machine learning system to audio in real time that has been trained on what is and isn’t the human voice. What isn’t a voice gets carefully removed even during speech, and what remains sounds clearer. That’s pretty much it! There’s very little latency (15 milliseconds is the claim) and a modest computational overhead, meaning it can work on practically any device, especially ones with AI acceleration units like most modern smartphones.
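
Krisp hasn’t published its model, but the general shape of real-time, frame-based noise suppression can be sketched in a few lines. The sketch below is purely illustrative — the 10 ms frame size, the overlap-add pipeline and the placeholder voice_mask heuristic (standing in for a trained network) are assumptions, not Krisp’s actual implementation:

```python
# Minimal sketch of frame-based noise suppression (NOT Krisp's code).
# A trained network would predict the per-bin voice mask; a crude
# heuristic stands in here so the pipeline runs end to end.
import numpy as np

FRAME = 480   # 10 ms at 48 kHz -- illustrative, not Krisp's real frame size
HOP = 240

def voice_mask(mag: np.ndarray) -> np.ndarray:
    """Placeholder for a learned model: returns a 0..1 'voice probability'
    per frequency bin. A real system would run a neural net here."""
    noise_floor = np.percentile(mag, 20)          # crude noise estimate
    return np.clip((mag - noise_floor) / (mag + 1e-8), 0.0, 1.0)

def suppress(samples: np.ndarray) -> np.ndarray:
    """Process audio frame by frame: mask out non-voice energy, overlap-add."""
    window = np.hanning(FRAME)
    out = np.zeros(len(samples))
    for start in range(0, len(samples) - FRAME, HOP):
        frame = samples[start:start + FRAME] * window
        spec = np.fft.rfft(frame)
        cleaned = spec * voice_mask(np.abs(spec))  # keep voice-like energy only
        out[start:start + FRAME] += np.fft.irfft(cleaned) * window
    return out

if __name__ == "__main__":
    t = np.linspace(0, 1, 48_000, endpoint=False)
    noisy = np.sin(2 * np.pi * 220 * t) + 0.3 * np.random.randn(t.size)  # "voice" tone + noise
    cleaned = suppress(noisy)
    print("frames processed:", (len(noisy) - FRAME) // HOP)
```

Processing audio in roughly 10-millisecond chunks is what makes a latency budget on the order of Krisp’s claimed 15 milliseconds plausible: the system only ever needs to buffer one short frame before it can emit cleaned audio.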

The company began by offering its standalone software for free, with a paid tier that removed time limits. It also shipped integrated into the popular social chat app Discord. But the real business is, unsurprisingly, in enterprise.

“Early on our revenue was all pro, but in December we started onboarding enterprises. COVID has really accelerated that plan,” explained Davit Baghdasaryan, co-founder and CEO of Krisp. “In March, our biggest customer was a large tech company with 2,000 employees — and they bought 2,000 licenses, because everyone is remote. Gradually enterprise is taking over, because we’re signing up banks, call centers and so on. But we think Krisp will still be consumer-first, because everyone needs that, right?”

Now even more large companies have signed on, including one call center with some 40,000 employees. Baghdasaryan says the company went from 0 to 600 paying enterprises, and $0 to $4M annual recurring revenue in a single year, which probably makes the investment — by Storm Ventures, Sierra Ventures, TechNexus and Hive Ventures — look like a pretty safe one.

It’s a big win for the Krisp team, which is split between the U.S. and Armenia, where the company was founded, and a validation of a global approach to staffing — world-class talent isn’t just to be found in California, New York, Berlin and other tech centers, but in smaller countries that don’t have the benefit of local hype and investment infrastructure.

Funding is another story, of course, but having raised money the company is now working to expand its products and team. Krisp’s next move is essentially to monitor and present the metadata of conversation.

“The next iteration will tell you not just about noise, but give you real time feedback on how you are performing as a speaker,” Baghdasaryan explained. Not in the Toastmasters sense, exactly, but haven’t you ever wondered how much you actually spoke during some call, or whether you interrupted or were interrupted by others, and so on?

“Speaking is a skill that people can improve. Think Grammar.ly for voice and video,” Baghdasaryan ventured. “It’s going to be subtle about how it gives that feedback to you. When someone is speaking they may not necessarily want to see that. But over time we’ll analyze what you say, give you hints about vocabulary, how to improve your speaking abilities.”

Since architecturally Krisp is privy to all audio going in and out, it can fairly easily collect this data. But don’t worry — like the company’s other products, this will be entirely private and on-device. No cloud required.

“We’re very opinionated here: Ours is a company that never sends data to its servers,” said Baghdasaryan. “We’re never exposed to it. We take extra steps to create and optimize our tech so the audio never leaves the device.”

That should be reassuring for privacy wonks who are suspicious of sending all their conversations through a third party to be analyzed. But after all, the type of advice Krisp is considering can be done without really “understanding” what is said, which also limits its scope. It won’t be coaching you into a modern Cicero, but it might help you speak more consistently or let you know when you’re taking up too much time.

For the immediate future, though, Krisp is still focused on improving its noise-suppression software, which you can download for free here.

Google updates G Suite for mobile with dark mode support, Smart Compose for Docs and more

By Frederic Lardinois

Google today announced a major update to its mobile G Suite productivity apps.

Among these updates are the addition of a dark theme for Docs, Sheets and Slides, as well as the addition of Google’s Smart Compose technology to Docs on mobile and the ability to edit Microsoft Office documents without having to convert them. Other updates include a new vertically scrollable slide viewing experience in Slides, link previews and a new user interface for comments and action items. You can now also respond to comments on your documents directly from Gmail.

For the most part, these new features are now available on Android (or will be in the next few weeks), with iOS to follow later, though Smart Compose is immediately available on both, while link previews are actually making their debut on iOS, with Android coming later.

Most of these additions simply bring existing desktop features to mobile, which has generally been the way Google has been rolling out new G Suite tools.

The new dark theme will surely get some attention, given that it has been a long time coming and that users now essentially expect this in their mobile apps. Google argues that it won’t just be easier on your eyes but that it can also “keep your battery alive longer” (though only phones with an OLED display will really see a difference there).

Image Credits: Google

You’re likely familiar with Smart Compose at this time, which is already available in Gmail and Docs on the web. Like everywhere else, it’ll try to finish your sentence for you, though given that typing is still more of a hassle on mobile, it’s surely a welcome addition for those who regularly have to write or edit documents on the go.

Even if your business is fully betting on G Suite, chances are somebody will still send you an Office document. On the web, G Suite could already handle these documents without any conversion. This same technology is now coming to mobile as well. It’s a handy feature, though I’m mostly surprised this wasn’t available on mobile before.

As for the rest of the new features, the one worth calling out is the ability to respond to comments directly from Gmail. Last year, Google rolled out dynamic email on the web. I’m not sure I’ve really seen too many of these dynamic emails — which use AMP to bring dynamic content to your inbox — in the wild, but Google is now using this feature for Docs. “Instead of receiving individual email notifications when you’re mentioned in a comment in Docs, Sheets, or Slides, you’ll now see an up-to-date comment thread in Gmail, and you’ll be able to reply or resolve the comment, directly within the message,” the company explains.

 

Sight Diagnostics raises $71M Series D for its blood analyzer

By Frederic Lardinois

Sight Diagnostics, the Israel-based health-tech company behind the FDA-cleared OLO blood analyzer, today announced that it has raised a $71 million Series D round with participation from Koch Disruptive Technologies, Longliv Ventures (which led its Series C round) and crowd-funding platform OurCrowd. With this, the company has now raised a total of $124 million, though it declined to share its current valuation.

With a founding team that used to work at Mobileye, among other companies, Sight made an early bet on using machine vision to analyze blood samples and provide a full blood count comparable to existing lab tests within minutes. The company received FDA clearance late last year, something that surely helped clear the way for this additional round of funding.

Image Credits: Sight Diagnostics

“Historically, blood tests were done by humans observing blood under a microscope. That was the case for maybe 200 years,” Sight CEO and co-founder Yossi Pollak told me. “About 60 years ago, a new technology called FCM — or flow cytometry — started to be used on large volume of blood from venous samples to do it automatically. In a sense, we are going back to the first approach, we just replaced the human eye behind the microscope with machine vision.”

Pollak noted that the tests generate about 60 gigabytes of information (a lot of that is the images, of course) and that he believes that the complete blood count is only a first step. One of the diseases it is looking to diagnose is COVID-19. To do so, the company has placed devices in hospitals around the world to see if it can gather the data to detect anomalies that may indicate the severity of some of the aspects of the disease.

“We just kind of scratched the surface of the ability of AI to help with blood diagnostics,” said Pollak. “Specifically now, there’s so much value around COVID in decentralizing diagnostics and blood tests. Think keeping people — COVID-negative or -positive — outside of hospitals to reduce the busyness of hospitals and reduce the risk for contamination for cancer patients and a lot of other populations that require constant complete blood counts. I think there’s a lot of potential and a lot of value that we can bring specifically now to different markets and we are definitely looking into additional applications beyond [complete blood count] and also perfecting our product.”

So far, Sight Diagnostics has applied for 20 patents, eight of which have been issued. And while machine learning is obviously at the core of what the company does — with the models running on the OLO machine and not in the cloud — Pollak also stressed that the team has made breakthroughs in sample preparation, allowing the device to automatically prepare samples for analysis.

Image Credits: Sight Diagnostics

Pollak stressed that the company focused on the U.S. market with this funding round, which makes sense, given that it was still looking for its FDA clearance. He also noted that this marks Koch Disruptive Technologies’ third investment in Israel, with the other two also being healthcare startups.

“KDT’s investment in Sight is a testament to the company’s disruptive technology that we believe will fundamentally change the way blood diagnostic work is done,” said Chase Koch, President of Koch Disruptive Technologies. “We’re proud to partner with the Sight team, which has done incredible work innovating this technology to transform modern healthcare and provide greater efficiency and safety for patients, healthcare workers, and hospitals worldwide.”

The company now has about 100 employees, mostly in R&D, with offices in London and New York.


Microsoft, Amazon back a SoCal company making microchips specifically for voice-based apps

By Jonathan Shieber

Microsoft’s venture capital fund, M12 Ventures, has led a slew of strategic corporate investors backing a new chip developer out of Southern California called Syntiant, which makes semiconductors for voice recognition and speech-based applications.

“We started out to build a new type of processor for machine learning, and voice is our first application,” says Syntiant chief executive Kurt Busch. “We decided to build a chip for always-on battery-powered devices.”

Those chips need a different kind of processor than traditional chipsets, says Busch. Traditional compute is about logic, and deep learning is about memory access; traditional microchip designs also don’t perform as well when it comes to parallel processing of information.

Syntiant claims that its chips are two orders of magnitude more efficient than traditional processors, thanks to a data flow architecture built for deep learning, according to Busch.

It’s that efficiency that attracted investors, including M12, Microsoft Corp.’s venture fund; the Amazon Alexa Fund; Applied Ventures, the investment arm of Applied Materials; Intel Capital; Motorola Solutions Venture Capital; and Robert Bosch Venture Capital.

These investment firms represent some of the technology industry’s top chip makers and software developers, and they’re pooling their resources to support Syntiant’s Irvine, California-based operations.


Image Credits: Bryce Durbin / TechCrunch

“Syntiant aligns perfectly with our mission to support companies that fuel voice technology innovation,” said Paul Bernard, director of the Alexa Fund at Amazon. “Its technology has enormous potential to drive continued adoption of voice services like Alexa, especially in mobile scenarios that require devices to balance low power with continuous, high-accuracy voice recognition. We look forward to working with Syntiant to extend its speech technology to new devices and environments.” 

Syntiant’s first device measures 1.4 by 1.8 millimeters and draws 140 microwatts of power. In some applications, Syntiant’s chips can run for a year on a single coin cell.
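
A quick back-of-the-envelope check makes the coin-cell claim concrete. The cell capacities below are assumed typical values, not Syntiant’s spec, and real lifetimes depend heavily on duty cycle, so treat this as a rough bound rather than a datasheet figure:

```python
# Back-of-the-envelope energy budget for an always-on 140 uW chip.
# Coin-cell capacities are assumed typical values, not Syntiant's numbers.
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_of_runtime(capacity_mah: float, voltage_v: float, draw_w: float) -> float:
    energy_j = capacity_mah / 1000 * voltage_v * 3600   # mAh at a given voltage -> joules
    return energy_j / (draw_w * SECONDS_PER_YEAR)

for name, mah in [("CR2032 (~225 mAh)", 225), ("CR2477 (~1000 mAh)", 1000)]:
    print(f"{name}: {years_of_runtime(mah, 3.0, 140e-6):.2f} years at a continuous 140 uW")
```

At a continuous 140 microwatts, a small CR2032 lasts roughly half a year in this model, while a larger coin cell comfortably clears a year — which is why duty cycling and cell choice matter as much as the headline power figure.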

“Syntiant’s neural network technology and its memory-centric architecture fits well with Applied Materials’ core expertise in materials engineering as we enable radical leaps in device performance and novel materials-enabled memory technologies,” said Michael Stewart, principal at Applied Ventures, the venture capital arm of Applied Materials, Inc. “Syntiant’s ultra-low-power neural decision processors have the potential to create growth in the chip marketplace and provide an effective solution for today’s demanding voice and video applications.” 

So far, 80 customers are working with Syntiant to integrate the company’s chips into their products. There are a few dozen companies in the design stage and the company has already notched design wins for products ranging from cell phones and smart speakers to remote controls, hearing aids, laptops and monitors. Already the company has shipped its first million units.  

“We expect to scale that by 10x by the end of this year,” says Busch. 

Syntiant’s chipsets are designed specifically to handle wake words and voice commands, which means that users can add voice recognition features and commands unique to their particular voice, Busch says.

Initially backed by venture firms including Atlantic Bridge, Miramar and Alpha Edison, Syntiant raised its first round of funding in October 2017. The company has raised a total of $65 million to date, according to Busch.

“Syntiant’s architecture is well-suited for the computational patterns and inherent parallelism of deep neural networks,” said Samir Kumar, an investor with M12 and new director on the Syntiant board. “We see great potential in its ability to enable breakthroughs in power performance for AI processing in IoT [Internet of things].” 

 

Autonomous vehicle reporting data is driving AV innovation right off the road

By Walter Thompson
Grace Strickland Contributor
Grace Strickland is an attorney with more than six years of experience representing technology clients in cutting-edge industries, including autonomous transportation.
John McNelis Contributor
John McNelis is an intellectual property partner and leader of the autonomous transportation and shared mobility practice at Fenwick & West; he also chairs the California Technology Council’s Autonomous Transportation Initiative.

At the end of every calendar year, the complaints from autonomous vehicle companies start piling up. This annual tradition is the result of a requirement by the California Department of Motor Vehicles that AV companies deliver “disengagement reports” by January 1 of each year showing the number of times an AV operator had to disengage the vehicle’s autonomous driving function while testing the vehicle.

However, all disengagement reports have one thing in common: their usefulness is ubiquitously criticized by those who have to submit them. The CEO and founder of a San Francisco-based self-driving car company publicly stated that disengagement reporting is “woefully inadequate … to give a meaningful signal about whether an AV is ready for commercial deployment.” The CEO of a self-driving technology startup called the metrics “misguided.” Waymo stated in a tweet that the metric “does not provide relevant insights” into its self-driving technology or “distinguish its performance from others in the self-driving space.”

1/7 We appreciate what the California DMV was trying to do when creating this requirement, but the disengagement metric does not provide relevant insights into the capabilities of the Waymo Driver or distinguish its performance from others in the self-driving space.

— Waymo (@Waymo) February 26, 2020

Why do AV companies object so strongly to California’s disengagement reports? They argue the metric is misleading based on lack of context due to the AV companies’ varied testing strategies. I would argue that a lack of guidance regarding the language used to describe the disengagements also makes the data misleading. Furthermore, the metric incentivizes testing in less difficult circumstances and favors real-world testing over more insightful virtual testing.

Understanding California reporting metrics

To test an autonomous vehicle on public roads in California, an AV company must obtain an AV Testing Permit. As of June 22, 2020, there were 66 Autonomous Vehicle Testing Permit holders in California and 36 of those companies reported autonomous vehicle testing in California in 2019. Only five of those companies have permits to transport passengers.

To operate on California public roads, each permitted company must report any collision that results in property damage, bodily injury, or death within 10 days of the incident.

There have been 24 autonomous vehicle collision reports in 2020 thus far. Though the majority of those incidents occurred in autonomous mode, the accidents were almost exclusively the result of the autonomous vehicle being rear-ended. In California, rear-end collisions are almost always deemed the fault of the rear-ending driver.

The usefulness of collision data is evident — consumers and regulators are most concerned with the safety of autonomous vehicles for pedestrians and passengers. If an AV company reports even one accident resulting in substantial damage to the vehicle or harm to a pedestrian or passenger while the vehicle operates in autonomous mode, the implications and repercussions for the company (and potentially the entire AV industry) are substantial.

However, the usefulness of disengagement reporting data is much more questionable. The California DMV requires AV operators to report the number and details of disengagements while testing on California public roads by January 1 of each year. The DMV defines this as “how often their vehicles disengaged from autonomous mode during tests (whether because of technical failure or situations requiring the test driver/operator to take manual control of the vehicle to operate safely).”

Operators must also track how often their vehicles disengaged from autonomous mode, and whether that disengagement was the result of software malfunction, human error, or at the option of the vehicle operator.

AV companies have kept a tight lid on measurable metrics, often only sharing limited footage of demonstrations performed under controlled settings and very little data, if any. Some companies have shared the occasional “annual safety report,” which reads more like a promotional deck than a source of data on AV performance. Furthermore, there are almost no reporting requirements for companies doing public testing in any other state. California’s disengagement reports are the exception.

This AV information desert means that disengagement reporting in California has often been treated as our only source of information on AVs. The public is forced to judge AV readiness and relative performance based on this disengagement data, which is incomplete at best and misleading at worst.

Disengagement reporting data offers no context

Most AV companies claim that disengagement reporting data is a poor metric for judging advancement in the AV industry due to a lack of context for the numbers: knowing where those miles were driven and the purpose of those trips is essential to understanding the data in disengagement reports.

Some in the AV industry have complained that miles driven in sparsely populated areas with arid climates and few intersections are miles dissimilar from miles driven in a city like San Francisco, Pittsburgh, or Atlanta. As a result, the number of disengagements reported by companies that test in the former versus the latter geography are incomparable.

It’s also important to understand that disengagement reporting requirements influence AV companies’ decisions on where and how to test. A test that requires substantial disengagements, even while safe, would be discouraged, as it would make the company look less ready for commercial deployment than its competitors. In reality, such testing may result in the most commercially ready vehicle. Indeed, some in the AV industry have accused competitors of manipulating disengagement reporting metrics by easing the difficulty of miles driven over time to look like real progress.
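
To see how the testing mix alone can swing the headline number, consider a toy miles-per-disengagement calculation. Every figure below is invented for illustration and is not drawn from any company’s DMV report:

```python
# Hypothetical illustration of why raw miles-per-disengagement lacks context.
# The environments, mileages and counts below are invented.
from dataclasses import dataclass

@dataclass
class TestCampaign:
    environment: str
    miles: float
    disengagements: int

def miles_per_disengagement(campaigns) -> float:
    total_miles = sum(c.miles for c in campaigns)
    total_dis = sum(c.disengagements for c in campaigns)
    return total_miles / max(total_dis, 1)

urban_only = [TestCampaign("dense urban", 10_000, 50)]
padded = urban_only + [TestCampaign("empty highway", 90_000, 10)]

print(miles_per_disengagement(urban_only))  # 200 miles per disengagement
print(miles_per_disengagement(padded))      # ~1,667 -- same urban performance, better-looking number
```

Blending in easy miles multiplies the reported figure several times over without the urban performance improving at all — exactly the kind of distortion critics describe.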

Furthermore, while data can look particularly good when manipulated by easy drives and clear roads, data can look particularly bad when it’s being used strategically to improve AV software.

Let’s consider an example provided by Jack Stewart, a reporter for NPR’s Marketplace covering transportation:

“Say a company rolls out a brand-new build of their software, and they’re testing that in California because it’s near their headquarters. That software could be extra buggy at the beginning, and you could see a bunch of disengagements, but that same company could be running a commercial service somewhere like Arizona, where they don’t have to collect these reports.

That service could be running super smoothly. You don’t really get a picture of a company’s overall performance just by looking at this one really tight little metric. It was a nice idea of California some years ago to start collecting some information, but it’s not really doing what it was originally intended to do nowadays.”

Disengagement reports lack prescriptive language

The disengagement reports are also misleading due to a lack of guidance and uniformity in the language used to describe the disengagements. For example, while AV companies used a variety of language, “perception discrepancies” was the most common term used to describe the reason for a disengagement — however, it’s not clear that the term “perception discrepancies” has a set meaning.

Several operators used the phrase “perception discrepancy” to describe a failure to detect an object correctly. Valeo North America described a similar error as “false detection of object.” Toyota Research Institute almost exclusively described its disengagements vaguely as “Safety Driver proactive disengagement,” the meaning of which is “any kind of disengagement.” Pony.ai, by contrast, described each instance of disengagement with particularity.

Many other operators reported disengagements that were “planned testing disengagements” or that were described with such insufficient particularity as to be virtually meaningless.

For example, “planned disengagements” could mean the testing of intentionally created malfunctions, or it could simply mean the software is so nascent and unsophisticated that the company expected the disengagement. Similarly, “perception discrepancy” could mean anything from precautionary disengagements to disengagements due to extremely hazardous software malfunctions. “Perception discrepancy,” “planned disengagement” or any number of other vague descriptions of disengagements make comparisons across AV operators virtually impossible.

So, for example, while it appears that a San Francisco-based AV company’s disengagements were exclusively precautionary, the lack of guidance on how to describe disengagements and the many vague descriptions provided by AV companies have cast a shadow over disengagement descriptions, calling them all into question.

Regulations discourage virtual testing

Today, the software of AV companies is the real product. The hardware and physical components — lidar, sensors, etc. — of AVs have become so uniform that they’re practically off-the-shelf. The real component being tested is the software. It’s well known that software bugs are best found by running the software as often as possible; road testing simply can’t reach the sheer numbers necessary to find all the bugs. What can reach those numbers is virtual testing.

However, the regulations discourage virtual testing as the lower reported road miles would seem to imply that a company is not road-ready.

Jack Stewart of NPR’s Marketplace expressed a similar point of view:

“There are things that can be relatively bought off the shelf and, more so these days, there are just a few companies that you can go to and pick up the hardware that you need. It’s the software, and it’s how many miles that software has driven both in simulation and on the real roads without any incident.”

So, where can we find the real data we need to compare AV companies? One company runs over 30,000 instances daily through its end-to-end, three-dimensional simulation environment. Another company runs millions of off-road tests a day through its internal simulation tool, running driving models that include scenarios that it can’t test on roads involving pedestrians, lane merging, and parked cars. Waymo drives 20 million miles a day in its Carcraft simulation platform — the equivalent of over 100 years of real-world driving on public roads.

One CEO estimated that a single virtual mile can be just as insightful as 1,000 miles collected on the open road.

Jonathan Karmel, Waymo’s product lead for simulation and automation, similarly explained that Carcraft provides “the most interesting miles and useful information.”

Where we go from here

Clearly there are issues with disengagement reports — both in relying on the data therein and in the negative incentives they create for AV companies. However, there are voluntary steps that the AV industry can take to combat some of these issues:

  1. Prioritize and invest in virtual testing. Developing and operating a robust system of virtual testing may present a high expense to AV companies, but it also presents the opportunity to dramatically shorten the pathway to commercial deployment through the ability to test more complex, higher risk, and higher number scenarios.
  2. Share data from virtual testing. Voluntary disclosure of virtual testing data will reduce public reliance on disengagement reports. Declarations of commercial readiness will be pointless unless AV companies have provided the public with reliable data on AV readiness for a sustained period.
  3. Seek the greatest value from on-road miles. AV companies should continue using on-road testing in California, but they should use those miles to fill in the gaps from virtual testing. They should seek the greatest value possible out of those slower miles, accept the higher percentage of disengagements they will be required to report, and when reporting on those miles, describe their context in particularity.

With these steps, AV companies can lessen the pain of California’s disengagement reporting data and advance more quickly to an AV-ready future.

UK commits to redesign visa streaming algorithm after challenge to ‘racist’ tool

By Natasha Lomas

The U.K. government is suspending the use of an algorithm used to stream visa applications after concerns were raised the technology bakes in unconscious bias and racism.

The tool had been the target of a legal challenge. The Joint Council for the Welfare of Immigrants (JCWI) and campaigning law firm Foxglove had asked a court to declare the visa application streaming algorithm unlawful and order a halt to its use, pending a judicial review.

The legal action had not run its full course but appears to have forced the Home Office’s hand as it has committed to a redesign of the system.

A Home Office spokesperson confirmed to us that from August 7 the algorithm’s use will be suspended, sending us this statement via email: “We have been reviewing how the visa application streaming tool operates and will be redesigning our processes to make them even more streamlined and secure.”

The government has not accepted the allegations of bias, however, writing in a letter to the law firm: “The fact of the redesign does not mean that the [Secretary of State] accepts the allegations in your claim form [i.e. around unconscious bias and the use of nationality as a criteria in the streaming process].”

The Home Office letter also claims the department had already moved away from use of the streaming tool “in many application types.” But it adds that it will approach the redesign “with an open mind in considering the concerns you have raised.”

The redesign is slated to be completed by the autumn, and the Home Office says an interim process will be put in place in the meantime, excluding the use of nationality as a sorting criterion.

HUGE news. From this Friday, the Home Office's racist visa algorithm is no more! 💃🎉 Thanks to our lawsuit (with @JCWI_UK) against this shadowy, computer-driven system for sifting visa applications, the Home Office have agreed to “discontinue the use of the Streaming Tool”.

— Foxglove (@Foxglovelegal) August 4, 2020

The JCWI has claimed a win against what it describes as a “shadowy, computer-driven” people-sifting system — writing on its website: “Today’s win represents the UK’s first successful court challenge to an algorithmic decision system. We had asked the Court to declare the streaming algorithm unlawful, and to order a halt to its use to assess visa applications, pending a review. The Home Office’s decision effectively concedes the claim.”

The department did not respond to a number of questions we put to it regarding the algorithm and its design processes — including whether or not it sought legal advice ahead of implementing the technology in order to determine whether it complied with the U.K.’s Equality Act.

“We do not accept the allegations Joint Council for the Welfare of Immigrants made in their Judicial Review claim and whilst litigation is still on-going it would not be appropriate for the Department to comment any further,” the Home Office statement added.

The JCWI’s complaint centered on the use, since 2015, of an algorithm with a “traffic-light system” to grade every entry visa application to the U.K.

“The tool, which the Home Office described as a digital ‘streaming tool’, assigns a Red, Amber or Green risk rating to applicants. Once assigned by the algorithm, this rating plays a major role in determining the outcome of the visa application,” it writes, dubbing the technology “racist” and discriminatory by design, given its treatment of certain nationalities.

“The visa algorithm discriminated on the basis of nationality — by design. Applications made by people holding ‘suspect’ nationalities received a higher risk score. Their applications received intensive scrutiny by Home Office officials, were approached with more scepticism, took longer to determine, and were much more likely to be refused.

“We argued this was racial discrimination and breached the Equality Act 2010,” it adds. “The streaming tool was opaque. Aside from admitting the existence of a secret list of suspect nationalities, the Home Office refused to provide meaningful information about the algorithm. It remains unclear what other factors were used to grade applications.”

Since 2012 the Home Office has openly operated an immigration policy known as the “hostile environment” — applying administrative and legislative processes that are intended to make it as hard as possible for people to stay in the U.K.

The policy has led to a number of human rights scandals. (We also covered the impact on the local tech sector by telling the story of one U.K. startup’s visa nightmare last year.) So applying automation atop an already highly problematic policy does look like a formula for being taken to court.

The JCWI’s concern around the streaming tool was exactly that it was being used to automate the racism and discrimination many argue underpin the Home Office’s “hostile environment” policy. In other words, if the policy itself is racist, any algorithm is going to pick up and reflect that.

“The Home Office’s own independent review of the Windrush scandal, found that it was oblivious to the racist assumptions and systems it operates,” said Chai Patel, legal policy director of the JCWI, in a statement. “This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out.”

“We’re delighted the Home Office has seen sense and scrapped the streaming tool. Racist feedback loops meant that what should have been a fair migration process was, in practice, just ‘speedy boarding for white people.’ What we need is democracy, not government by algorithm,” added Cori Crider, founder and director of Foxglove. “Before any further systems get rolled out, let’s ask experts and the public whether automation is appropriate at all, and how historic biases can be spotted and dug out at the roots.”

In its letter to Foxglove, the government has committed to undertaking Equality Impact Assessments and Data Protection Impact Assessments for the interim process it will switch to from August 7 — when it writes that it will use “person-centric attributes (such as evidence of previous travel)” to help sift some visa applications, further committing that “nationality will not be used.”

Some types of applications will be removed from the sifting process altogether during this period.

“The intent is that the redesign will be completed as quickly as possible and at the latest by October 30, 2020,” it adds.

Asked for thoughts on what a legally acceptable visa streaming algorithm might look like, internet law expert Lilian Edwards told TechCrunch: “It’s a tough one… I am not enough of an immigration lawyer to know if the original criteria applied re suspect nationalities would have been illegal by judicial review standard anyway even if not implemented in a sorting algorithm. If yes then clearly a next generation algorithm should aspire only to discriminate on legally acceptable grounds.

“The problem as we all know is that machine learning can reconstruct illegal criteria — though there are now well known techniques for evading that.”
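
Edwards’s point about machine learning reconstructing illegal criteria can be made concrete with a small synthetic example. Everything below — the protected flag, the correlated proxy and the threshold rule standing in for a trained classifier — is invented for illustration and has nothing to do with the Home Office’s actual system:

```python
# Toy demonstration (entirely synthetic data) of how a model trained without
# a protected attribute can still reconstruct it through a correlated proxy.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

protected = rng.integers(0, 2, n)                  # hypothetical protected flag
proxy = np.where(rng.random(n) < 0.9, protected,   # feature 90% correlated with it
                 1 - protected)                    # (think postcode, travel history, etc.)
# Historical decisions that were themselves biased against the flagged group:
label = np.where(protected == 1, rng.random(n) < 0.7, rng.random(n) < 0.2).astype(int)

# "Model": refuse for a proxy value if historical refusals for that value
# exceeded 50% -- a stand-in for any classifier fit to the biased labels.
refuse_if = {v: label[proxy == v].mean() > 0.5 for v in (0, 1)}
prediction = np.array([refuse_if[v] for v in proxy], dtype=int)

print(f"refusal rate, flagged group: {prediction[protected == 1].mean():.2f}")
print(f"refusal rate, everyone else: {prediction[protected == 0].mean():.2f}")
# The disparity persists even though 'protected' was never a model input.
```

Even though the protected attribute is never an input, the decision rule reproduces the historical bias through the proxy — which is why simply deleting the nationality column is not, on its own, a fix.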

The ethical principles need to apply to the immigration policy, not just the visa algorithm. The problem is the racist immigration system dressed up in false computerised objectivity

— Javier Ruiz (@javierruiz) August 4, 2020

“You could say the algorithmic system did us a favour by confronting illegal criteria being used which could have remained buried at individual immigration officer informal level. And indeed one argument for such systems used to be ‘consistency and non-arbitrary’ nature. It’s a tough one,” she added.

Earlier this year the Dutch government was ordered to halt use of an algorithmic risk scoring system for predicting the likelihood social security claimants would commit benefits or tax fraud — after a local court found it breached human rights law.

In another interesting case, a group of U.K. Uber drivers are challenging the legality of the gig platform’s algorithmic management of them under Europe’s data protection framework — which bakes in data access rights, including provisions attached to legally significant automated decisions.

Announcing Sight Tech Global, an event on the future of AI and accessibility for people who are blind or visually impaired

By Ned Desmond

Few challenges have excited technologists more than building tools to help people who are blind or visually impaired. It was Silicon Valley legend Ray Kurzweil, for example, who in 1976 launched the first commercially available text-to-speech reading device. He unveiled the $50,000 Kurzweil Reading Machine, a boxy device that covered a tabletop, at a press conference hosted by the National Federation of the Blind.

The early work of Kurzweil and many others has rippled across the commerce and technology world in stunning ways. Today’s equivalent of Kurzweil’s machine is Microsoft’s Seeing AI app, which uses AI-based image recognition to “see” and “read” in ways that Kurzweil could only have dreamed of. And it’s free to anyone with a mobile phone. 

Remarkable leaps forward like that are the foundation for Sight Tech Global, a new, virtual event slated for December 2-3, that will bring together many of the world’s top technology and accessibility experts to discuss how rapid advances in AI and related technologies will shape assistive technology and accessibility in the years ahead.

The technologies behind Microsoft’s Seeing AI are on the same evolutionary tree as the ones that enable cars to be autonomous and robots to interact safely with humans. Much of our most advanced technology today stems from that early, challenging mission that top Silicon Valley engineers embraced to teach machines to “see” on behalf of humans.

From the standpoint of people who suffer vision loss, the technology available today is astonishing, far beyond what anyone anticipated even 10 years ago. Purpose-built products like Seeing AI and computer screen readers like JAWS are remarkable tools. At the same time, consumer products, including mobile phones, mapping apps and smart voice assistants, are game changers for everyone, those with sight loss not the least. And yet, that tech bonanza has not come close to breaking down the barriers in the lives of people who still mostly navigate with canes or dogs or sighted assistance, depend on haphazard compliance with accessibility standards to use websites and can feel as isolated as ever in a room full of people. 

In other words, we live in a world where a computer can drive a car at 70 MPH without human assistance but there is not yet any comparable device to help a blind person walk down a sidewalk at 3 MPH. A social media site can identify billions of people in an instant but a blind person can’t readily identify the person standing in front of them. Today’s powerful technologies, many of them grounded in AI, have yet to be milled into next-generation tools that are truly useful, happily embraced and widely affordable. The work is underway at big tech companies like Apple and Microsoft, at startups, and in university labs, but no one would dispute that the work is as slow as it is difficult. People who are blind or visually impaired live in a world where, as the science fiction author William Gibson once remarked, “The future is already here — it’s just not very evenly distributed.”

That state of affairs is the inspiration for Sight Tech Global. The event will convene the top technologists, human-computer interaction specialists, product designers, researchers, entrepreneurs and advocates to discuss the future of assistive technology as well as accessibility in general. Many of those experts and technologists are blind or visually impaired, and the event programming will stand firmly on the ground that no discussion or new product development is meaningful without the direct involvement of that community. Silicon Valley has great technologies, but does not, on its own, have the answers.

The two days of programming on the virtual main stage will be free and available on a global basis both live and on-demand. There will also be a $25 Pro Pass for those who want to participate in specialized breakout sessions, Q&A with speakers and virtual networking. Registration for the show opens soon; in the meantime, anyone interested may request email updates here.

It’s important to note that there are many excellent events every year that focus on accessibility, and we respect their many abiding contributions and steady commitment. Sight Tech Global aims to complement the existing event line-up by focusing on hard questions about advanced technologies and the products and experiences they will drive in the years ahead — assuming they are developed hand-in-hand with their intended audience and with affordability, training and other social factors in mind. 

In many respects, Sight Tech Global is taking a page from TechCrunch’s approach to its AI and robotics events over the past four years, which were in partnership with MIT and UC Berkeley. The concept was to have TechCrunch editors ask top experts in AI and related fields tough questions across the full spectrum of issues around these powerful technologies, from the promise of automation and machine autonomy to the downsides of job elimination and bias in AI-based systems. TechCrunch’s editors will be a part of this show, along with other expert moderators.  

As the founder of Sight Tech Global, I am drawing on my extensive event experience at TechCrunch over eight years to produce this event. Both TechCrunch and its parent company, Verizon Media, are lending a hand in important ways. My own connection to the community is through my wife, Joan Desmond, who is legally blind. 

The proceeds from sponsorships and ticket sales will go to the nonprofit Vista Center for the Blind and Visually Impaired, which has been serving the Silicon Valley area for 75 years. The Vista Center owns the Sight Tech Global event, and its executive director, Karae Lisle, is the event’s chair. We have assembled a highly experienced team of volunteers to program and produce a rich, world-class virtual event on December 2-3.

Sponsors are welcome, and we have opportunities available ranging from branding support to content integration. Please email sponsor@sighttechglobal.com for more information.

Our programming work is under way and we will announce speakers and sessions over the coming weeks. The programming committee includes Jim Fruchterman (Benetech / TechMatters), Larry Goldberg (Verizon Media), Matt King (Facebook) and Professor Roberto Manduchi (UC Santa Cruz). We welcome ideas and can be reached via programming@sighttechglobal.com.

For general inquiries, including collaborations on promoting the event, please contact info@sighttechglobal.com.

The essential revenue software stack

By Walter Thompson
Tim Porter Contributor
Tim Porter is a managing director at Madrona Venture Group and invests in the areas of intelligent applications and SaaS, cloud native software, ML and data analytics and security.
Elisa La Cava Contributor
Elisa La Cava is a senior associate at Madrona Venture Group, focused on intelligent applications, cloud-native software and the future of work.

From working with our 90+ portfolio companies and their customers, as well as from frequent conversations with enterprise leaders, we have observed a set of software services emerge and evolve to become best practice for revenue teams. This set of services — call it the “revenue stack” — is used by sales, marketing and growth teams to identify and manage their prospects and revenue.

The evolution of this revenue stack started long before anyone had ever heard the word coronavirus, but now the stakes are even higher as the pandemic has accelerated this evolution into a race. Revenue teams across the country have been forced to change their tactics and tools in the blink of an eye in order to adapt to this new normal — one in which they needed to learn how to sell in not only an all-digital world but also an all-remote one where teams are dispersed more than ever before. The modern “remote-virtual-digital”-enabled revenue team has a new urgency for modern technology that equips them to be just as — and perhaps even more — productive than their pre-coronavirus baseline. We have seen a core combination of solutions emerge as best-in-class to help these virtual teams be most successful. Winners are being made by the directors of revenue operations, VPs of revenue operations, and chief revenue officers (CROs) who are fast adopters of what we like to call the essential revenue software stack.

In this stack, we see four necessary core capabilities, all critically interconnected. The four core capabilities are:

  1. Revenue enablement.
  2. Sales engagement.
  3. Conversational intelligence.
  4. Revenue operations.

These capabilities run on top of three foundational technologies that most growth-oriented companies already use — agreement management, CRM and communications. We will dive into these core capabilities and the emerging leaders in each, and provide general guidance on how to get started.

Revenue enablement

Cisco acquires Modcam to make Meraki smart camera portfolio even smarter

By Ron Miller

As the Internet of Things proliferates, security cameras are getting smarter. Today, these devices have machine learning capabilities that help a camera automatically identify what it’s looking at — for instance, an animal or a human intruder. Today, Cisco announced that it has acquired Swedish startup Modcam and is making it part of its Meraki smart camera portfolio, with the goal of incorporating Modcam’s computer vision technology into its products.

The companies did not reveal the purchase price, but Cisco tells us that the acquisition has closed.

In a blog post announcing the deal, Cisco Meraki’s Chris Stori says Modcam is going to up Meraki’s machine learning game, while giving it some key engineering talent, as well.

“In acquiring Modcam, Cisco is investing in a team of highly talented engineers who bring a wealth of expertise in machine learning, computer vision and cloud-managed cameras. Modcam has developed a solution that enables cameras to become even smarter,” he wrote.

What he means is that today, while Meraki has smart cameras that include motion detection and machine learning capabilities, this is limited to single camera operation. What Modcam brings is the added ability to gather information and apply machine learning across multiple cameras, greatly enhancing the camera’s capabilities.

“With Modcam’s technology, this micro-level information can be stitched together, enabling multiple cameras to provide a macro-level view of the real world,” Stori wrote. In practice, as an example, that could provide a more complete view of space availability for facilities management teams, an especially important scenario as businesses try to find safer ways to open during the pandemic. The other scenario Modcam was selling was giving a more complete picture of what was happening on the factory floor.
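
Cisco hasn’t detailed how the stitching works, but the basic idea of aggregating per-camera observations into a macro-level view can be sketched simply. The zone names, counts and capacity figures below are hypothetical, and the code is not Modcam’s or Meraki’s actual API:

```python
# Hypothetical sketch: combining per-camera occupancy counts into a
# facility-wide availability view (illustrative only).
from collections import defaultdict

# Each camera reports the zone it watches and how many people it currently sees.
camera_reports = [
    {"camera": "cam-01", "zone": "lobby",     "people": 4},
    {"camera": "cam-02", "zone": "floor-2",   "people": 11},
    {"camera": "cam-03", "zone": "floor-2",   "people": 9},   # second camera, same zone
    {"camera": "cam-04", "zone": "cafeteria", "people": 3},
]

zone_capacity = {"lobby": 20, "floor-2": 40, "cafeteria": 30}

def macro_view(reports, capacities):
    """Stitch micro-level (per-camera) counts into a macro-level (per-zone) view."""
    counts = defaultdict(int)
    for r in reports:
        counts[r["zone"]] += r["people"]   # naive merge; a real system de-duplicates overlap
    return {zone: {"occupied": counts[zone], "free": cap - counts[zone]}
            for zone, cap in capacities.items()}

for zone, status in macro_view(camera_reports, zone_capacity).items():
    print(f"{zone}: {status['occupied']} occupied, {status['free']} free")
```

A production system would also have to de-duplicate people seen by overlapping cameras, which is presumably where the cross-camera machine learning Cisco is buying comes in.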

All of Modcam’s employees, which Cisco described only as “a small team,” have joined Cisco, and the Modcam technology will be folded into the Meraki product line and will no longer be offered as a stand-alone product, a Cisco spokesperson told TechCrunch.

Modcam was founded in 2013 and has raised $7.6 million, according to Crunchbase data. Cisco acquired Meraki back in 2012 for $1.2 billion.

What’s This? A Bipartisan Plan for AI and National Security

By Tom Simonite
Republican Will Hurd and Democrat Robin Kelly want more Pentagon spending, a Cold War-style “hotline,” and a curb on chip exports to China.

AI Is All the Rage. So Why Aren’t More Businesses Using It?

By Will Knight
A big study by the US Census Bureau finds that only about 9 percent of firms employ tools like machine learning or voice recognition—for now.

Magnetis raises $11 million for its automated wealth management and brokerage service for Brazil

By Jonathan Shieber

Magnetis, an automated wealth management solution for Brazilian investors, has raised $11 million in a new round of funding as it transforms itself into a full service brokerage for the nation’s investor class.

Investors in the round included Redpoint eventures and Vostok Emerging Finance, the company said.

“We’re quite happy with this vote of confidence from our investors. It only reinforces the credibility of our service and business model, which uses technology for goal-based investment management, without creating a conflict of interest,” said Luciano Tavares, founder and CEO of Magnetis. “The new funding will be used to launch our own brokerage and to develop new functionalities that improve customer experience and provide a complete and curated journey through goal-based investments.”

First launched five years ago, the company has set up 350,000 investment plans and has more than 430 million reals under management, according to a statement from the company.

The company said it planned to hit more than 1 billion reals by the end of 2021.

“Today, the Brazilian market is more sophisticated, with a sharp drop in a dependence on fixed income and a rise in more financial assets, including funds, shares, commodities and fixed-income securities. Defining a personal investment portfolio is a science, not a game or lottery,” said Anderson Thees, founder and managing partner of Redpoint eventures, in a statement. “Magnetis’ great differentiator is its ability to set up a personalized investment plan, with first-rate assets and its use of AI to manage all the variables in a sophisticated way. Magnetis is well-positioned for accelerated growth and our team at Redpoint is excited about guiding them during this new phase of our partnership as the fintech sector continues to boom in Brazil and beyond.”

Fintech in Latin America is a booming investment category, with companies like Nubank skyrocketing to multi-billion dollar valuations, and accounting for 22 percent of all Latin American fintech startups.

As the company closes on the new financing, it’s also launching a brokerage, which will enable the company to do more for its customers, according to Tavares. It may also allow the company to keep more money for itself since it doesn’t have to work with outside parties to execute trades.

“Our model for digital assets management and wealth creation is much more complete and sophisticated. The vision is to be a financial guide for our clients; making their investment experience simpler,” Tavares said in a statement. “A total integration with the broker makes the client’s journey simpler, more consolidated and complete.”

Tavares and Magnetis are also making a commitment to transparency around fees.

“We do not receive commissions on the products we recommend to customers,” said Tavares, in a statement. “The asset selection process is done in a transparent and automated way, and customers pay us an annual consulting fee based only on the amount they invest, and not according to the recommended investments. The end result is the selection of high quality products that are more aligned with the clients’ objectives.”

Talking virtual events and Disrupt with Hopin founder Johnny Boufarhat

By Joey Hinson

Register now to attend the event on 8/6 at 10 AM PT. Registered attendees will have the opportunity to ask questions via Slido.

Next in our series of talks with virtual event masterminds, we’ll be meeting with Johnny Boufarhat, founder of the virtual events platform Hopin, about the virtual venue we’re developing together for Disrupt.

Hopin was founded in 2019 with the aim of helping organizers recreate the in-person event experience virtually. So far, it has hosted events with partners including the UN and Dell. In June of this year, Hopin announced that it had raised a $40 million Series A led by IVP.

In our discussion with Johnny, we will cover topics including: how COVID-19 has accelerated the demand for virtual events,  his perspective on virtual venues, the Disrupt virtual venue, the attendee experience, the partner experience, how sponsors can leverage the event, and what the future of events might look like. If you’re interested in attending, becoming a sponsor, or learning more about Disrupt, this is for you.

Register now!

Next up:

7/30 at 10 AM PT: Grip CEO, Co-founder Tim Groot on Virtual Event networking

In addition to our talk with Johnny next week, we’re hosting a session today at 10 AM PT with Tim Groot, the founder of the AI-driven event networking solution Grip, which we will use to power networking at Disrupt (register here).

8/13 at 10 AM PT: Slido CEO, co-founder Peter Komorník

Join us for a conversation with the founder of Slido, the company we will use to power engagement at Disrupt. More on this soon!

Microsoft’s new Flight Simulator is a beautiful work in progress

By Frederic Lardinois

For the last two weeks, I’ve been flying around the world in a preview of Microsoft’s new Flight Simulator. Without a doubt, it’s the most beautiful flight simulator yet, and it’ll make you want to fly low and slow over your favorite cities because — if you pick the right one — every street and house will be there in more detail than you’ve ever seen in a game. Weather effects, day and night cycles, plane models — it all looks amazing. You can’t start it up and not fawn over the graphics.

But the new Flight Simulator is also still very much a work in progress, even just a few weeks before the scheduled launch date on August 18. It’s officially still in beta, so there’s still time to fix at least some of the issues I list below. Because Microsoft and Asobo Studios, which was responsible for the development of the simulator, are using Microsoft’s AI tech in Azure to automatically generate much of the scenery based on Microsoft’s Bing Maps data, you’ll find a lot of weirdness in the world. There are taxiway lights in the middle of runways, giant hangars and crew buses at small private fields, cars randomly driving across airports, giant trees growing everywhere (while palms often look like giant sticks), bridges that are either under water or big blocks of black over a river — and there are a lot of sunken boats, too.

When the system works well, it’s absolutely amazing. Cities like Barcelona, Berlin, San Francisco, Seattle, New York and others that are rendered using Microsoft’s photogrammetry method look great — including and maybe especially at night.

Image Credits: Microsoft

The rendering engine on my i7-9700K with an Nvidia 2070 Super graphics card never let the frame rate drop under 30 frames per second (which is perfectly fine for a flight simulator) and usually hovered well over 40, all with the graphics settings pushed up to the maximum and at 2K resolution.

When things don’t work, though, the effect is stark because it’s so obvious. Some cities, like Las Vegas, look like they suffered some kind of catastrophe, as if the city was abandoned and nature took over (which in the case of the Vegas Strip doesn’t sound like such a bad thing, to be honest).

Image Credits: TechCrunch

Thankfully, all of this is something that Microsoft and Asobo can fix. They’ll just need to adjust their algorithms, and because a lot of the data is streamed, the updates should be virtually automatic. The fact that they haven’t done so yet is a bit of a surprise.

Image Credits: TechCrunch

Chances are you’ll want to fly over your house the day you get Flight Simulator. If you live in the right city (and the right part of that city), you’ll likely be lucky and actually see your house with its individual texture. But for some cities, including London, for example, the game only shows standard textures, and while Microsoft does a good job at matching the outlines of buildings in cities where it doesn’t do photogrammetry, it’s odd that London or Amsterdam aren’t on that list (though London apparently features a couple of wind turbines in the city center now), while Münster, Germany is.

Once you get to altitude, all of those problems obviously go away (or at least you won’t see them). But given the graphics, you’ll want to spend a lot of time at 2,000 feet or below.

Image Credits: TechCrunch

What really struck me in playing the game in its current state is how those graphical inconsistencies set the standard for the rest of the experience. The team says its focus is 100% on making the simulator as realistic as possible, but then the virtual air traffic control often doesn’t use standard phraseology, for example, or fails to hand you off to the right departure control when you leave a major airport. The airplane models look great and feel pretty close to real (at least for the ones I’ve flown myself), but some currently show the wrong airspeed. Some planes use modern glass cockpits with the Garmin 1000 and G3X, but those still feel severely limited.

But let me be clear here. Despite all of this, even in its beta state, Flight Simulator is a technical marvel and it will only get better over time.

Let’s walk through the user experience a bit. The install on PC (the Xbox version will come at some point in the future) downloads a good 90GB so that you can play offline as well. The installer also asks whether you’re OK with streaming data, and that can quickly add up: after reinstalling the game and doing a few flights for screenshots, it had already downloaded about 10GB, something to be aware of if you’re on a metered connection.

Once past the long install, you’ll be greeted by a menu screen that lets you start a new flight, take on one of the landing challenges or other activities the team has set up (they are really proud of their Courchevel scenery), or go through the game’s flight training program.

That training section walks you through eight activities that teach you the basics of flying a Cessna 152. Most take fewer than 10 minutes and end with a short debrief, but I’m not sure that’s enough to keep a novice from getting frustrated quickly (while more advanced players will just skip this section altogether).

I mostly spent my time flying the small general aviation planes in the sim, but if you prefer a Boeing 747 or Airbus A320neo, you get that option, too, as well as some turboprops and business jets. I’ll spend more time with those before the official launch. All of the planes are beautifully detailed inside and out, and except for a few bugs, everything works as expected.

To actually start playing, you’ll head for the world map and choose where you want to start your flight. What’s nice here is that you can pick any spot on your map, not just airports. That makes it easy to start flying over a city, for example. As you zoom into the map, you can see airports and landmarks (where the landmarks are either real sights like Germany’s Neuschwanstein Castle or cities that have photogrammetry data). If a town doesn’t have photogrammetry data, it will not appear on the map.

As of now, the flight planning features are pretty basic. For visual flights, you can go direct or VOR to VOR, and that’s it. For IFR flights, you choose low- or high-altitude airways. You can’t really adjust any of these routes, just accept what the simulator gives you. That’s not really how flight planning works (at the very least you would want to take the local weather into account), so it would be nice if you could customize your route a bit more. Microsoft partnered with NavBlue for airspace data, though the built-in maps don’t do much with it and don’t even show you the vertical boundaries of the airspace you’re in.

It’s always hard to judge how closely the plane models match the real thing. As best I can tell, the single-engine Cessnas I’m familiar with mostly handle the way I would expect them to in reality. Rudder controls feel a bit overly sensitive by default, but that’s relatively easy to adjust. I only played with a HOTAS-style joystick and rudder setup; I wouldn’t recommend playing with a mouse and keyboard, but your mileage may vary.

Live traffic works well, but none of the general aviation traffic around my local airports seems to show up, even though Microsoft partner FlightAware shows it.

As for the real/AI traffic in general, the sim does a pretty good job managing it. In the beta, you won’t really see the liveries of any real airlines yet, at least for the most part; I spotted the occasional United plane in the latest builds, and given some of Microsoft’s own videos, more are coming soon. Beyond the built-in models you can fly yourself, Flight Simulator is still missing a library of other airplane models for AI traffic, though again, I would assume that’s in the works, too.

We’re three weeks out from launch. I would expect the team to fix many of these issues, and we’ll revisit all of them for our final review. My frustration with the current state of the game is that it’s so often so close to perfect that the moments when it falls short are especially jarring, because they yank you out of the experience.

Don’t get me wrong, though: flying in FS2020 is already a great experience. Even without photogrammetry, cities and villages look great once you get above 3,000 feet or so. The weather and cloud simulation, rendered in real time, beats any add-on for today’s flight simulators. Airports still need work, but cars driving around and flaggers walking around planes as they push back help make the world feel more alive. Wind affects the waves on lakes and oceans (and the windsocks at airports). This is truly a next-generation flight simulator.

Microsoft and Asobo have to walk a fine line between making Flight Simulator the sim that hardcore fans want and an accessible game that brings in new players. I’ve played every version of Flight Simulator since the ’90s, so getting started took exactly zero time. My sense is that new players simply looking for a good time may feel a bit lost at first, despite Microsoft adding landing challenges and other more gamified elements to the sim. In a press briefing, the Asobo team regularly stressed that it aimed for realism over anything else, and I’m perfectly OK with that. We’ll have to see if that translates into a fun experience for casual players, too.

Leverage AI to optimize customer service outcomes

By Walter Thompson
Nitesh Dudhia Contributor
Nitesh Dudhia is co-founder and CBO of Aikon Labs Pvt Ltd. Nitesh has worked on market opportunity identification, business planning and strategy formulation and execution for digital transformation.

As offices worldwide shift to remote work, our interactions with customers and colleagues have evolved in tandem. Professionals who once relied on face-to-face communication and firm handshakes must now close deals in a world where both are rare. Coworkers we once sat beside every day are now only available over Slack and Zoom, changing the nature of internal communication as well.

While this new reality presents a challenge, the advancement of key technologies allows us to not just adapt, but thrive. We are now on the precipice of the biggest revolution in workplace communication since the invention of the telephone.

It’s not enough to simply accept the new status quo, particularly as the overall economic climate remains tenuous. Artificial intelligence has much to offer in improving the way we speak to one another in the social distance era, and it has already seen wide adoption in certain areas. Much of this algorithmic work has gone on behind the scenes of our most-used apps, such as Google Meet’s noise-canceling technology, which uses AI to mute certain extraneous sounds on video calls. Other advances work in real time right before our eyes, like Zoom’s myriad virtual backgrounds or the automatic transcription and translation technology built into most video conferencing apps.

This kind of technology has helped employees realize that, despite the unprecedented shift to remote work, digital conversations do not just strive to recreate the in-person experience — rather, they can improve upon the way we communicate entirely.

It’s estimated that 65% of the workforce will be working remotely within the next five years. With a more hands-on approach to AI — that is, using the technology to actually augment everyday communications — workers can gain insight into concepts, workflows and ideas that would otherwise go unnoticed.

Your customer service experience

Roughly 55% of the data companies collect falls into the category of “dark data”: information that goes completely unused, kept on an internal server until it’s eventually wiped. Any company with a customer service department is invariably growing its stock of dark data with every chat log, email exchange and recorded call.

When a customer phones in with a query or complaint, they’re told early on that their call “may be recorded for quality assurance purposes.” Given how cheap data storage has become, there’s no “maybe” about it. The question is what to do with this data.

Investment in AI startups slips to three-year low

By Alex Wilhelm

The fortunes of startups that leverage artificial intelligence have soared dramatically in recent years.

These AI-powered startups have seen quarterly investment totals rise from a few hundred rounds and a few billion dollars each quarter to 1,245 rounds and $17.3 billion in the second and third quarters of 2019, according to data from CB Insights. The rise in dollars chasing AI startups has been huge, demonstrating strong venture capital interest in the cohort.

But in recent quarters, the trend has slowed as VC deals for AI-powered startups fell off.


The Exchange explores startups, markets and money. You can read it every morning on Extra Crunch, or get The Exchange newsletter every Saturday.


A new report from the business-data company covering second-quarter venture capital results for global AI startups shows historically strong but declining investment in the group. During a pandemic and widespread recession, this is not a complete surprise; other areas of VC investment have also fallen in recent quarters. This is The Exchange’s second look at quarterly data for the startup category, spurred in part by our interest in the economics of the startups that make up the group.

The scale of decline is notable, however, as is the national breakdown of VC investment into AI. (The United States is doing better than you probably guessed, if you have only listened to politicians lately.)

Let’s unpack the latest results, determine how investing patterns have changed by stage and examine how different countries compare when it comes to deal and dollar volume for AI-powered startups.

Global declines, US dominance

In the second quarter of 2020, global investment into AI startups fell to 458 deals worth $7.2 billion. According to the CB Insights dataset, that deal volume is the lowest in 12 quarters, going back to Q2 2017, when 387 investments into AI startups were worth $4.7 billion.

Where is voice tech going?

By Richard Dal Porto
Mark Persaud Contributor
Mark Persaud is digital product manager and practice lead at Moonshot by Pactera, a digital innovation company that leads global clients through the next era of digital products with a heavy emphasis on artificial intelligence, data and continuous software delivery.

2020 has been anything but normal. For businesses and brands. For innovation. For people.

The trajectories of business growth strategies, travel plans and lives have been drastically altered by the COVID-19 pandemic, a global economic downturn with supply chain and market issues, and the fight for equality driven by the Black Lives Matter movement, all on top of what already complicated lives and businesses.

One of the biggest stories in emerging technology is the growth of different types of voice assistants:

  • Niche assistants such as Aider that provide back-office support.
  • Branded in-house assistants such as those offered by BBC and Snapchat.
  • White-label solutions such as Houndify that provide lots of capabilities and configurable tool sets.

With so many assistants proliferating globally, voice will become a commodity, like a website or an app. And that’s not a bad thing, at least in the name of progress. It will soon (read: over the next couple of years) become table stakes for a business to offer voice as an interaction channel, delivering the kind of lovable experience users expect. Consider the feeling you get when you realize a business doesn’t have a website: it makes you question its validity and reputation for quality. Voice isn’t quite there yet, but it’s moving in that direction.

Voice assistant adoption and usage are still on the rise

Adoption of any new technology is key, and distribution is often what holds a technology back. That has not been the case with voice. Apple, Google and Baidu have reported hundreds of millions of devices using voice, and Amazon has 200 million users. Amazon has a slightly harder job because it isn’t in the smartphone market, which gives Apple and Google an edge in voice assistant distribution.

But are people actually using the devices? Google said recently there are 500 million monthly active users of Google Assistant. Not far behind is Apple, with 375 million active users. Large numbers of people are using voice assistants, not just owning them. That’s a sign of a technology gaining momentum: it is at a price point, and within digital and personal ecosystems, that make it ripe for adoption. The pandemic has only accelerated that usage, with Edison reporting increased use between March and April, a peak time for sheltering in place across the U.S.

Instrumental raises $20M to scale its AI-powered manufacturing tech

By Alex Wilhelm

This morning Instrumental, a startup that uses vision-powered AI to detect manufacturing anomalies, announced that it has closed a $20 million Series B led by Canaan Partners. The company had previously raised $10.3 million across two rounds, including a $7.5 million Series A in mid-2017.

According to a release, other venture groups also participated in the Series B, including Series A investors Root Ventures, Eclipse Ventures and First Round Capital, which also led the company’s seed round. Stanford StartX took part in the new investment as well.

Anna-Katrina Shedletsky, via the company.

Instrumental’s technology is a hybrid of hardware and software, with a focus on the latter.

TechCrunch caught up with the company’s founder and CEO Anna-Katrina Shedletsky to better understand its tech, and its business model. And we asked participating Canaan partner Hrach Simonian about the business metrics and milestones that led him to lead the deal.

Tech

Instrumental’s tech is a combination of cameras and code. The startup installs its hardware on manufacturing lines and employs learning software to ferret out anomalies using data from small sample sets. The company’s goal is to help other businesses that build physical products boost yields and save time.

“Our customers identify design, quality, and process issues weeks faster than their competitors and get to mass production with a higher quality product that can be built for significantly less money,” she said in an email to TechCrunch.

According to Shedletsky, who previously worked at Apple in design and manufacturing capacities, Instrumental uses commodity hardware. The startup’s software is what matters, allowing its camera units to be trained with as few as 30 sample units and simple labeling. Notably, given the limited internet bandwidth at many of the manufacturing facilities in China where the company often works, its hardware handles data processing on-site to prevent upload/download lag.
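Shedletsky doesn’t detail the models involved, but as a rough illustration of how defect detection can work from such a small sample set, here is a minimal sketch (in Python, using only NumPy) that fits a low-rank model of known-good unit images and flags outliers by reconstruction error. The function names and the PCA-style approach are illustrative assumptions, not Instrumental’s actual pipeline.

```python
# Illustrative sketch only -- not Instrumental's algorithm. One generic way to
# flag anomalies from a small set of "golden" unit images is reconstruction
# error against a low-rank model of the good units, computed entirely
# on-device so image data never has to leave the factory floor.
import numpy as np

def fit_good_units(images, rank=8):
    """images: (n, h, w) array of ~30 known-good units."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    # Principal directions of variation among the good units.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:rank]

def anomaly_score(image, mean, components):
    """Higher score = further from the good-unit subspace."""
    x = image.reshape(-1).astype(float) - mean
    reconstruction = components.T @ (components @ x)
    return float(np.linalg.norm(x - reconstruction))

# Usage: flag a unit whose score exceeds a threshold derived from the
# good units' own scores (e.g., mean score + 3 standard deviations).
```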

It’s not easy to get tech installed onto manufacturing lines, the company told TechCrunch, as it’s easy to get fired for stopping a production run. This can make it hard for companies like Instrumental to get their foot in the door.

Instrumental works around the problem by getting its tech installed on manufacturing lines when they are in pre-production development. If the startup can prove value there, its tech can be rolled out when the lines move from development to production. And, if Instrumental’s tech works with initial lines, it can be scaled across other manufacturing lines that are spun up, something called “replicating lines.”

Instrumental hardware unit, via the company.

The startup has two products: one for manufacturing lines in development and one for lines in production. Unlike enterprise software contracts, which are often sold on a yearly cadence, Instrumental’s manufacturing deals can ramp up based on volume through a process its CEO calls a “continuous sale.”

The model allows the company to charge more, and more quickly, than an enterprise software contract that has to wait for its renewal period before renegotiation, boosting how quickly Instrumental can grow its business.

Money

Flush with $20 million, what does Instrumental have planned? Shedletsky told TechCrunch that her first goal is to expand its business in the electronics space, a part of the manufacturing world where the startup has seen initial customer traction.

To support that effort, Instrumental is building out its go-to-market functionality and continuing to work on its core technology, she said.

After living off its Series A for around twice as long as many venture-backed companies tend to, TechCrunch was curious how quickly Instrumental intends to deploy its larger Series B. According to its CEO, the startup plans on being principled, but not slow. She stressed that she’s working to build a long-term company, and that she wants to create something that is both sustainable, and large.

Lacking hard growth metrics, TechCrunch was curious what attracted Canaan to Instrumental at this juncture. According to Hrach Simonian, a general partner at the firm, “Instrumental’s tools are quickly becoming a business imperative,” something that can be seen in its “renewal rates with big customers,” which he described as “extraordinarily high.”

The sheer scale of global electronics gives Instrumental a nearly infinite TAM to sell into. Let’s see how quickly the startup can grow.

Facebook’s ‘Red Team’ Hacks Its Own AI Programs

By Tom Simonite
Attackers increasingly try to confuse and bypass machine-learning systems. So the companies that deploy them are getting creative.

Facetune maker Lightricks brings its popular selfie retouching features to video

By Sarah Perez

Lightricks, the startup behind a suite of photo and video editing apps (including, most notably, selfie editor Facetune 2), is taking its retouching capabilities to video. Today, the company is launching Facetune Video, a selfie video editing app that allows users to retouch and edit their selfie and portrait videos using a set of A.I.-powered tools.

While there are other selfie video editors on the market, most today focus on filters and presets, virtually applied makeup, or AR effects and stickers that decorate your video in some way. Facetune Video, meanwhile, is focused on creating a photorealistic video by offering a set of features similar to those found in Lightricks’ flagship app, Facetune.

That means users are able to retouch their face with tools for skin smoothing, teeth whitening and face reshaping, plus eye color, makeup, conceal, glow and matte features. Users can also make general video edits, adjusting brightness, contrast, color and more, as other video editing apps allow. And these edits can be applied in real time, so you can see how they look as the video plays instead of after the fact.

Users can also apply an effect to a single frame, and Facetune Video’s post-processing technology and neural networks will apply it to the same area of every frame throughout the entire video, making it easy to quickly retouch a problem area without having to go frame by frame.

“In Facetune Video, the 3D face model plays a significant role; users edit only one video frame, but it’s on us, behind-the-scenes, to automatically project the location of their edits to 2D face mesh coordinates derived from the 3D face model, and then apply them consistently on all other frames in the video,” explains Lightricks co-founder and CEO Zeev Farbman. “A Lightricks app needs to be not only powerful, but fun to use, so it’s critical to us that this all happens quickly and seamlessly,” he says.
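Lightricks hasn’t published its implementation, but the general idea Farbman describes can be sketched in a few lines: anchor the edit to face-mesh coordinates in the frame the user edited, then re-project that anchor into every other frame. The snippet below is a simplified illustration under that assumption; it presumes per-frame 2D landmarks from any off-the-shelf face-mesh detector and uses a single nearest-vertex anchor rather than a full 3D face model, so the names and structure are hypothetical.

```python
# Minimal sketch (not Lightricks' code): propagate an edit anchored on one
# frame to all other frames using per-frame 2D face-mesh landmarks.
# Assumes landmarks[f] is an (N, 2) array of mesh points for frame f,
# produced by any face-landmark detector.
import numpy as np

def to_mesh_coords(point_xy, mesh):
    """Express a pixel location relative to its nearest mesh vertex."""
    mesh = np.asarray(mesh, dtype=float)
    idx = int(np.argmin(np.linalg.norm(mesh - point_xy, axis=1)))
    return idx, np.asarray(point_xy, dtype=float) - mesh[idx]

def from_mesh_coords(anchor, mesh):
    """Map a (vertex index, offset) anchor back to pixels in another frame."""
    idx, offset = anchor
    return np.asarray(mesh, dtype=float)[idx] + offset

def propagate_edit(edit_xy, edited_frame, landmarks):
    """Return the edit location for every frame of the clip."""
    anchor = to_mesh_coords(edit_xy, landmarks[edited_frame])
    return [from_mesh_coords(anchor, mesh) for mesh in landmarks]
```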

Users can also save their favorite editing functions as “presets,” allowing them to quickly apply their preferred settings to any video automatically.

In a future version of the app, the company plans to introduce a “heal” function which, like Facetune, will allow users to easily remove blemishes.

The technology behind these selfie video edits involves Lightricks’ deep neural networks, which use facial feature detection and geometry analysis to power the app’s retouching capabilities. These processes run in real time without having to transmit data to the cloud first, and there’s no lag or delay while files render.

In addition, Facetune Video uses the facial feature detection along with 3D face modeling A.I. to ensure that every part of the user’s face is captured for editing and retouching, the company says.

“What we’re also doing is taking advantage of lightweight neural networks. Before the user has even begun to retouch their selfie video, A.I.-powered algorithms are already working so that the user experience is quick and interactive,” says Farbman.

The app also does automated segmentation of more complex parts of the face like the interior of the eye, hair, or the lips, which helps it achieve a more accurate end result.

“It’s finding a balance between accuracy in the strength of the face modeling we use, and speed,” Farbman adds.

One challenge here was jitter, where an applied effect shakes as the video plays and makes the end result look gimmicky. The company didn’t want its videos to have this problem, so it worked to eliminate shake-like artifacts and other face-tracking issues so videos look more polished and professional.

The app builds on the company’s existing success and brand recognition with Facetune. The new app’s retouch algorithms mimic the original Facetune 2 experience, for example, so users familiar with Facetune 2 will quickly get the hang of the retouch tools.

The launch of the new app pushes Lightricks further into video, which has become a more popular way of expressing yourself across social media thanks to the growing use of apps like TikTok and features like Instagram Stories.

Until now, however, Lightricks’ flagship video product was Videoleap, which focused on more traditional video editing, not selfie videos where face retouching could be used.

Facetune has become so widely used that its name has become a verb, as in “she facetunes her photos.” But it has also been criticized at times for its unrealistic results. (Of course, that’s more on the app’s users sliding the smoothing bar all the way to the end.)

Across its suite of apps, which includes the original Facetune app (Facetune Classic), Facetune 2, Seen (for Stories), Photofox, Videoleap, Enlight Quickshot, Pixaloop, Boosted and others, including a newly launched artistic editor, Quickart, the company has generated over 350 million downloads.

Its apps also now reach nearly 200 million users worldwide. And through its subscription model, Lightricks is now seeing what Farbman describes as revenues that are “increasing exponentially year-over-year,” but that are being continually reinvested into new products.

Like its other apps, Facetune Video will monetize by way of subscriptions. The app is free to use but offers a VIP subscription for more features, priced at $8 per month, $36 per year, or a one-time purchase of $70.

Facetune 2 subscribers will get a discount on annual subscriptions, as well. The company will also sell the app in its Social Media Kit bundle on the App Store, which includes Facetune Video, Facetune 2, Seen and soon, an undisclosed fourth app. However, the company isn’t yet offering a single subscription that provides access to all bundled apps.
