Garry Kasparov is a political activist who’s written books and articles on artificial intelligence, cybersecurity and online privacy, but he’s best known as the former World Chess Champion who took on the IBM computer known as Deep Blue in the mid-1990s.
I spoke to Kasparov before a speaking engagement at the Collision Conference last month where he was participating in his role as Avast Security Ambassador. Our discussion covered a lot of ground, from his role as security ambassador to the role of AI.
TechCrunch: How did you become a security ambassador for Avast?
Garry Kasparov: It started almost by accident. I was invited by one of my friends, who knew the previous Avast CEO (Vince Steckler) to be the guest speaker at the opening of their new headquarters in Prague. I met the team and very quickly we recognized that we could work together very effectively since Avast wanted an ambassador.
I thought that it would be a great combination because it’s about cybersecurity, and it’s also about customers, about individual rights, which is related to human rights, and it also had a little bit of a political element of course. But most importantly, it’s a combination of privacy and security and I felt that with my record of working for human rights, and also writing about individuals and privacy and also having some experience with computers, that it would be a good match.
Now it’s my fourth year and it seems that many of the things we have been discussing at conferences when I have spoken about the role of AI in our lives, and many of the discussions that we thought were theoretical, have become more practical.
What were those discussions like?
One of the favorite topics that was always raised at these conferences is whether AI will be a helping hand or threat. And my view has been that it’s neither because I have always said that AI was neither a magic wand nor a Terminator. It’s a tool. And it’s up to us to find the best way of using it and applying its enormous power to our good.
SUSE, which describes itself as ‘the world’s largest independent open source company,’ today announced that it has acquired Rancher Labs, a company that has long focused on making it easier for enterprises to manage their container clusters.
The two companies did not disclose the price of the acquisition, but Rancher was well funded, with a total of $95 million in investments. It’s also worth mentioning that it’s only been a few months since the company announced its $40 million Series D round led by Telstra Ventures. Other investors include the likes of Mayfield and Nexus Venture Partners, GRC SinoGreen and F&G Ventures.
Like similar companies, Rancher originally focused on Docker infrastructure before pivoting to put its emphasis on Kubernetes once that became the de facto standard for container orchestration. Unsurprisingly, this is also why SUSE is now acquiring the company. After a number of ups and downs — and various ownership changes — SUSE has now found its footing again, and today’s acquisition shows that it’s aiming to capitalize on its current strengths.
Just last month, the company reported that the annual contract value of its bookings increased 30% year over year and that it saw a 63% increase in customer deals worth more than $1 million in the last quarter, with its cloud revenue growing 70%. While it is still in the Linux distribution business the company was founded on, today’s SUSE is a very different company, offering various enterprise platforms (including its Cloud Foundry-based Cloud Application Platform), solutions and services. And while it already offered a Kubernetes-based container platform, Rancher’s expertise will only help it build out this business.
“This is an incredible moment for our industry, as two open source leaders are joining forces. The merger of a leader in Enterprise Linux, Edge Computing and AI with a leader in Enterprise Kubernetes Management will disrupt the market to help customers accelerate their digital transformation journeys,” said SUSE CEO Melissa Di Donato in today’s announcement. “Only the combination of SUSE and Rancher will have the depth of a globally supported and 100% true open source portfolio, including cloud native technologies, to help our customers seamlessly innovate across their business from the edge to the core to the cloud.”
The company describes today’s acquisition as the first step in its ‘inorganic growth strategy’ and Di Donato notes that this acquisition will allow the company to “play an even more strategic role with cloud service providers, independent hardware vendors, systems integrators and value-added resellers who are eager to provide greater customer experiences.”
Most sales teams earn a commission after a sale closes, but nothing prior to that. Yet there are a variety of signals along the way that indicate the sales process is progressing, and SetSail, a startup from some former Google engineers, is using machine learning to figure out what those signals are, and how to compensate salespeople as they move along the path to a sale, not just after they close the deal.
Today, the startup announced a $7 million investment led by Wing Venture Capital with help from Operator Collective and Team8. Under the terms of the deal, Leyla Seka from Operator will be joining the board. Today’s investment brings the total raised to $11 million, according to the company.
CEO and co-founder Haggai Levi says his company is based on the idea that commission alone is not a good way to measure sales success, and that it is in fact a lagging indicator. “We came up with a different approach. We use machine learning to create progress-based incentives,” Levi explained.
To do that, they rely on machine learning to discover the signals coming from the customer that indicate a deal is moving forward. Using a points system, companies can then compensate reps for hitting these milestones, even before the sale closes.
The seeds for the idea behind SetSail were planted years ago when the three founders were working at Google tinkering with ways to motivate sales reps beyond pure commission. From a behavioral perspective, Levi and his co-founders found that reps were taking fewer risks with a pure commission approach and they wanted to find a way to change that. The incremental compensation system achieves that.
“If I’m closing the deal, I’m getting my commission. If I’m not closing the deal, I’m getting nothing. That means from a behavioral point of view, I would take the shortest path to win a deal, and I would take the minimum risk possible. So if there’s a competitive situation I will try to avoid that,” he said.
They look at things like appointments, emails and call transcripts. The signals will vary by customer. One may find that an appointment with a CIO is a good sign a deal is on the right trajectory. But to avoid reps gaming the system by filling the CRM with the kinds of positive signals the company is looking for, SetSail relies only on objective data, rather than any kind of self-reported information from the reps themselves.
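As a rough illustration of the points-based compensation idea described above, here is a minimal sketch. The signal names, point values and payout rate are all hypothetical assumptions for illustration, not SetSail’s actual model:

```python
# Hypothetical sketch of a progress-based incentive calculation.
# Signal names, weights and the payout formula are illustrative only.

# Objective deal signals (no rep self-reporting), with per-customer weights.
SIGNAL_POINTS = {
    "cio_meeting_booked": 50,    # appointment with a senior buyer
    "proposal_emailed": 20,      # outbound proposal detected in email
    "demo_call_completed": 30,   # verified via call transcript
}

def milestone_payout(observed_signals, rate_per_point=2.0):
    """Convert verified deal-progress signals into an incremental payout."""
    points = sum(SIGNAL_POINTS.get(s, 0) for s in observed_signals)
    return points * rate_per_point

# A rep who booked a CIO meeting and completed a demo earns an
# interim payout before the deal closes:
payout = milestone_payout(["cio_meeting_booked", "demo_call_completed"])
print(payout)  # 160.0
```

Because the weights come from a lookup table derived from objective CRM events rather than rep-entered fields, the payout only moves when verifiable milestones occur.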
The team eventually built a system like this inside Google, and in 2018, left to build a solution for the rest of the world that does something similar.
As the company grows, Levi says he is building a diverse team, not only because it’s the right thing to do, but because it simply makes good business sense. “The reality is that we’re building a product for a diverse audience, and if we don’t have a diverse team we would never be able to build the right product,” he explained.
The company’s unique approach to sales compensation is resonating with customers like Dropbox, Lyft and Pendo, who are looking for new ways to motivate sales teams, especially during a pandemic when there may be a longer sales cycle. This kind of system provides a way to compensate sales teams more incrementally and reward positive approaches that have proven to result in sales.
Hungarian autonomous driving startup AImotive is leveraging its technology to address a different industry and growing need: autonomous satellite operation. AImotive is teaming up with C3S, a supplier of satellite and space-based technologies, to develop a hardware platform for performing AI operations onboard satellites. AImotive’s aiWare neural network accelerator will be optimized by C3S for use on satellites, which have a set of operating conditions that in many ways resembles those onboard cars on the road – but with more stringent requirements in terms of power management, and environmental operating hazards.
The goal of the team-up is to have AImotive’s technology working on satellites that are actually operational on orbit by the second half of next year. The projected applications of onboard neural network acceleration extend to a number of different functions according to the companies, including telecommunications, Earth imaging and observation, autonomously docking satellites with other spacecraft, deep space mining and more.
While it’s true that most satellites already operate essentially in an automated fashion (meaning they’re not generally manually flown at every given moment), true neural network-based onboard AI would provide them with much more autonomy when it comes to performing tasks, like imaging a specific area or looking for specific markers in ground or space-based targets. AImotive and C3S also believe that local processing of data has the potential to be a significant game-changer for the satellite business.
Currently, most of the processing of data collected by satellites is done after the raw information is transmitted to ground stations. That can result in a lot of lag between data collection and delivery of processed data to customers, particularly when the satellite operator or another go-between is processing on behalf of the client rather than just delivering raw info (and doing this analysis is also a more lucrative proposition for the data provider, of course).
AImotive’s tech could mean that processing happens locally, on the satellite where the information is captured. There’s been a big shift towards this kind of ‘computing at the edge’ in the ground-based IoT world, and it only makes sense to replicate that in space, for many of the same reasons – including that it reduces time to delivery, meaning more responsive service for paying customers.
Nvidia today announced that its new Ampere-based data center GPUs, the A100 Tensor Core GPUs, are now available in alpha on Google Cloud. As the name implies, these GPUs were designed for AI workloads, as well as data analytics and high-performance computing solutions.
The A100 promises a significant performance improvement over previous generations. Nvidia says the A100 can boost training and inference performance by over 20x compared to its predecessors (though most benchmarks show improvements closer to 6x or 7x) and tops out at about 19.5 TFLOPs in single-precision performance and 156 TFLOPs for Tensor Float 32 workloads.
“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, Director of Product Management at Google Cloud, in today’s announcement. “With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA’s T4 GPUs. We are excited to see what our customers will do with these new capabilities.”
Google Cloud users can get access to instances with up to 16 of these A100 GPUs, for a total of 640GB of GPU memory and 1.3TB of system memory.
Yesterday evening Palantir, the quasi-secretive data mining and analysis firm, publicly announced that it has privately filed to go public.
The disclosure came in the wake of Palantir raising new capital, taking on hundreds of millions of dollars before its planned public offering. According to Crunchbase data, Palantir has raised billions while private, making its debut a marquee affair in the worlds of technology, startups and venture capital.
As TechCrunch reported yesterday, Palantir has a controversial product history, including helping locate immigrants for the Immigration and Customs Enforcement agency, connecting databases for intelligence agencies and recently winning no-bid contracts to gather data about the COVID-19 pandemic for the White House Pandemic Task Force.
The Exchange is a daily look at startups and the private markets for Extra Crunch subscribers; use code EXCHANGE to get full access and take 25% off your subscription.
The company’s filing comes after a long incubation period; it’s been 17 years since Palantir’s founding in 2003. Since then, its reported financial performance and fundraising history have become sufficiently convoluted that I couldn’t tell you this morning how big the company really is or how much it raised before its most recent investment.
To prep us for its eventual IPO filing, let’s go back in time and collect data points from Palantir’s reported history. This way, when we do get the company’s S-1 filing, we’ll better understand what we’re looking at.
Even with companies that aren’t privacy-conscious, it can be hard to craft a comprehensive history of their business activities from when they were private. With Palantir, it’s even trickier.
Still, leaning on more than a decade of TechCrunch reporting, Crunchbase data, other publications and Craft.co, what follows is a reasonable look at what has been reported about Palantir through time. Of course, we’ll know more when we get the S-1.
A few years back, startups focusing on artificial intelligence had a whiff of bullshit about them; venture capitalists became inured to young tech companies claiming that their new AI-powered product was going to change the world as hype exceeded product reality.
But in the time since, AI-powered startups have matured into real companies, with investors stepping up to fund their growth. Niches from medical imaging to speech recognition, along with machine learning, deep learning, neural nets and everything else one might scoop into the AI bucket, seem to have grown neatly in recent quarters.
But AI is not the only startup niche appearing to enjoy tailwinds lately. No-code and low-code startups have also enjoyed increasing media recognition, and, as TechCrunch has covered, notable venture capital rounds.
Sitting in the middle of the two trends, a startup called MonkeyLearn wants to bring low-code AI to companies of all sizes. And the firm just raised $2.2 million. Let’s take a look.
Starting with the round, MonkeyLearn has raised $2.2 million in a round led by Uncork Capital and Bling Capital. Speaking with Raúl Garreta, a co-founder at the company and also its CEO, TechCrunch learned that MonkeyLearn started off as a more developer-focused service that provided machine learning tooling via an API. But after demand materialized from people who couldn’t code but wanted to use the company’s tech for text analysis, the company wound up heading in a slightly different direction.
Garreta gave TechCrunch a demo of the company’s service, which allows users to upload data — think rows of text in an Excel file, for example — and quickly train MonkeyLearn’s software to parse out what they are looking for. After the model is trained over the course of a few minutes, it can then be set to work on a full data set.
According to Garreta, text analysis has a lot of demand in corporate environments, from categories like support ticket sorting to sentiment analysis.
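To make the workflow described above concrete, here is a toy, pure-Python stand-in for the train-then-classify loop: label a handful of rows, train briefly, then run the model over the rest of the data. MonkeyLearn’s actual models and API are not public here; the naive Bayes classifier and sample rows below are illustrative assumptions:

```python
# Toy illustration of the upload-label-train-apply workflow: a minimal
# multinomial naive Bayes sentiment classifier in pure Python.
import math
from collections import Counter, defaultdict

def train(rows):
    """rows: list of (text, label). Returns per-label word counts and priors."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in rows:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label maximizing the (log) naive Bayes score."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = math.log(count / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# A few labeled rows, standing in for an uploaded spreadsheet column.
training_rows = [
    ("great support fast response", "positive"),
    ("love the new dashboard", "positive"),
    ("slow and buggy experience", "negative"),
    ("terrible support very slow", "negative"),
]
wc, lc = train(training_rows)
print(classify("support was great", wc, lc))  # positive
```

The appeal of the no-code version is that a marketer labels examples in a UI instead of writing code like this; the underlying train-then-apply loop is the same.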
But MonkeyLearn’s product that TechCrunch saw is not the company’s final vision. Today the service focuses on data analysis. In time, Garreta wants it to do more with data visualization, providing graphing and other similar outputs to give more of a dashboard-feel to its product.
At the core of MonkeyLearn’s early market traction that helped it land its seed round is the ever-increasing need for non-developers to collect, parse, act on and share data inside their workplace. If you’ve ever worked near a startup’s marketing or customer success team, you understand this phenomenon. MonkeyLearn wants to give non-developer teams the tools they need to understand data sets without forcing them to go find the engineering team and argue for a spot on the roadmap.
“Our vision is to make AI approachable by providing a toolkit for teams to actually use AI in their daily operations,” Garreta said in a release. MonkeyLearn is theoretically well-situated in the market. Companies are increasingly data-driven at the same time as the market is strapped for employees who can make data sing.
The startup has a free tier, and a few paid tiers, along with add-ons and a one-off option. You can call that the “all of the above” pricing model, which is fine, given the youth of the company; startups are allowed to experiment.
After slower-than-anticipated early fundraising, MonkeyLearn told TechCrunch that it could have raised double what it wound up accepting in its seed round.
What plans does the company have for the new capital? A more aggressive go-to-market motion and a more formal sales team, it said. As MonkeyLearn sells to mid-market and enterprise firms, Garreta explained, a more formal sales team is needed, though he also emphasized that founders must start the selling process at a startup.
As with most seed companies that raise capital, there’s a lot to like with MonkeyLearn. Let’s see how well it executes and how fast it can get to a Series A.
Computer vision summit CVPR has just (virtually) taken place, and like other CV-focused conferences, there are quite a few interesting papers. More than I could possibly write up individually, in fact, so I’ve collected the most promising ones from major companies here.
Facebook, Google, Amazon and Microsoft all shared papers at the conference — and others too, I’m sure — but I’m sticking to the big hitters for this column. (If you’re interested in the papers deemed most meritorious by attendees and judges, the nominees and awards are listed here.)
Redmond has the most interesting papers this year, in my opinion, because they cover several nonobvious real-life needs.
One is documenting that shoebox we or perhaps our parents filled with old 3x5s and other film photos. Of course there are services that help with this already, but if photos are creased, torn, or otherwise damaged, you generally just get a high-resolution scan of that damage. Microsoft has created a system to automatically repair such photos, and the results look mighty good.
The problem is as much identifying the types of degradation a photo suffers from as it is fixing them. The solution is simple, write the authors: “We propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs.” Amazing no one tried it before!
Since widespread protests over racial inequality began, IBM announced it would cancel its facial recognition programs to advance racial equity in law enforcement. Amazon suspended police use of its Rekognition software for one year to “put in place stronger regulations to govern the ethical use of facial recognition technology.”
But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature out of the computer science lab and accept the embrace of the entire community.
We can develop amazing AI that works in the world in largely unbiased ways. But to accomplish this, AI can’t be just a subfield of computer science (CS) and computer engineering (CE), like it is right now. We must create an academic discipline of AI that takes the complexity of human behavior into account. We need to move from computer science-owned AI to computer science-enabled AI. The problems with AI don’t occur in the lab; they occur when scientists move the tech into the real world of people. Training data in the CS lab often lacks the context and complexity of the world you and I inhabit. This flaw perpetuates biases.
AI-powered algorithms have been found to display bias against people of color and against women. In 2014, for example, Amazon found that an AI algorithm it developed to automate headhunting taught itself to bias against female candidates. MIT researchers reported in January 2019 that facial recognition software is less accurate in identifying humans with darker pigmentation. Most recently, in a study late last year by the National Institute of Standards and Technology (NIST), researchers found evidence of racial bias in nearly 200 facial recognition algorithms.
In spite of the countless examples of AI errors, the zeal continues. This is why the IBM and Amazon announcements generated so much positive news coverage. Global use of artificial intelligence grew by 270% from 2015 to 2019, with the market expected to generate revenue of $118.6 billion by 2025. According to Gallup, nearly 90% of Americans are already using AI products in their everyday lives – often without even realizing it.
Beyond a 12-month hiatus, we must acknowledge that while building AI is a technology challenge, using AI requires disciplines well beyond software development, such as social science, law and politics. But despite our increasingly ubiquitous use of AI, AI as a field of study is still lumped into the fields of CS and CE. At North Carolina State University, for example, algorithms and AI are taught in the CS program. MIT houses the study of AI under both CS and CE. AI must make it into humanities programs, race and gender studies curricula, and business schools. Let’s develop an AI track in political science departments. In my own program at Georgetown University, we teach AI and machine learning concepts to Security Studies students. This needs to become common practice.
Without a broader approach to the professionalization of AI, we will almost certainly perpetuate biases and discriminatory practices in existence today. We just may discriminate at a lower cost — not a noble goal for technology. We require the intentional establishment of a field of AI whose purpose is to understand the development of neural networks and the social contexts into which the technology will be deployed.
In computer engineering, a student studies programming and computer fundamentals. In computer science, they study computational and programmatic theory, including the basis of algorithmic learning. These are solid foundations for the study of AI – but they should only be considered components. These foundations are necessary for understanding the field of AI but not sufficient on their own.
For the population to gain comfort with broad deployment of AI so that tech companies like Amazon and IBM, and countless others, can deploy these innovations, the entire discipline needs to move beyond the CS lab. Those who work in disciplines like psychology, sociology, anthropology and neuroscience are needed, as is an understanding of human behavior patterns and of biases in data-generation processes. I could not have created the software I developed to identify human trafficking, money laundering and other illicit behaviors without my background in behavioral science.
Responsibly managing machine learning processes is no longer just a desirable component of progress but a necessary one. We have to recognize the pitfalls of human bias and the errors of replicating these biases in the machines of tomorrow, and the social sciences and humanities provide the keys. We can only accomplish this if a new field of AI, encompassing all of these disciplines, is created.
The COVID-19 pandemic pushed the music industry to experiment seriously with virtual concerts.
Historically, musicians and their managers have been careful about challenging the traditional concert model that became their main source of income as revenue from album sales disappeared.
Is the current surge of virtual concerts here to stay or will it be abandoned as soon as large in-person gatherings are permitted again and the novelty of concerts in Fortnite wears off?
For the middle tier of recording artists, virtual concerts are shaping up to be a worthwhile part of their business portfolio, generating healthy income and engaging a geographically dispersed base of core fans. For the top tier of artists — those who perform in stadiums and arenas — the opportunity cost means virtual concerts won’t make financial sense to do very often once in-person concerts return. That said, a couple of such performances each year can unlock a lot of the untapped revenue potential from fans who can’t attend their normal concerts.
There’s no opportunity cost to trying a virtual concert during a pandemic. Artists aren’t performing, touring, shooting videos or even doing in-person sessions with songwriters. With everyone stuck at home, fans will forgive a disappointing attempt at performing online and artists have time to experiment. Live Nation, the dominant concert promotion and venue management company, has even converted its site to curating a schedule of virtual performances.
Virtual concerts have been growing in three formats: video streaming platforms, within the virtual worlds of video games and virtual reality.
Concerts via video
Google’s SmartReply, the four-year-old, AI-based technology that helps suggest responses to messages in Gmail, Android’s Messages, Play Developer Console and elsewhere, is now being made available to YouTube Creators. Google announced today the launch of an updated version of SmartReply built for YouTube, which will allow creators to more easily and quickly interact with their fans in the comments.
The feature is being rolled out to YouTube Studio, the online dashboard creators use to manage their YouTube presence, check their stats, grow their channels and engage fans. From YouTube Studio’s comments section, creators can filter, view and respond to comments from across their channels.
For creators with a large YouTube following, responding to comments can be a time-consuming process. That’s where SmartReply aims to help.
Image Credits: Google
Instead of manually typing out all their responses, creators will be able to instead click one of the suggested replies to respond to comments their viewers post. For example, if a fan says something about wanting to see what’s coming next, the SmartReply feature may suggest a response like “Thank you!” or “More to come!”
Unlike the SmartReply feature built for email, where the technology has to process words and short phrases, the version of SmartReply designed for YouTube has to also be able to handle a more diverse set of content — like emoji, ASCII art or language switching, the company notes. YouTube commenters also often post using abbreviated words, slang, and inconsistent use of punctuation. This made it more challenging to implement the system on YouTube.
Image Credits: Google
Google detailed how it overcame these and other technical challenges in a post on its Google AI Blog, published today.
In addition, Google said it wanted a system where SmartReply only made suggestions when it’s highly likely the creator would want to reply to the comment and when the feature is able to suggest a sensible response. This required training the system to identify which comments should trigger the feature.
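The two-stage design described above — first decide whether a comment should trigger suggestions at all, then rank candidate replies — can be sketched crudely as follows. The heuristics, keyword sets and canned replies are all illustrative assumptions; Google’s actual SmartReply models are learned, far more sophisticated, and not reproduced here:

```python
# Crude two-stage sketch: a trigger gate followed by reply ranking.
# Keyword sets and canned replies are hypothetical, for illustration only.
CANDIDATE_REPLIES = {
    "Thank you!": {"love", "great", "awesome", "amazing"},
    "More to come!": {"next", "more", "soon", "when"},
    "Glad you enjoyed it!": {"enjoyed", "liked", "fun"},
}

def should_trigger(comment):
    """Stage 1: only suggest replies for comments a creator would plausibly
    answer (here, a crude length-and-content heuristic stands in for a
    trained trigger model)."""
    words = comment.lower().split()
    return 2 <= len(words) <= 30 and any(
        w in keywords for keywords in CANDIDATE_REPLIES.values() for w in words)

def suggest(comment):
    """Stage 2: rank canned replies by keyword overlap with the comment."""
    if not should_trigger(comment):
        return []
    words = set(comment.lower().split())
    scored = sorted(CANDIDATE_REPLIES.items(),
                    key=lambda kv: len(words & kv[1]), reverse=True)
    return [reply for reply, keywords in scored if words & keywords]

print(suggest("cant wait to see what comes next"))  # ['More to come!']
```

The gating stage matters: a system that suggested replies to every comment would feel spammy, so suppressing suggestions on low-confidence inputs is as important as ranking them well.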
At launch, SmartReply is being made available for both English and Spanish comments — and it’s the first cross-lingual and character byte-based version of the technology, Google says.
Because of the approach SmartReply is now using, the company believes it will be able to make the feature available to many more languages in the future.
Amazon has announced that it will acquire Zoox, a self-driving startup founded in 2014, which has raised nearly $1 billion in funding and which aims to develop autonomous driving technology, including vehicles, for the purposes of providing a full-stack solution for ride-hailing.
Zoox will continue to exist as a standalone business, according to Amazon’s announcement, with current CEO Aicha Evans continuing in her role, as will CTO and co-founder Jesse Levinson. Their overall company mission will also remain the same, the release notes. The Financial Times reports that the deal is worth $1.2 billion.
The Wall Street Journal had reported at the end of May that Amazon was looking at Zoox as a potential acquisition target, and that the deal had reached the advanced stages.
Zoox has chosen one of the most expensive possible paths in the autonomous driving industry, seeking to build a fit-for-purpose self-driving passenger vehicle from the ground up, along with the software and AI end to provide its autonomous driving capabilities. Zoox has done some notable cost-cutting in the past year, and it brought in CEO Evans in early 2019 from Intel, likely with an eye toward leveraging her experience to help the company move toward commercialization.
With a deep-pocketed parent like Amazon, Zoox should gain the runway it needs to keep up with its primary rival — Waymo, which originated as Google’s self-driving car project, and which counts Google owner Alphabet as its corporate owner.
Amazon has been working on its own autonomous vehicle technology projects, including its last-mile delivery robots, which are six-wheeled sidewalk-treading bots designed to carry small packages to customer homes. The company has also invested in autonomous driving startup Aurora, and it has tested self-driving trucks powered by self-driving freight startup Embark.
The Zoox acquisition is specifically aimed at helping the startup “bring their vision of autonomous ride-hailing to reality,” according to Amazon, so this doesn’t look to be immediately focused on Amazon’s logistics operations for package delivery. But Zoox’s ground-up technology, which includes developing zero-emission vehicles built specifically for autonomous use, could easily translate to that side of Amazon’s operations.
Meanwhile, if Zoox really does remain on course for passenger ride-hailing, that could open up a whole new market for Amazon — one which would put it head-to-head with Uber and Lyft once the autonomous driving technology matures.
One of China’s most valuable artificial intelligence chipmakers, Cambricon, is one step closer to its initial public offering, and its prospectus reveals a rare snapshot of where Chinese companies stand in relation to their international counterparts in this critical field.
Cambricon got the nod in early June to list on the Star Market, China’s new Nasdaq-like stock exchange conceived to attract high-potential tech startups. This week, the chipmaker received the final green light from the China Securities Regulatory Commission, the stock market watchdog, for its first-time sale.
The company is aiming to raise 2.8 billion yuan ($400 million) from its IPO and spend the proceeds on cloud-based algorithm training and inference, edge computing, and boosting its cash flow. It was last valued at 2.5 billion yuan in 2018 and expects its market cap to exceed 1.5 billion yuan when it floats.
Cambricon began life in a lab within the Chinese Academy of Sciences (CAS), the national institute for science and technology backed by government money. In 2016, the project spun out as a separate entity, making money by licensing intellectual property and selling chips for deep-learning acceleration. Before long, it had made its name as a major supplier of Huawei’s first AI chip-powered smartphones and other flagship models later on.
But the partners’ ties have weakened ever since Huawei began doubling down on its own semiconductor arm — HiSilicon — to hedge against U.S. sanctions. The direct consequence is a substantial revenue drop for Cambricon’s licensable IP, which slumped to an estimated 16-18 million yuan in 2019, down from 117 million yuan in 2018.
“Huawei Silicon has chosen to develop its own AI chips for end devices and has not extended the partnership with our company, and our AI chip business with other clients remains relatively small,” the company replied to regulators during the vetting process for its listing. Finding new clients at Huawei’s enormous scale is also challenging, as “most of the other well-known Chinese smartphone makers are using established handset chips and solutions from Qualcomm and MediaTek,” Cambricon noted.
The chipmaker also flagged that it remains “well behind” international competitors such as Nvidia, Intel and AMD in areas including “overall scale, capital reserve, resources for research and development and sales channels.” It’s also well aware of rising domestic competition from its old ally, Huawei, which has opted for chips from its home-grown HiSilicon unit.
Cambricon’s co-founders Chen Tianshi and Chen Yunji both hail from academia. The company still maintains close relationships with CAS and also works closely with Olivier Temam, a researcher at Inria, the French national institute for computer science and applied mathematics.
Cambricon is still operating in the red, with accumulated losses of 1.6 billion yuan ($230 million) over the last three years, due in part to heavy spending on research and development, according to its prospectus. It generated revenues of 444 million yuan ($63 million) in 2019, up from 7.84 million yuan in 2017.
The chipmaker is backed by a lineup of storied investors across the board. Besides the 41.7% stake Chen Tianshi commands, other shareholders include Zhongke Suanyuan, an asset management firm set up by CAS; Aixi Partners, an entity owned by Cambricon employees and controlled by Chen Tianshi; SDIC Venture Capital, a state-owned investment firm approved by China’s state council; e-commerce titan Alibaba; and voice recognition provider iFlytek.
The murder of George Floyd was shocking, but we know that his death was not unique. Too many Black lives have been stolen from their families and communities as a result of historical racism. Deep and numerous threads of racial injustice are woven into the fabric of our country, and they have come to a head following the recent murders of George Floyd, Ahmaud Arbery and Breonna Taylor.
Just as important as the process underway to admit to and understand the origin of racial discrimination will be our collective determination to forge a more equitable and inclusive path forward. As we commit to address this intolerable and untenable reality, our discussions must include the role of artificial intelligence (AI). While racism has permeated our history, AI now plays a role in creating, exacerbating and hiding these disparities behind the facade of a seemingly neutral, scientific machine. In reality, AI is a mirror that reflects and magnifies the bias in our society.
I had the privilege of working with Deputy Attorney General Sally Yates to introduce implicit bias training to federal law enforcement at the Department of Justice, which I found to be as educational for those working on the curriculum as it was for those participating. Implicit bias is a fact of humanity that both facilitates (e.g., knowing it’s safe to cross the street) and impedes (e.g., false initial impressions based on race or gender) our activities. This phenomenon is now playing out at scale with AI.
As we have learned, law enforcement activities such as predictive policing have too often targeted communities of color, resulting in a disproportionate number of arrests of persons of color. These arrests are then logged into the system and become data points, which are aggregated into larger data sets and, in recent years, have been used to create AI systems. This process creates a feedback loop: predictive policing algorithms direct law enforcement to patrol certain neighborhoods, so crime is observed and recorded primarily there, which skews the data and thus future recommendations. Likewise, arrests made during the current protests will result in data points in future data sets that will be used to build AI systems.
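The feedback loop described above can be made concrete with a toy simulation. This sketch is purely illustrative (the neighborhoods, numbers and allocation rule are all invented for the example, not drawn from any real system): two neighborhoods have identical true crime rates, but one starts with more recorded arrests, so it receives more patrols, generates more recorded arrests, and the disparity compounds.

```python
# Toy simulation of the predictive-policing feedback loop: patrols are
# allocated in proportion to past recorded arrests, so a neighborhood
# that starts with more recorded arrests keeps accumulating them, even
# though the true underlying crime rates are identical.
import random

random.seed(0)

TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}   # identical real rates
arrests = {"A": 5, "B": 1}                 # skewed historical data

for year in range(20):
    total = sum(arrests.values())
    for hood in arrests:
        # patrols assigned in proportion to recorded arrests so far
        patrols = 100 * arrests[hood] / total
        # "observed" crime scales with patrol presence, not actual crime
        observed = sum(random.random() < TRUE_CRIME_RATE[hood]
                       for _ in range(round(patrols)))
        arrests[hood] += observed

print(arrests)  # neighborhood "A" ends up with far more recorded arrests
```

Despite equal true rates, the initial disparity is amplified year after year, which is exactly why arrest data can encode past enforcement patterns rather than actual crime.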
This feedback loop of bias within AI plays out throughout the criminal justice system and our society at large, such as determining how long to sentence a defendant, whether to approve an application for a home loan or whether to schedule an interview with a job candidate. In short, many AI programs are built on and propagate bias in decisions that will determine an individual’s and their family’s financial security and opportunities, or lack thereof — often without the user even knowing their role in perpetuating bias.
This dangerous and unjust loop did not create all of the racial disparities under protest, but it reinforced and normalized them under the protected cover of a black box.
This is all happening against the backdrop of a historic pandemic, which is disproportionately impacting persons of color. Not only have communities of color been most at risk to contract COVID-19, they have been most likely to lose jobs and economic security at a time when unemployment rates have skyrocketed. Biased AI is further compounding the discrimination in this realm as well.
This issue has solutions: diversity of ideas and experience in the creation of AI. However, despite years of promises to increase diversity — particularly in gender and race, from those in tech who seem able to remedy other intractable issues (from putting computers in our pockets and connecting with machines outside the earth to directing our movements over GPS) — recently released reports show that at Google and Microsoft, the share of technical employees who are Black or Latinx rose by less than a percentage point since 2014. The share of Black technical workers at Apple has held at 6%, though Apple at least reports these figures, unlike Amazon, which does not report tech workforce demographics.
In the meantime, ethics should be part of computer science education and of employment in the tech space. AI teams should be trained on anti-discrimination laws and implicit bias, with emphasis on the negative impacts on protected classes and the real human cost of getting this wrong. Companies need to do better at incorporating diverse perspectives into the creation of their AI, and they need the government to be a partner, establishing clear expectations and guardrails.
There have been bills to ensure oversight and accountability for biased data, and the FTC recently issued thoughtful guidance holding companies responsible for understanding the data underlying their AI and its implications, and for providing consumers with transparent and explainable outcomes. In light of the crucial role that federal support is playing and our accelerated use of AI, one of the most important solutions is to require assurance of legal compliance with existing laws from recipients of federal relief funding who employ AI technologies for critical uses. Such an effort was started recently by several members of Congress to safeguard protected persons and classes — and should be enacted.
We all must do our part to end the cycles of bias and discrimination. We owe it to those whose lives have been taken or altered due to racism to look within ourselves, our communities and our organizations to ensure change. As we increasingly rely on AI, we must be vigilant to ensure these programs are helping to solve problems of racial injustice, rather than perpetuate and magnify them.
Cape Privacy emerged from stealth today after spending two years building a platform for data scientists to privately share encrypted data. The startup also announced $2.95 million in new funding, which, together with the $2.11 million it raised when the business launched in 2018, brings its total to $5.06 million.
Boldstart Ventures and Version One led the round with participation from Haystack, Radical Ventures and Faktory Ventures.
Company CEO Ché Wijesinghe says that data science teams often have to deal with data sets that contain sensitive data and share data internally or externally for collaboration purposes. It creates a legal and regulatory data privacy conundrum that Cape Privacy is trying to solve.
“Cape Privacy is a collaboration platform designed to help focus on data privacy for data scientists. So the biggest challenge that people have today from a business perspective is managing privacy policies for machine learning and data science,” Wijesinghe told TechCrunch.
The product breaks down that problem into a couple of key areas. First of all it can take language from lawyers and compliance teams and convert that into code that automatically generates policies about who can see the different types of data in a given data set. What’s more, it has machine learning underpinnings so it also learns about company rules and preferences over time.
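To illustrate the general idea of turning compliance language into enforceable code, here is a minimal sketch. To be clear, the structure, names and transforms below are assumptions invented for this example — Cape Privacy has not published its policy format — but it shows how a plain-language rule like "analysts may see purchase amounts but not raw email addresses" could become a policy object applied to data automatically.

```python
# Hypothetical policy-as-code sketch (not Cape Privacy's actual format):
# each rule says which roles may see a column and what transform, if
# any, must be applied before the value is released.
from dataclasses import dataclass, field

@dataclass
class ColumnRule:
    column: str
    allowed_roles: set = field(default_factory=set)
    transform: str = "none"   # e.g. "none" or "hash"

POLICY = [
    ColumnRule("purchase_amount", {"analyst", "admin"}),
    ColumnRule("email", {"admin"}, transform="hash"),
]

def apply_policy(row: dict, role: str) -> dict:
    """Return only the fields the given role may see, transformed as required."""
    out = {}
    for rule in POLICY:
        if role in rule.allowed_roles and rule.column in row:
            value = row[rule.column]
            if rule.transform == "hash":
                value = f"h{hash(value) & 0xFFFF:04x}"  # stand-in for a real hash
            out[rule.column] = value
    return out

# An analyst sees the amount but the email column is withheld entirely.
print(apply_policy({"email": "a@b.com", "purchase_amount": 42}, "analyst"))
```

The machine-learning layer the company describes would sit on top of something like this, suggesting or refining rules as it observes how the organization actually handles data.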
It also has a cryptographic privacy component. By wrapping the data with a cryptographic cypher, it lets teams share sensitive data in a safe way without exposing the data to people who shouldn’t be seeing it because of legal or regulatory compliance reasons.
“You can send something to a competitor as an example that’s encrypted, and they’re able to process that encrypted data without decrypting it, so they can train their model on encrypted data,” company co-founder and CTO Gavin Uhma explained.
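The idea of computing on data without decrypting it can be demonstrated with a from-scratch toy. This is not Cape Privacy's actual protocol (the article does not disclose it); it is a minimal additive secret-sharing sketch showing the underlying principle Uhma describes: each party holds only a random-looking share, yet the aggregate can still be computed correctly.

```python
# Toy additive secret sharing: split each sensitive value into two
# random shares that sum to it modulo a large prime. Each "server"
# sees only uniformly random numbers, yet summing the shares locally
# and recombining yields the true total.
import random

P = 2**61 - 1  # field modulus

def share(value: int) -> tuple:
    """Split a value into two random-looking shares that sum to it mod P."""
    r = random.randrange(P)
    return r, (value - r) % P

# Data owner splits each salary; each server receives only one share.
salaries = [70_000, 85_000, 120_000]
shares = [share(s) for s in salaries]

server_a = [a for a, _ in shares]   # individually uniformly random
server_b = [b for _, b in shares]

# Each server sums its shares locally, never seeing any salary.
sum_a = sum(server_a) % P
sum_b = sum(server_b) % P

# Only the recombined result reveals the aggregate, not the inputs.
total = (sum_a + sum_b) % P
print(total)  # 275000
```

Real systems for training models on encrypted data use far heavier machinery (secure multi-party computation or homomorphic encryption), but the privacy property is the same: intermediate parties process data they cannot read.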
The company closed the new round in April, which means they were raising in the middle of a pandemic, but it didn’t hurt that they had built the product already and were ready to go to market, and that Uhma and his co-founders had already built a successful startup, GoInstant, which was acquired by Salesforce in 2012. (It’s worth noting that GoInstant debuted at TechCrunch Disrupt in 2011.)
Uhma and his team brought Wijesinghe on board to build the sales and marketing team because, as a technical team, they wanted someone with go-to-market experience running the company, so they could concentrate on building product.
The company has 14 employees and is already an all-remote team, so it didn’t have to adjust at all when the pandemic hit. While it plans to keep hiring fairly limited for the foreseeable future, the company has had a diversity and inclusion plan from the start.
“You have to be intentional about seeking diversity, so it’s something that when we sit down and map out our hiring and work with recruiters in terms of our pipeline, we really make sure that diversity is one of our objectives. You have to have it as a goal, as part of your culture, and it’s something that when we see the picture of the team, we want to see diversity,” he said.
Wijesinghe adds, “As a person of color myself, I’m very sensitive to making sure that we have a very diverse team, not just from a color perspective, but a gender perspective as well.”
The company is gearing up to sell the product and has paid pilots starting in the coming weeks.