
Creating a prediction machine for the financial markets

By Walter Thompson
Fayze Bouaouid Contributor
Fayze Bouaouid is co-founder and CEO of financial intelligence application Springbox AI. He has a Master of Science in Banking and Finance and nearly two decades of experience in banking and asset management.

Artificial intelligence and machine-learning technologies have evolved a lot over the past decade and have been useful to many people and businesses, especially in the realm of finance, banking, investment and trading.

In these industries, there are many activities that machines can perform better and faster than humans, such as calculations and financial reporting, as long as the machines are given the complete data.

The AI tools being built today are becoming markedly more robust in their ability to predict trends, provide complex analysis and execute automations faster and cheaper than humans. However, no AI-powered machine has yet been built that can trade entirely on its own.

Even if it were possible to train a system that could replace human judgment, there would still be a margin of error, as well as some things that are only understandable by human beings. Humans are still ultimately responsible for the design of AI-based prediction machines, and progress can only happen with their input.

Data is the backbone of any prediction machine

Building an AI-based prediction machine initially requires an understanding of the problem being solved and the requirements of the user. After that, it’s important to select the machine-learning technique that will be implemented, based on what the machine will do.

There are three techniques: supervised learning (learning from examples), unsupervised learning (learning to identify common patterns), and reinforcement learning (learning based on the concept of gamification).

After the technique is identified, it’s time to implement a machine-learning model. For “time series forecasting” — which involves making predictions about the future — long short-term memory (LSTM) with sequence to sequence (Seq2Seq) models can be used.

LSTM networks are especially suited to making predictions based on a series of data points indexed in time order. Even simple convolutional neural networks, applicable to image and video recognition, or recurrent neural networks, applicable to handwriting and speech recognition, can be used.
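As a concrete illustration of the time-series forecasting setup described above, the sketch below frames a price series as (input sequence, target) pairs, the supervised form an LSTM or Seq2Seq model consumes. The window sizes and toy prices are invented for illustration; a real model would then be trained on these pairs.

```python
def make_windows(series, n_in, n_out):
    """Slide a window over the series, yielding (input, target) pairs."""
    pairs = []
    for i in range(len(series) - n_in - n_out + 1):
        pairs.append((series[i:i + n_in], series[i + n_in:i + n_in + n_out]))
    return pairs

# Toy daily closing prices (invented).
prices = [101.0, 102.5, 101.8, 103.2, 104.0, 103.5, 105.1]

# Predict the next price from the previous three.
pairs = make_windows(prices, n_in=3, n_out=1)
```

Each pair, e.g. `([101.0, 102.5, 101.8], [103.2])`, is one training example: the model reads the left-hand sequence and learns to emit the right-hand one.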

Spectral raises $6.2M for its DevSecOps service

By Frederic Lardinois

Tel Aviv-based Spectral is bringing its new DevSecOps code scanner out of stealth today and announcing a $6.2 million funding round. The startup’s programming language-agnostic service aims to automate code security for development teams, helping them detect potential security issues in their codebases and logs. Those issues could be hardcoded API keys and other credentials, but also security misconfigurations and shadow IT assets.

The four-person founding team has a deep background in building AI, monitoring and security tools. CEO Dotan Nahum was a Chief Architect at Klarna and Conduit (now Como, though you may remember Conduit from its infamous toolbar that was later spun off), and the CTO at Como and HiredScore, for example. Other founders worked on building monitoring tools at Elastic and HP and on security at Akamai. As Nahum told me, the idea for Spectral came to him and co-founder and COO Idan Didi during their shared time at mobile application builder Conduit/Como.

“We basically stored certificates for every client that we had, so we could submit their apps to the various marketplaces,” Nahum told me of his experience at Conduit/Como. “That certificate really proves that you are who you are and it’s super sensitive. And at each point at these companies, I really didn’t have the right tools to actually make sure that we’re storing, handling, detecting [this information] and making sure that it doesn’t leak anywhere.”

Nahum decided to quit his job and started building a prototype to see whether a tool could solve this problem (and his work on this prototype quickly uncovered an issue at Slack). And as enterprises move from on-premises software to the cloud and to microservices and DevOps, the need for better DevSecOps tools is only increasing.

“The emphasis is to create a great developer experience,” Nahum noted. “Because that’s where we started from. We didn’t start as a top down cyber tool. We started as a modest DevOps friendly, developer-friendly tool.”

One interesting aspect of Spectral’s approach, which uses a machine learning model to detect these breaches across programming languages, is that it also scans public-facing systems. On the backend, Spectral integrates with tools like Travis, Jenkins, CircleCI, Webpack, Gatsby and Netlify, but it can also monitor Slack, npm, maven and log providers — tools that most companies don’t really think about when they think about threat modeling.
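Spectral hasn’t published how its machine-learning detection works, but the simplest form of the secret scanning described above can be sketched with pattern matching. The two patterns below (an AWS-style access key and a generic hardcoded API key) are common illustrative examples, not Spectral’s actual rules:

```python
import re

# Toy secret scanner. Spectral's real engine is ML-based and
# language-agnostic; this only shows the crudest regex form of the idea.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def scan(text):
    """Return (line number, rule name) for every match in the text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

# Invented code snippet with a hardcoded credential on line 2.
sample = 'config = {"key": "x"}\napi_key = "abcd1234abcd1234abcd"\n'
```

A real scanner would add entropy checks and many more rules, which is part of why naive regexes alone produce too many false positives for production use.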

“Our solution prevents security breaches on a daily basis,” said Spectral co-founder and COO Idan Didi. “The pain points we’re addressing resonate strongly across every company developing software, because as they evolve from own-code to glue-code to no-code approaches they allow their developers to gain more speed, but they also add on significant amounts of risk. Spectral lets developers be more productive while keeping the company secure.”

The company was founded in mid-2020, but it already has about 15 employees and counts a number of large publicly listed companies among its customers.

TigerGraph raises $105M Series C for its enterprise graph database

By Frederic Lardinois

TigerGraph, a well-funded enterprise startup that provides a graph database and analytics platform, today announced that it has raised a $105 million Series C funding round. The round was led by Tiger Global and brings the company’s total funding to over $170 million.

“TigerGraph is leading the paradigm shift in connecting and analyzing data via scalable and native graph technology with pre-connected entities versus the traditional way of joining large tables with rows and columns,” said TigerGraph founder and CEO Yu Xu. “This funding will allow us to expand our offering and bring it to many more markets, enabling more customers to realize the benefits of graph analytics and AI.”

Current TigerGraph customers include the likes of Amgen, Citrix, Intuit, Jaguar Land Rover and UnitedHealth Group. Using a SQL-like query language (GSQL), these customers can use the company’s services to store and quickly query their graph databases. At the core of its offerings is the TigerGraphDB database and analytics platform, but the company also offers a hosted service, TigerGraph Cloud, with pay-as-you-go pricing, hosted either on AWS or Azure. With GraphStudio, the company also offers a graphical UI for creating data models and visually analyzing them.

The promise for the company’s database services is that they can scale to tens of terabytes of data with billions of edges. Its customers use the technology for a wide variety of use cases, including fraud detection, customer 360, IoT, AI, and machine learning.
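To illustrate the “pre-connected entities” point from Xu’s quote, here is a minimal sketch of why graph storage suits multi-hop queries such as fraud detection: neighbors are stored directly, so “everything within k hops” is a walk rather than a chain of table joins. The graph and entity names are made up, and this plain-Python adjacency list merely stands in for a real graph database:

```python
from collections import deque

# Invented toy graph: accounts linked through shared devices and cards,
# the typical shape of a fraud-ring query.
edges = {
    "acct:alice": ["device:phone1"],
    "device:phone1": ["acct:alice", "acct:bob"],
    "acct:bob": ["device:phone1", "card:9001"],
    "card:9001": ["acct:bob", "acct:carol"],
    "acct:carol": ["card:9001"],
}

def within_hops(start, k):
    """Breadth-first walk: every entity reachable in at most k hops."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # don't expand past the hop limit
        for nb in edges.get(node, []):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return seen
```

In a relational store, each extra hop is another self-join; here it is just one more step of the walk, which is the scaling property graph databases sell.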

Like so many other companies in this space, TigerGraph is enjoying a tailwind from the fact that many enterprises have accelerated their digital transformation projects during the pandemic.

“Over the last 12 months with the COVID-19 pandemic, companies have embraced digital transformation at a faster pace driving an urgent need to find new insights about their customers, products, services, and suppliers,” the company explains in today’s announcement. “Graph technology connects these domains from the relational databases, offering the opportunity to shrink development cycles for data preparation, improve data quality, identify new insights such as similarity patterns to deliver the next best action recommendation.”

Dixa acquires Elevio, the ‘knowledge management’ platform helping brands improve customer support

By Steve O'Hear

Dixa, the Danish customer support platform promising more personalised customer support, has acquired Melbourne-based “knowledge management” SaaS Elevio to bolster its product and technology offerings.

The deal is said to be worth around $15 million, in a combination of cash and Dixa shares. This sees Elevio’s own VC investors exit, and Elevio’s founders and employees incentivised as part of the Dixa family, according to Dixa co-founder and CEO, Mads Fosselius.

“We have looked at many partners within this space over the years and ultimately decided to partner with Elevio as they have what we believe is the best solution in the market,” he tells me. “Dixa and Elevio have worked together since 2019 on several customers and great brands through a strong and tight integration between the two platforms. Dixa has also used Elevio’s products internally and to support our own customers for self service, knowledge base and help center”.

Fosselius says that this “close partnership, strong integration, unique tech” and a growing number of mutual customers eventually led to a discussion late last year, and the two companies decided to go on a journey together to “disrupt the world of customer service”.

“The acquisition comes with many interesting opportunities but it has been driven by a product/tech focus and is highly product and platform strategic for us,” he explains. “We long ago acknowledged that they have the best knowledge product in the market. We could have built our own knowledge management system but with such a strong product already out there, built with a similar tech stack as ours and with a very aligned vision and culture fit to Dixa, we felt this was a no brainer”.

Founded in 2015 by Jacob Vous Petersen and Mads Fosselius, Dixa wants to end bad customer service with the help of technology that claims to be able to facilitate more personalised customer support. Originally dubbed a “customer friendship” platform, the Dixa cloud-based software works across multiple channels — including phone, chat, e-mail, Facebook Messenger, WhatsApp and SMS — and employs a smart routing system so the right support requests reach the right people within an organisation.

Broadly speaking, the platform competes with Zendesk, Freshdesk and Salesforce. However, there’s also overlap with Intercom in relation to live chat and messaging, and perhaps MessageBird with its attempted expansion to become an “Omnichannel Platform-as-a-Service” (OPaaS) to easily enable companies to communicate with customers on any channel of their choosing.

Meanwhile, Elevio is described as bridging the gap between customer support and knowledge management. The platform helps support agents more easily access the right answers when communicating with customers, and simultaneously enables end-users to get information and guidance to resolve common issues for themselves.

Machine learning is employed so that the correct support content is provided based on a user’s query or on-going discussion, whilst also alerting customer support teams when documents need updating. The Australian company also claims that creating user guides using Elevio doesn’t require any technical skills and says its “embeddable assistant” enables support content to be delivered in-product or injected into any area of a website “without involving developers”.
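Elevio’s matching is machine-learning based; as a minimal baseline for the same task of pointing a user’s query at the most relevant support article, a plain word-overlap ranker can be sketched. The articles and query below are invented:

```python
# Toy knowledge base: article id -> description (invented).
articles = {
    "reset-password": "how to reset your account password",
    "billing-cycle": "understanding your billing cycle and invoices",
    "export-data": "export your data as csv",
}

def best_article(query):
    """Return the article id whose text shares the most words with the query."""
    q = set(query.lower().split())
    def overlap(item):
        return len(q & set(item[1].split()))
    return max(articles.items(), key=overlap)[0]
```

A production system would use embeddings and learn from which suggestions actually resolved tickets, but the word-overlap version shows the shape of the problem.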

Adds the Dixa CEO: “Customer support agents still spend a lot of time helping customers with the same type of questions over and over again. Together with Elevio we are able to ensure that agents are given the opportunity to quickly replicate best practice answers, ensuring fast, standardised and correct answers for customers. Elevio is the world leader in applying machine learning to solve this problem”.

Krisp nearly triples fundraise with $9M expansion after blockbuster 2020

By Devin Coldewey

Krisp, a startup that uses machine learning to remove background noise from audio in real time, has raised $9M as an extension of its $5M A round announced last summer. The extra money followed big traction in 2020 for the Armenian company, which grew its customers and revenue by more than an order of magnitude.

TechCrunch first covered Krisp when it was just emerging from UC Berkeley’s Skydeck accelerator, and founder Davit Baghdasaryan was relatively freshly out of his previous role at Twilio. The company’s pitch when I chatted with them in the shared office back then was simple, and remains the core of what they offer: isolation of the human voice from any background noise (including other voices) so that audio contains only the former.

It probably comes as no surprise, then, that the company appears to have benefited immensely from the shift to virtual meetings and other trends accelerated by the pandemic. To be specific, Baghdasaryan told me that 2020 brought the company a 20x increase in active users, a 23x increase in enterprise accounts and a 13x increase in annual recurring revenue.

The rise in virtual meetings — often in noisy places like, you know, homes — has led to significant uptake across multiple industries. Krisp now has more than 1,200 enterprise customers, Baghdasaryan said: banks, HR platforms, law firms, call centers — anyone who benefits from having a clear voice on the line (“I guess any company qualifies,” he added). Enterprise-oriented controls like provisioning and central administration have been added to make it easier to integrate.

B2B revenue recently eclipsed B2C; the latter was likely popularized by Krisp’s inclusion as an option in Discord, the popular chat app for gaming (and increasingly beyond), though of course users of a free app being given a bonus product for free aren’t always big converters to “pro” tiers of a product.

But the company hasn’t been standing still, either. While it began with a simple feature set (turning background noise on and off, basically) Krisp has made many upgrades to both its product and infrastructure.

Noise cancellation for high-fidelity voice channels makes the software useful for podcasters and streamers, and acoustic correction (removing room echoes) simplifies those setups quite a bit as well. Considering the number of people doing this and the fact that they’re often willing to pay, this could be a significant source of income.
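Krisp’s voice isolation relies on deep learning models that this article doesn’t detail. For contrast, the crudest classical approach to noise suppression is an energy gate that silences any audio frame quieter than a threshold; the frame size, threshold and toy signal below are all invented for illustration:

```python
import math

def noise_gate(samples, frame=4, threshold=0.1):
    """Zero out any frame whose RMS energy falls below the threshold."""
    out = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        out.extend(chunk if rms >= threshold else [0.0] * len(chunk))
    return out

# Invented signal: four loud "speech" samples, then four quiet "noise" samples.
signal = [0.5, -0.4, 0.6, -0.5, 0.01, -0.02, 0.01, 0.0]
```

A gate like this cuts silence but cannot separate a voice from other sounds happening at the same time, which is exactly the gap ML-based isolation like Krisp’s fills.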

The company plans to add cross-service call recording and tracking; since it sits between the system’s sound drivers and the application, Krisp can easily save the audio and other useful metadata (How often did person A talk vs person B? What office locations are noisiest?). And the addition of voice cancellation — other people’s voices, that is — could be a huge benefit for people who work, or anticipate returning to work, in crowded offices and call centers.

Part of Krisp’s allure is the ability to run locally and securely on many platforms with very low overhead. But companies with machine learning based products can stagnate quickly if they don’t improve their infrastructure or build more efficient training flows — Lengoo, for instance, is taking on giants in the translation industry with better training as more or less its main advantage.

Krisp has been optimizing and re-optimizing its algorithms to run efficiently on both Intel and ARM architectures, and decided to roll its own servers instead of renting from the usual suspects.

“AWS, Azure and Google Cloud turned out to be too expensive,” Baghdasaryan said. “We have invested in building a datacenter with Nvidia’s latest A100s in them. This will make our experimentation faster, which is crucial for ML companies.”

Baghdasaryan was also emphatic in his satisfaction with the team in Armenia, where he’s from and where the company has focused its hiring, including the 25-strong R&D crew. “By the end of 2021 it will be a 45 member team, all in Armenia,” he said. “We are super happy with the math, physics and engineering talent pool there.”

The funding amounts to $14M if you combine the two disparate parts of the A round, the latter of which was agreed to just three months after the first. That’s a lot of money, of course, but may seem relatively modest for a company with a thousand enterprise customers and revenue growing by more than 2,000 percent year-over-year.

Baghdasaryan said they just weren’t ready to take on a whole B round, with all that involves. They do plan a new fundraise later this year when they’ve reached $15M ARR, a goal that seems perfectly reasonable given their current charts.

Of course startups with this kind of growth tend to get snapped up by larger concerns, but despite a few offers Baghdasaryan says he’s in it for the long haul — and a multi-billion dollar market.

The rush to embrace the new virtual work economy may have spurred Krisp’s growth spurt, but it’s clear that neither the company nor the environment that let it thrive are going anywhere.

Notable Health seeks to improve COVID-19 vaccine administration through intelligent automation

By Sophie Burkholder

Efficient and cost-effective vaccine distribution remains one of the biggest challenges of 2021, so it’s no surprise that startup Notable Health wants to use their automation platform to help. Launched in 2017 to help address the nearly $250 billion in annual administrative costs in healthcare, Notable Health uses automation to replace time-consuming, repetitive tasks in health industry admin. In early January of this year, the company announced plans to use that technology to help manage vaccine distribution.

“As a physician, I saw firsthand that with any patient encounter, there are 90 steps or touchpoints that need to occur,” said Notable Health medical director Muthu Alagappan in an interview. “It’s our hypothesis that the vast majority of those points can be automated.”

Notable Health’s core technology is a platform that uses robotic process automation (RPA), natural language processing (NLP), and machine learning to find eligible patients for the COVID-19 vaccine. Combined with data provided by hospital systems’ electronic health records, the platform helps those qualified to receive the vaccine set up appointments and guides them to other relevant educational resources.
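Notable Health’s platform combines RPA, NLP and machine learning; the eligibility-identification step it describes can be sketched, under invented rules and record fields, as a filter over EHR-style patient records:

```python
# Invented EHR-style records; real eligibility rules and fields
# would come from health-system policy, not these toy values.
patients = [
    {"name": "P1", "age": 72, "conditions": []},
    {"name": "P2", "age": 35, "conditions": ["diabetes"]},
    {"name": "P3", "age": 29, "conditions": []},
]

def eligible(patient, min_age=65, priority_conditions=("diabetes", "copd")):
    """Qualify by age or by having a priority condition."""
    return patient["age"] >= min_age or any(
        c in priority_conditions for c in patient["conditions"]
    )

cohort = [p["name"] for p in patients if eligible(p)]
```

The identified cohort would then feed the outreach and scheduling automation the article describes, e.g. the text-message links sent to patients.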

“By leveraging intelligent automation to identify, outreach, educate and triage patients, health systems can develop efficient and equitable vaccine distribution workflows,” said Notable Health strategic advisor and Biden Transition COVID-19 Advisory Board Member Dr. Ezekiel Emanuel, in a press release.

Making vaccine appointments has been especially difficult for older Americans, many of whom have reportedly struggled with navigating scheduling websites. Alagappan sees that as a design problem. “Technology often gets a bad reputation, because it’s hampered by the many bad technology experiences that are out there,” he said.

Instead, he thinks Notable Health has kept the user in mind through a more simplified approach, asking users only for basic and easy-to-remember information through a text message link. “It’s that emphasis on user-centric design that I think has allowed us to still have really good engagement rates even with older populations,” he said.

While the startup’s platform will likely help hospitals and health systems develop a more efficient approach to vaccinations, its use of RPA and NLP holds promise for future optimization in healthcare. Leaders of similar technology in other industries have already gone on to have multi-billion dollar valuations, and continue to attract investors’ interest.

Artificial intelligence is expected to grow in healthcare over the next several years, but Alagappan argues that combining that with other, more readily available intelligent technologies is also an important step towards improved care. “When we say intelligent automation, we’re really referring to the marriage of two concepts: artificial intelligence—which is knowing what to do—and robotic process automation—which is knowing how to do it,” he said. That dual approach is what he says allows Notable Health to bypass administrative bottlenecks in healthcare, instructing bots to carry out those tasks in an efficient and adaptable way.

So far, Notable Health has worked with several hospital systems across multiple states on vaccine distribution and scheduling, and is now using the platform to reach out to tens of thousands of patients per day.

LA-based Metropolis raises $41 million to upgrade parking infrastructure

By Jonathan Shieber

Metropolis is a new Los Angeles-based startup that’s looking to compete with BMW-owned ParkMobile for a slice of the automated parking lot management market.

Upgrading ParkMobile’s license plate-based service with a computer vision-based system that recognizes cars as they enter and leave garages has been Metropolis’ mission since founder and chief executive Alex Israel first formed the business back in 2017.

Israel, a serial entrepreneur, has spent decades thinking about parking. His last company, ParkMe, was sold to Inrix back in 2015. And it was with those earnings and experience that Israel went back to the drawing board to develop a new kind of parking payment and management service.

Now, the company is ready for its closeup, announcing not only its launch, but $41 million in financing raised from investors including the real estate managers Starwood and RXR Realty; Dick Costolo’s 01 Advisors; Dragoneer; former Facebook employees Sam Lessin and Kevin Colleran’s Slow Ventures; Dan Doctoroff, the head of Alphabet’s Sidewalk Labs initiative; and NBA All-Star and early-stage investor Baron Davis.

According to Alex Israel, the parking payment application is the foundation for a bigger business empire that hopes to reimagine parking spaces as hubs for a broad array of urban mobility services.

In this, the company’s goals aren’t dissimilar from the Florida-based startup, REEF, which has its own spin on what to do with the existing infrastructure and footprint created by urban parking spaces. And REEF’s $700 million round of funding from last year shows there’s a lot of money to be made — or at least spent — in a parking lot.

Unlike REEF, Metropolis will remain focused on mobility, according to Israel. “How does parking change over the next 20 years as mobility shifts?” he asked. And he’s hoping that Metropolis will provide an answer. 

The company is hoping to use its latest funding to expand its footprint to over 600 locations over the course of the next year. In all, Metropolis has raised $60 million since it was formed back in 2017.

While the computer vision and machine learning technology will serve as the company’s beachhead into parking lots, services like cleaning, charging, storage and logistics could all be part and parcel of the Metropolis offering going forward, Israel said. “We become the integrator [and] we also in some cases become the direct service provider,” Israel said.

The company already has 10,000 parking spots that it’s managing for big real estate owners, and Israel expects more property managers to flood to its service.

“[Big property owners] are not thinking about the infrastructure requirements that allow for the seamless access to these facilities,” Israel said. His technology can allow buildings to capture more value through other services like dynamic pricing and yield optimization as well.
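Metropolis hasn’t published its pricing model, but the dynamic pricing and yield optimization Israel mentions can be sketched as an occupancy-based rate table. The base rate, occupancy bands and multipliers below are invented:

```python
def hourly_rate(occupied, capacity, base=3.0):
    """Toy dynamic pricing: raise the hourly rate as the garage fills up.

    The bands and multipliers are illustrative, not Metropolis' model.
    """
    utilization = occupied / capacity
    if utilization < 0.5:
        return base
    if utilization < 0.85:
        return round(base * 1.5, 2)
    return round(base * 2.5, 2)
```

With computer vision providing live entry and exit counts, a garage operator could reprice continuously instead of posting one flat daily rate.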

“Metropolis is finding the highest and best use whether that be scooter charging, scooter storage, fleet storage, fleet logistics, or sorting,” Israel said.  

ETH spin-off LatticeFlow raises $2.8M to help build trustworthy AI systems

By Frederic Lardinois

LatticeFlow, an AI startup that was spun out of ETH Zurich in 2020, today announced that it has raised a $2.8 million seed funding round led by Swiss deep-tech fund btov and Global Founders Capital, which previously backed the likes of Revolut, Slack and Zalando.

The general idea behind LatticeFlow is to build tools that help AI teams build and deploy AI models that are safe, reliable and trustworthy. The problem today, the team argues, is that models get very good at finding the right statistical patterns to hit a given benchmark. That makes them inflexible, though, since these models were optimized for accuracy in a lab setting, not for robustness in the real world.

“One of the most commonly used paradigms for evaluating machine learning models is just aggregate metrics, like accuracy. And, of course, this is a super coarse representation of how good a model really is,” Pavol Bielik, the company’s CTO, explained. “What we want to do is, we provide systematic ways of monitoring models, assessing their reliability across different relevant data slices and then also provide tools for improving these models.”
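Bielik’s point about aggregate metrics can be made concrete: overall accuracy can look acceptable while one data slice fails badly. The toy predictions, labels and slice assignments below are invented:

```python
from collections import defaultdict

# Invented model outputs: "day" images are easy, "night" images are not.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 1, 1, 1, 1, 1]
slices = ["day", "day", "day", "day", "night", "night", "night", "night"]

def accuracy(pairs):
    pairs = list(pairs)
    return sum(p == l for p, l in pairs) / len(pairs)

overall = accuracy(zip(preds, labels))

by_slice = defaultdict(list)
for p, l, s in zip(preds, labels, slices):
    by_slice[s].append((p, l))
per_slice = {s: accuracy(v) for s, v in by_slice.items()}
```

Here the aggregate number hides a model that is perfect by day and nearly useless at night, which is exactly the kind of blind spot per-slice monitoring is meant to expose.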

Building these kinds of models that are more flexible yet still provide robust results will take a new arsenal of tools, though, as well as the right team with deep expertise in these areas. Clearly, though, this is a founding team with the right background. In addition to CTO Bielik, the founding team includes Petar Tsankov, the company’s CEO and former senior researcher and lecturer at ETH Zurich, as well as ETH professors Martin Vechev, who leads the Secure, Reliable and Intelligent Systems lab at ETH, and Andreas Krause, who leads ETH’s Learning & Adaptive Systems lab. Tsankov’s last startup, DeepCode, was acquired by cybersecurity firm Snyk in 2020.

It’s also worth noting that Vechev, who previously co-founded ETH spin-off ChainSecurity, and his group at ETH previously developed ERAN, a verifier for large deep learning models with millions of parameters, which last year won the first competition for certifying deep neural networks. While the team was already looking at creating a company before winning this competition, Vechev noted that the win gave the team confirmation that it was on the right path.

“We want to solve the main AI problem, which is making AI usable. This is the overarching goal,” Vechev told me. “[…] I don’t think you can actually found the company just purely based on the certification work. I think the kinds of skills that people have in the company, my group, Andreas [Krause]’s group, they all complement each other and cover a huge space, which I think is very, very unique. I don’t know of other companies who have covered this range of skills in these pressing points and have done groundbreaking work before.”

LatticeFlow already has a set of pilot customers who are trialing its tools. These include the Swiss Federal Railways (SBB), which is using it to build a tool for automatic rail inspections, Germany’s Federal Cyber Security Bureau and the U.S. Army. The team is also working with other large enterprises that are using its tools to improve their computer vision models.

“Machine Learning (ML) is one of the core topics at SBB, as we see a huge potential in its application for an improved, intelligent and automated monitoring of our railway infrastructure,” said Dr. Ilir Fetai and Andre Roger, the leads of SBB’s AI team. “The project on robust and reliable AI with LatticeFlow, ETH, and Siemens has a crucial role in enabling us to fully exploit the advantages of using ML.”

For now, LatticeFlow remains in early access. The team plans to use the funding to accelerate its product development and bring on new customers. The team also plans to build out a presence in the U.S. in the near future.

K Health expands into virtual childcare and raises $132 million at a $1.5 billion valuation

By Jonathan Shieber

K Health, the virtual healthcare provider that uses machine learning to lower the cost of care by handling the bulk of the company’s health assessments, is launching new tools for childcare on the heels of raising cash that values the company at $1.5 billion.

The $132 million round raised in December will help the company expand and help pay for upgrades including an integration with most electronic health records — an integration that’s expected by the second quarter.

Throughout 2020, K Health leveraged its position at the intersection of machine learning and consumer healthcare to raise $222 million in a single year.

This appetite from investors shows how large the opportunity is in consumer healthcare as companies look to use technology to make care more affordable.

For K Health, that means a monthly subscription to its service of $9 for unlimited access to the service and physicians on the platform, as well as a $19 per-month virtual mental health offering and a $19 fee for a one-time urgent care consultation.

The pitch to patients and investors is that the data K Health has acquired through partnerships with organizations like the Israeli health maintenance organization Maccabi Healthcare Services, which gave up decades of anonymized data on patients and health outcomes to train K Health’s predictive algorithm, can assess patients and aid in diagnoses for the company’s doctors.

In theory that means the company’s service essentially acts as a virtual primary care physician, holding a wealth of patient information that, when taken together, might be able to spot underlying medical conditions faster or provide a more holistic view into patient care.

For pharmaceutical companies that could mean insights into population health that could be potentially profitable avenues for drug discovery.

In practice, patients get what they pay for.

According to one provider on the platform, the company’s mental health offering uses medical doctors who are not licensed psychiatrists to perform evaluations and assessments, which can lead to interactions with untrained physicians that cause more harm than good.

While company chief executive Allon Bloch is likely correct in his assessment that most services can be performed remotely (Bloch puts the figure at 90%), they should be performed remotely by professionals who have the necessary training.

There are limits to how much heavy lifting an algorithm or a generalist should do when it comes to healthcare, and it appears that K Health wants to push those limits.

“Drug referrals, acute issues, prevention issues, most of those can be done remotely,” Bloch said. “There’s an opportunity to do much better and potentially cheaper.”

K Health has already seen hundreds of thousands of patients either through its urgent care offering or its subscription service and generated tens of millions in revenue in 2020, according to Bloch. He declined to disclose how many patients used the urgent care service vs. the monthly subscription offering.

Telemedicine companies, like other companies providing services remotely, have thrived during the pandemic. Teladoc and Amwell, two of the early pioneers in virtual medicine, have seen their share prices soar. Companies like Hims, which provide prescriptions for elective conditions that aren’t necessarily covered by health insurance, have gone public through special purpose acquisition companies at valuations of $1.6 billion.

Backing K Health are a group of investors led by GGV Capital and Valor Equity Partners. Kaiser Permanente’s pension fund and the investment offices of the owners of 3G Capital (the Brazilian investment firm that owns Burger King and Kraft Heinz), along with 14W, Max Ventures, Pico Partners, Marcy Venture Partners, Primary Venture Partners and BoxGroup, also participated in the round. 

Organizations working with the company include Maccabi Healthcare; the Mayo Clinic, which is investigating virtual care models with the company; and Anthem, which has white labeled the K Health service and provides it to some of the insurer’s millions of members.

Madrona promotes Anu Sharma and Daniel Li as Partners

By Lucas Matney

Fresh off the announcement of more than $500 million in new capital across two new funds, Seattle-based Madrona Venture Group has announced that they’re adding Anu Sharma and Daniel Li to the team’s list of Partners.

The firm, which in recent years has paid particularly close attention to enterprise software bets, invests heavily in the early-stage Pacific Northwest startup scene.

Both Li and Sharma are stepping into the Partner role after some time at the firm. Li has been with Madrona for five years while Sharma joined the team in 2020. Prior to joining Madrona, Sharma led product management teams at Amazon Web Services, worked as a software developer at Oracle and had a stint in VC as an associate at SoftBank China & India. Li previously worked at the Boston Consulting Group.

I got the chance to catch up with Li who notes that the promotion won’t necessarily mean a big shift in his day-to-day responsibilities — “At Madrona, you’re not promoted until you’re working in the next role anyway,” he says — but that he appreciates “how much trust the firm places in junior investors.”

Asked about leveling up his venture career during a time when public and private markets seem particularly flush with cash, Li acknowledges some looming challenges.

“On one hand, it’s just been an amazing five years to join venture capital because things have just been up and to the right with lots of things that work; it’s just a super exciting time,” Li says. “On the other hand, from a macro perspective, you know that there’s more capital flowing into VC as an asset class than ever before. And just from that pure macro perspective, you know that that means returns are going to be lower in the next 10 years as valuations are higher.”

Nevertheless, Li is plenty bullish on internet companies claiming larger swaths of global GDP. He hopes to invest specifically in “low code platforms, next-gen productivity, and online communities,” Madrona notes in its announcement, while Sharma plans to continue looking at “distributed systems, data infrastructure, machine learning, and security.”

TechCrunch recently talked to Li and his Madrona colleague Hope Cochran about some of the top trends in social gaming and how investors were approaching new opportunities across the gaming industry.

Facial recognition reveals political party in troubling new research

By Devin Coldewey

Researchers have created a machine learning system that they claim can determine a person’s political party, with reasonable accuracy, based only on their face. The study, from a group that also showed that sexual preference can seemingly be inferred this way, candidly addresses and carefully avoids the pitfalls of “modern phrenology,” leading to the uncomfortable conclusion that our appearance may express more personal information than we think.

The study, which appeared this week in the Nature journal Scientific Reports, was conducted by Stanford University’s Michal Kosinski. Kosinski made headlines in 2017 with work that found that a person’s sexual preference could be predicted from facial data.

The study drew criticism not so much for its methods but for the very idea that something that’s notionally non-physical could be detected this way. But Kosinski’s work, as he explained then and afterwards, was done specifically to challenge those assumptions and was as surprising and disturbing to him as it was to others. The idea was not to build a kind of AI gaydar — quite the opposite, in fact. As the team wrote at the time, it was necessary to publish in order to warn others that such a thing may be built by people whose interests went beyond the academic:

We were really disturbed by these results and spent much time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s well-being, but also for one’s safety.

We felt that there is an urgent need to make policymakers and LGBTQ communities aware of the risks that they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.

Similar warnings may be sounded here, for while political affiliation, at least in the U.S. (and at least at present), is not as sensitive or personal an element as sexual preference, it is still sensitive and personal. A week hardly passes without news of some political or religious “dissident” being arrested or killed. If oppressive regimes could obtain what passes for probable cause simply by saying “the algorithm flagged you as a possible extremist,” instead of, for example, intercepting messages, this sort of practice becomes that much easier and more scalable.

The algorithm itself is not some hyper-advanced technology. Kosinski’s paper describes a fairly ordinary process of feeding a machine learning system images of more than a million faces, collected from dating sites in the U.S., Canada and the U.K., as well as American Facebook users. The people whose faces were used identified as politically conservative or liberal as part of the site’s questionnaire.

The algorithm was based on open-source facial recognition software, and after basic processing to crop to just the face (that way no background items creep in as factors), the faces are reduced to 2,048 scores representing various features — as with other face recognition algorithms, these aren’t necessarily intuitive things like “eyebrow color” and “nose type” but more computer-native concepts.
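In outline, the process Kosinski describes is ordinary supervised learning on face descriptors. Here is a minimal sketch of that setup, with random vectors standing in for the 2,048 descriptor scores; the data and the plain logistic-regression classifier are illustrative stand-ins, not the study’s actual code or numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 2048  # the study reduces each face to 2,048 descriptor scores

# Synthetic stand-ins for face descriptors: the two classes differ
# slightly in mean along a few dimensions (illustrative only).
n = 500
shift = np.zeros(DIM)
shift[:20] = 0.5
cons = rng.normal(0.0, 1.0, (n, DIM)) + shift  # labeled 1
libs = rng.normal(0.0, 1.0, (n, DIM))          # labeled 0

X = np.vstack([cons, libs])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(DIM), 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * float(np.mean(p - y))

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = float(np.mean(pred == y))
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is the shape of the pipeline, not the classifier: once each face is a fixed-length vector and labels come from self-reported party, the rest is routine.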

Chart showing how faces are cropped and reduced to neural network representations.

Image Credits: Michal Kosinski / Nature Scientific Reports

The system was given political affiliation data sourced from the people themselves, and with this it diligently began to study the differences between the facial stats of people identifying as conservatives and those identifying as liberals. Because, as it turns out, there are differences.

Of course it’s not as simple as “conservatives have bushier eyebrows” or “liberals frown more.” Nor does it come down to demographics, which would make things too easy and simple. After all, if political party identification correlates with both age and skin color, that makes for a simple prediction algorithm right there. But although the software mechanisms used by Kosinski are quite standard, he was careful to cover his bases in order that this study, like the last one, can’t be dismissed as pseudoscience.

The most obvious way of addressing this is by having the system make guesses as to the political party of people of the same age, gender and ethnicity. The test involved being presented with two faces, one of each party, and guessing which was which. Obviously chance accuracy is 50%. Humans aren’t very good at this task, performing only slightly above chance, about 55% accurate.

The algorithm managed to reach as high as 71% accuracy when predicting political party between two like individuals, and 73% when presented with two individuals of any age, ethnicity or gender (but still guaranteed to be one conservative, one liberal).
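The pairwise task itself is easy to state in code: given a model score for each face in a (conservative, liberal) pair, guess that the higher-scoring face is the conservative. A sketch with invented score distributions, not the study’s data:

```python
import random

def pairwise_accuracy(cons_scores, lib_scores, trials=10_000, seed=0):
    """Estimate accuracy at the study's two-face task: pick which of a
    (conservative, liberal) pair the model scores as more conservative."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        c = rng.choice(cons_scores)
        l = rng.choice(lib_scores)
        if c > l:
            correct += 1
        elif c == l:
            correct += rng.random() < 0.5  # break ties with a coin flip
    return correct / trials

# Hypothetical score distributions: conservatives score a bit higher on
# average, so pairwise accuracy lands above the 50% chance baseline.
gen = random.Random(1)
cons = [gen.gauss(0.6, 0.15) for _ in range(1000)]
libs = [gen.gauss(0.4, 0.15) for _ in range(1000)]
print(round(pairwise_accuracy(cons, libs), 2))
```

This also illustrates why the two-face framing flatters the model a little: the task guarantees exactly one face from each party, so even a weakly informative score beats the 50% baseline.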

Image Credits: Michal Kosinski / Nature Scientific Reports

Getting three out of four may not seem like a triumph for modern AI, but considering people can barely do better than a coin flip, there seems to be something worth considering here. Kosinski has been careful to cover other bases as well; this doesn’t appear to be a statistical anomaly or exaggeration of an isolated result.

The idea that your political party may be written on your face is an unnerving one, for while one’s political leanings are far from the most private of info, it’s also something that is very reasonably thought of as being intangible. People may choose to express their political beliefs with a hat, pin or t-shirt, but one generally considers one’s face to be nonpartisan.

If you’re wondering which facial features in particular are revealing, unfortunately the system is unable to report that. In a sort of para-study, Kosinski isolated a couple dozen facial features (facial hair, directness of gaze, various emotions) and tested whether those were good predictors of politics, but none led to more than a small increase in accuracy over chance or human expertise.

“Head orientation and emotional expression stood out: Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust,” Kosinski wrote in author’s notes for the paper. But what they added left more than 10 percentage points of accuracy not accounted for: “That indicates that the facial recognition algorithm found many other features revealing political orientation.”

The knee-jerk defense of “this can’t be true — phrenology was snake oil” doesn’t hold much water here. It’s scary to think it’s true, but it doesn’t help us to deny what could be a very important truth, since it could be used against people very easily.

As with the sexual orientation research, the point here is not to create a perfect detector for this information, but to show that it can be done in order that people begin to consider the dangers that creates. If for example an oppressive theocratic regime wanted to crack down on either non-straight people or those with a certain political leaning, this sort of technology gives them a plausible technological method to do so “objectively.” And what’s more, it can be done with very little work or contact with the target, unlike digging through their social media history or analyzing their purchases (also very revealing).

We have already heard of China deploying facial recognition software to find members of the embattled Uyghur religious minority. And in our own country, this sort of AI is trusted by authorities as well — it’s not hard to imagine police using the “latest technology” to, for instance, classify faces at a protest, saying “these 10 were determined by the system as being the most liberal,” or what have you.

The idea that a couple of researchers using open-source software and a medium-sized database of faces (for a government, this is trivial to assemble in the unlikely event that they do not have one already) could do so anywhere in the world, for any purpose, is chilling.

“Don’t shoot the messenger,” said Kosinski. “In my work, I am warning against widely used facial recognition algorithms. Worryingly, those AI physiognomists are now being used to judge people’s intimate traits – scholars, policymakers, and citizens should take notice.”

Intuitive Machines taps SpaceX for second lunar lander mission

By Darrell Etherington

The first commercial lunar landers are set to start making their trips to the moon as early as this year, and now another one has a confirmed ride booked: Intuitive Machines is sending its second lander aboard a SpaceX Falcon 9, with a projected launch around 2022 at the earliest. Intuitive Machines has already booked a first lander mission via SpaceX, which is also hosting payloads for other private companies seeking to make lunar landfall under NASA’s Commercial Lunar Payload Services (CLPS) program.

Intuitive Machines’ Nova-C lander can carry up to 100 kg (around 220 lbs) of cargo to the moon’s surface, and can communicate back to Earth for transmitting the results of its missions. It has both internal and surface-mounting capacity, and will carry science experiments for a variety of customers to the lunar surface through NASA’s commercial partnership program, partly to support future NASA missions including its planned Artemis human moon landings.

The first Intuitive Machines lunar lander mission, which will also use a Nova-C lander, is set to take place sometime in the fourth quarter of 2021 based on current timelines. It’ll include a lunar imaging suite, which will seek to “capture some of the first images of the Milky Way Galaxy Center from the surface of the Moon,” and the second mission will include delivering a polar resource mining drill and a mass spectrometer to the moon’s south pole on behalf of NASA, in addition to other payloads.

Extra Crunch roundup: 2 VC surveys, Tesla’s melt up, The Roblox Gambit, more

By Walter Thompson

This has been quite a week.

Instead of walking backward through the last few days of chaos and uncertainty, here are three good things that happened:

  • Google employee Sara Robinson combined her interest in machine learning and baking to create AI-generated hybrid treats.
  • A breakthrough could make water desalination 30%-40% more effective.
  • Bianca Smith will become the first Black woman to coach a professional baseball team.

Despite many distractions in our first full week of the new year, we published a full slate of stories exploring different aspects of entrepreneurship, fundraising and investing.

We’ve already gotten feedback on this overview of subscription pricing models, and a look back at 2020 funding rounds and exits among Israel’s security startups was aimed at our new members who live and work there, along with international investors who are seeking new opportunities.

Plus, don’t miss our first investor surveys of 2021: one by Lucas Matney on social gaming, and another by Mike Butcher that gathered responses from Portugal-based investors on a wide variety of topics.

Thanks very much for reading Extra Crunch this week. I hope we can all look forward to a nice, boring weekend with no breaking news alerts.

Walter Thompson
Senior Editor, TechCrunch
@yourprotagonist


Full Extra Crunch articles are only available to members
Use discount code ECFriday to save 20% off a one- or two-year subscription


The Roblox Gambit

In February 2020, gaming platform Roblox was valued at $4 billion, but after announcing a $520 million Series H this week, it’s now worth $29.5 billion.

“Sure, you could argue that Roblox enjoyed an epic 2020, thanks in part to COVID-19,” writes Alex Wilhelm this morning. “That helped its valuation. But there’s a lot of space between $4 billion and $29.5 billion.”

Alex suggests that Roblox’s decision to delay its IPO and raise an enormous Series H was a grandmaster move that could influence how other unicorns will take themselves to market. “A big thanks to the gaming company for running this experiment for us.”

I asked him what inspired the headline; like most good ideas, it came to him while he was trying to get to sleep.

“I think that I had ‘The Queen’s Gambit’ somewhere in my head, so that formed the root of a little joke with myself. Roblox is making a strategic wager on method of going public. So, ‘gambit’ seems to fit!”

8 investors discuss social gaming’s biggest opportunities

girl playing games on desktop computer

Image Credits: Erik Von Weber (opens in a new window) / Getty Images

For our first investor survey of the year, Lucas Matney interviewed eight VCs who invest in massively multiplayer online games to discuss 2021 trends and opportunities:

  • Hope Cochran, Madrona Venture Group
  • Daniel Li, Madrona Venture Group
  • Niko Bonatsos, General Catalyst
  • Ethan Kurzweil, Bessemer Venture Partners
  • Sakib Dadi, Bessemer Venture Partners
  • Jacob Mullins, Shasta Ventures
  • Alice Lloyd George, Rogue
  • Gigi Levy-Weiss, NFX

Having moved far beyond shooters and sims, platforms like Twitch, Discord and Fortnite are “where culture is created,” said Daniel Li of Madrona.

Rep. Alexandria Ocasio-Cortez uses Twitch to explain policy positions, major musicians regularly perform in-game concerts on Fortnite and in-game purchases generated tens of billions last year.

“Gaming is a unique combination of science and art, left and right brain,” said Gigi Levy-Weiss of NFX. “It’s never just science (i.e., software and data), which is why many investors find it hard.”

How to convert customers with subscription pricing

Giant hand and magnet picking up office and workers

Image Credits: C.J. Burton (opens in a new window) / Getty Images

Startups that lack insight into their sales funnel have high churn, low conversion rates and an inability to adapt or leverage changes in customer behavior.

If you’re hoping to convert and retain customers, “reinforcing your value proposition should play a big part in every level of your customer funnel,” says Joe Procopio, founder of Teaching Startup.

What is up with Tesla’s value?

Elon Musk, founder of SpaceX and chief executive officer of Tesla Inc., arrives at the Axel Springer Award ceremony in Berlin, Germany, on Tuesday, Dec. 1, 2020. Tesla Inc. will be added to the S&P 500 Index in one shot on Dec. 21, a move that will ripple through the entire market as money managers adjust their portfolios to make room for shares of the $538 billion company. Photographer: Liesa Johannssen-Koppitz/Bloomberg via Getty Images

Image Credits: Bloomberg (opens in a new window) / Getty Images

Alex Wilhelm followed up his regular Friday column with another story that tries to find a well-grounded rationale for Tesla’s sky-high valuation of approximately $822 billion.

Meanwhile, GM just unveiled a new logo and tagline.

As ever, I learned something new while editing: A “melt up” occurs when investors start clamoring for a particular company because of acute FOMO (the fear of missing out).

Delivering 500,000 cars in 2020 was “impressive,” says Alex, who also acknowledged the company’s ability to turn GAAP profits, but “pride cometh before the fall, as does a melt up, I think.”

Note: This story has Alex’s original headline, but I told him I would replace the featured image with a photo of someone who had very “richest man in the world” face.

How Segment redesigned its core systems to solve an existential scaling crisis

Abstract glowing grid and particles

Image Credits: piranka / Getty Images

On Tuesday, enterprise reporter Ron Miller covered a major engineering project at customer data platform Segment called “Centrifuge.”

“Its purpose was to move data through Segment’s data pipes to wherever customers needed it quickly and efficiently at the lowest operating cost,” but as Ron reports, it was also meant to solve “an existential crisis for the young business,” which needed a more resilient platform.

Dear Sophie: Banging my head against the wall understanding the US immigration system

Image Credits: Sophie Alcorn

Dear Sophie:

Now that the U.S. has a new president coming in whose policies are more welcoming to immigrants, I am considering coming to the U.S. to expand my company after COVID-19. However, I’m struggling with the morass of information online that has bits and pieces of visa types and processes.

Can you please share an overview of the U.S. immigration system and how it works so I can get the big picture and understand what I’m navigating?

— Resilient in Romania

The first “Dear Sophie” column of each month is available on TechCrunch without a paywall.

Revenue-based financing: The next step for private equity and early-stage investment

Shot of a group of people holding plants growing out of soil

Image Credits: Hiraman (opens in a new window) / Getty Images

For founders who aren’t interested in angel investment or seeking validation from a VC, revenue-based investing is growing in popularity.

To gain a deeper understanding of the U.S. RBI landscape, we published an industry report on Wednesday that studied data from 134 companies, 57 funds and 32 investment firms before breaking out “specific verticals and business models … and the typical profile of companies that access this form of capital.”

Lisbon’s startup scene rises as Portugal gears up to be a European tech tiger

Man using laptop at 25th of April Bridge in Lisbon, Portugal

Image Credits: Westend61 (opens in a new window)/ Getty Images

Mike Butcher continues his series of European investor surveys with his latest dispatch from Lisbon, where a nascent startup ecosystem may get a Brexit boost.

Here are the Portugal-based VCs he interviewed:

  • Cristina Fonseca, partner, Indico Capital Partners
  • Pedro Ribeiro Santos, partner, Armilar Venture Partners
  • Tocha, partner, Olisipo Way
  • Adão Oliveira, investment manager, Portugal Ventures
  • Alexandre Barbosa, partner, Faber
  • António Miguel, partner, Mustard Seed MAZE
  • Jaime Parodi Bardón, partner, impACT NOW Capital
  • Stephan Morais, partner, Indico Capital Partners
  • Gavin Goldblatt, managing partner, Portugal Gateway

How late-stage edtech companies are thinking about tutoring marketplaces

Life Rings flying out beneath storm clouds are a metaphor for rescue, help and aid.

Image Credits: John Lund (opens in a new window)/ Getty Images

How do you scale online tutoring, particularly when demand exceeds the supply of human instructors?

This month, Chegg is replacing its seven-year-old marketplace, which paired students with tutors, with a live chatbot.

A spokesperson said the move will “dramatically differentiate our offerings from our competitors and better service students,” but Natasha Mascarenhas identified two challenges to edtech automation.

“A chatbot won’t work for a student with special needs or someone who needs to be handheld a bit more,” she says. “Second, speed tutoring can only work for a specific set of subjects.”

Decrypted: How bad was the US Capitol breach for cybersecurity?

Image Credits: Treedeo (opens in a new window) / Getty Images

While I watched insurrectionists invade and vandalize the U.S. Capitol on live TV, I noticed that staffers evacuated so quickly, some hadn’t had time to shut down their computers.

Looters even made off with a laptop from Senator Jeff Merkley’s office, but according to security reporter Zack Whittaker, the damage to infosec wasn’t as bad as it looked.

Even so, “the breach will likely present a major task for Congress’ IT departments, which will have to figure out what’s been stolen and what security risks could still pose a threat to the Capitol’s network.”

Extra Crunch’s top 10 stories of 2020

On New Year’s Eve, I made a list of the 10 “best” Extra Crunch stories from the previous 12 months.

My methodology was personal: From hundreds of posts, these were the 10 I found most useful, which is my key metric for business journalism.

Some readers are skeptical about paywalls, but without being boastful, Extra Crunch is a premium product, just like Netflix or Disney+. I know, we’re not as entertaining as a historical drama about the reign of Queen Elizabeth II or a space western about a bounty hunter. But, speaking as someone who’s worked at several startups, Extra Crunch stories contain actionable information you can use to build a company and/or look smart in meetings — and that’s worth something.

SilviaTerra wants to bring the benefits of carbon offsets to every landowner everywhere

By Jonathan Shieber

Zack Parisa and Max Nova, the co-founders of the carbon offset company SilviaTerra, have spent the last decade working on a way to democratize access to revenue-generating carbon offsets.

As forestry credits become a big, booming business on the back of multi-billion dollar commitments from some of the world’s biggest companies to decarbonize their businesses, the kinds of technologies that the two founders have dedicated ten years of their lives to building are only going to become more valuable.

That’s why their company, already a profitable business, has raised $4.4 million in outside funding led by Union Square Ventures and Version One Ventures, along with Salesforce founder Marc Benioff, the driving force behind the 1 trillion trees initiative.

“Key to addressing the climate crisis is changing the balance in the so-called carbon cycle. At present, every year we are adding roughly 5 gigatons of carbon to the atmosphere. Since atmospheric carbon acts as a greenhouse gas this increases the energy that’s retained rather than radiated back into space which causes the earth to heat up,” writes Union Square Ventures managing partner Albert Wenger in a blog post. “There will be many ways such drawdown occurs and we will write about different approaches in the coming weeks (such as direct air capture and growing kelp in the oceans). One way that we understand well today and can act upon immediately are forests. The world’s forests today absorb a bit more than one gigaton of CO2 per year out of the atmosphere and turn it into biomass. We need to stop cutting and burning down existing forests (including preventing large scale forest fires) and we have to start planting more new trees. If we do that, the total potential for forests is around 4 to 5 gigatons per year (with some estimates as high as 9 gigatons).”

For the two founders, the new funding is the latest step in a long journey that began in the woods of Northern Alabama, where Parisa grew up.

After attending Mississippi State for forestry, Parisa went to graduate school at Yale, where he met Louisville, Kentucky native Max Nova, a computer science student who joined with Parisa to set up the company that would become SilviaTerra.

SilviaTerra co-founders Max Nova and Zack Parisa. Image Credit: SilviaTerra

The two men developed a way to combine satellite imagery with field measurements to determine the size and species of trees in every acre of forest.

While the first step was to create a map of every forest in the U.S., the ultimate goal for both men was to find a way to put a carbon market on equal footing with the timber industry. Instead of cutting trees for cash, landowners could potentially find out how much it would be worth to maintain their forestland. As the company notes, forest management has previously been driven by the economics of timber harvesting, with over $10 billion spent in the U.S. each year.

The founders at SilviaTerra thought that the carbon market could be equally as large, but it’s hard for most landowners to access. Carbon offset projects can cost as much as $200,000 to put together, which is more than the value of the smaller offset projects for landowners like Parisa’s own family and the 40 acres they own in the Alabama forests.

There had to be a better way for smaller landowners to benefit from carbon markets too, Parisa and Nova thought.

To create this carbon economy, there needed to be a single source of record for every tree in the U.S., and while SilviaTerra had the technology to make that map, it lacked the compute power, machine learning capabilities and resources to build it.

That’s where Microsoft’s AI for Earth program came in.

Working with AI for Earth, SilviaTerra created its first product, Basemap, to process terabytes of satellite imagery to determine the sizes and species of trees on every acre of America’s forestland. The company also worked with the U.S. Forest Service to access its data, which was used in creating this holistic view of the forest assets in the U.S.

With the data from Basemap in hand, the company has created what it calls the Natural Capital Exchange. The program uses SilviaTerra’s unparalleled access to information about local forests, and its knowledge of how those forests are currently used, to supply projects that represent land that would not have stayed forested were it not for the offset money coming in.

Currently, many forestry projects are being passed off to offset buyers as legitimate offsets on land that was never at risk of being cleared in the first place — rendering the project meaningless as an offset for carbon dioxide emissions.

“It’s a bloodbath out there,” said Nova of the scale of the problem with fraudulent offsets in the industry. “We’re not repackaging existing forest carbon projects and trying to connect the demand side with projects that already exist. We use technology to unlock a new supply of forest carbon offsets.”

The first Natural Capital Exchange project was actually launched and funded by Microsoft back in 2019. In it, 20 Western Pennsylvania land owners originated forest carbon credits through the program, showing that the offsets could work for landowners with 40 acres, or, as the company said, 40,000.

Landowners involved in SilviaTerra’s pilot carbon offset program paid for by Microsoft. Image Credit: SilviaTerra

“We’re just trying to get inside every landowner’s annual economic planning cycle,” said Nova. “There’s a whole field of timber economics… and we’re helping answer the question of, given the price of timber and the price of carbon, does it make sense to reduce your planned timber harvests?”
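Stripped of detail, the question Nova poses is a per-acre comparison between carbon payments and the cost of delaying a harvest. Below is a deliberately oversimplified sketch; the function, prices, and one-year framing are invented for illustration and are not SilviaTerra’s actual model:

```python
def defer_harvest_pays(timber_value_per_acre: float,
                       carbon_price_per_ton: float,
                       tons_co2_stored_per_acre: float,
                       discount_rate: float = 0.05) -> bool:
    """Compare one year of hypothetical carbon payments against the cost
    of postponing a timber harvest by a year (modeled here as foregone
    interest on the timber revenue). Grossly simplified on purpose."""
    carbon_payment = carbon_price_per_ton * tons_co2_stored_per_acre
    cost_of_waiting = timber_value_per_acre * discount_rate
    return carbon_payment > cost_of_waiting

# Example with made-up numbers: $2,000/acre timber, $15/ton CO2,
# 8 tons of CO2 stored per acre per year.
print(defer_harvest_pays(2000, 15, 8))  # → True
```

A real analysis would discount multi-year cash flows, model timber growth, and account for contract terms, but the comparison above is the decision the quote describes.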

Ultimately, the two founders believe that they’ve found a way to pay for the total land value through the creation of data around the potential carbon offset value of these forests.

It’s about more than just carbon markets. The tools SilviaTerra has created can also be used for wildfire mitigation. “We’re at the right place at the right time with the right data and the right tools,” said Nova. “It’s about connecting that data to the decision and the economics of all this.”

The launch of the SilviaTerra exchange gives large buyers a vetted source to offset carbon. In some ways it’s an enterprise corollary to the work being done by startups like Wren, another Union Square Ventures investment, that focuses on offsetting the carbon footprint of everyday consumers. It’s also a competitor to companies like Pachama, which are trying to provide similar forest offsets at scale, or 3Degrees Inc. or South Pole.

Under a Biden administration there’s even more of an opportunity for these offset companies, the founders said, given discussions underway to establish a Carbon Bank. Established through the existing Commodity Credit Corp. run by the Department of Agriculture, the Carbon Bank would pay farmers and landowners across the U.S. for forestry and agricultural carbon offset projects.

“Everybody knows that there’s more value in these systems than just the product that we harvest off of it,” said Parisa. “Until we put those benefits in the same footing as the things we cut off and send to market…. As the value of these things goes up… absolutely it is going to influence these decisions and it is a cash crop… It’s a money pump from coastal America into middle America to create these things that they need.” 

Google AI concocts ‘breakie’ and ‘cakie’ hybrid baked goods

By Devin Coldewey

If, as I suspect many of you have, you have worked your way through baking every type of cookie, bread and cake under the sun over the last year, Google has a surprise for you: a pair of AI-generated hybrid treats, the “breakie” and the “cakie.”

The origin of these new items seems to have been in a demonstration of the company’s AutoML Tables tool, a codeless model generation system that’s more spreadsheet automation than what you’d really call “artificial intelligence.” But let’s not split hairs, or else we’ll never get to the recipe.

Specifically it was the work of Sara Robinson, who was playing with these tools earlier last spring, as a person interested in machine learning and baking was likely to start doing around that time as cabin fever first took hold.

She wanted to design a system that would look at a recipe and automatically tell you whether it was bread, cookie or cake, and why — for instance, a higher butter and sugar content might bias it toward cookie, while yeast was usually a dead giveaway for bread.

Image Credits: Sara Robinson

But of course, not every recipe is so straightforward, and the tool isn’t always 100% sure. Robinson began to wonder, what would a recipe look like that the system couldn’t decide on?

She fiddled around with the ingredients until she found a balance that caused the machine learning system to produce a perfect 50/50 split between cookie and cake. Naturally, she made some — behold the “cakie.”
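Robinson’s trick amounts to searching the ingredient-proportion space for the point where the classifier is maximally uncertain. A toy sketch of that search follows; `cookie_score` is an invented stand-in for the trained model, not the AutoML Tables system she actually used:

```python
import math

def cookie_score(butter, sugar, yeast, flour):
    """Toy stand-in for a trained classifier: returns P(cookie) vs. cake.
    More butter and sugar push toward cookie; this is NOT the real model."""
    raw = 1.2 * butter + 0.8 * sugar - 2.0 * yeast - 0.9 * flour
    return 1.0 / (1.0 + math.exp(-raw))

def find_balanced_recipe(lo=0.0, hi=2.0, steps=40):
    """Bisect on butter content until the classifier outputs ~50/50,
    holding the other (made-up) ingredient amounts fixed."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if cookie_score(mid, sugar=1.0, yeast=0.0, flour=2.0) < 0.5:
            lo = mid  # still too cake-like: add butter
        else:
            hi = mid  # too cookie-like: reduce butter
    return (lo + hi) / 2

butter = find_balanced_recipe()
print(f"butter amount giving ~50/50: {butter:.3f}")
```

Bisection works here because the toy score is monotonic in butter; Robinson’s actual process was closer to manual trial and error against the model’s predictions.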

A cakie, left, and breakies, right, with Robinson. Image Credits: Sara Robinson / Google

“It is yummy. And it strangely tastes like what I’d imagine would happen if I told a machine to make a cake cookie hybrid,” she wrote.

The other hybrid she put together was the “breakie,” which as you surely have guessed by now is half bread, half cookie. This one ended up a little closer to “fluffy cookies, almost the consistency of a muffin.” And indeed they look like muffin tops that have lost their bottoms. But breakie sounds better than muffin tops (or “brookie,” apparently the original name).

These ingredients and ratios were probably invented or tried long ago, but it’s certainly an interesting way to arrive at a new recipe using only old ones.

The recipes below are perfectly doable but, to be transparent, were not entirely generated by algorithm. The model only outputs proportions of ingredients, and didn’t include any flavorings or add-ins like vanilla or chocolate chips, both of which Robinson added. The actual baking instructions had to be puzzled out as well (the AI doesn’t know what temperature is, or pans). But if you need something to try making that’s different from the usual weekend treat, you could probably do worse than one of these.

Connecting employer healthcare plans to surgical centers of excellence nets Carrum Health $40 million

By Jonathan Shieber

Six years after launching its service linking employer-sponsored insurance plans with surgical centers of excellence, Carrum Health has raised $40 million in a new round of financing to capitalize on tailwinds propelling its business forward. 

As the COVID-19 pandemic exposes cracks in the U.S. healthcare system, one of the ways that employers have tried to manage the significant costs of insuring employees is by taking on the management of care themselves.

As they shoulder more of the burden, companies like Carrum, which offer services that manage some of the necessary points of care for businesses, at lower costs, are becoming increasingly attractive targets for investors.

That’s why Carrum was able to attract investors led by Tiger Global Management, GreatPoint Ventures and Cross Creek, all firms that joined returning investors Wildcat Venture Partners and SpringRock Ventures in backing the company’s Series A round.

Carrum said the money will go toward sales and marketing to more customers, adding more services and improving its existing technology stack.

Carrum uses machine learning to collect and analyze data on surgical outcomes and care to identify what it considers to be surgical centers of excellence across the U.S.

The company offers self-insured employers the opportunity to buy services directly from surgical centers for a bundled price. That can mean savings of up to 50% on surgical expenses.

Using Carrum, there are no co-pays, deductibles or co-insurance. Instead, Carrum Health’s customers pay a fee and in return receive a 30-day warranty on procedures, meaning that the healthcare provider will cover any costs associated with care from botched operations or complications.

Employees have access to a mobile application that gives them access to virtual care before, during and after surgeries.

“For years, the industry has talked about redesigning healthcare to benefit patients, but the only way to really do that is to tackle the underlying economics of care, a truly difficult task,” said Sach Jain, CEO and founder of Carrum Health, in a statement. “Employers now have a modern, technology-driven solution to help patients get better care without financial headache and we’re not stopping at surgery. In 2021 we’ll be expanding our reach and impact with additional services. It’s such an honor to pave the way for a better healthcare future and we’re so excited for what’s to come.”

Carrum Health’s customers include Quest Diagnostics, US Foods, and other, undisclosed organizations in retail, manufacturing, communications and insurance, the company said.

Centers of excellence on the platform include Johns Hopkins HealthCare, Mayo Clinic and Tenet Healthcare.

IPRally is building a knowledge graph-based search engine for patents

By Steve O'Hear

IPRally, a burgeoning startup out of Finland aiming to solve the patent search problem, has raised €2 million in seed funding.

Leading the round are JOIN Capital and Spintop Ventures, with participation from existing pre-seed backer Icebreaker VC. The round brings the total raised by the 2018-founded company to €2.35 million.

Co-founded by CEO Sakari Arvela, who has 15 years’ experience as a patent attorney, IPRally has built a knowledge graph to help machines better understand the technical details of patents and to enable humans to trawl through existing patents more efficiently. The premise is that a graph-based approach is better suited to patent search than simple keywords or freeform text search.

That’s because, argues Arvela, every patent publication can be distilled down to a simpler knowledge graph that “resonates” with the way IP professionals think and is infinitely more machine readable.
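IPRally's actual model is proprietary, but the general idea the article describes, reducing each patent to a graph and searching by structural similarity rather than keywords, can be sketched minimally. Everything below is a hypothetical illustration: claims are flattened to (entity, relation, entity) triples and documents are ranked by triple overlap.

```python
# Toy illustration of graph-based patent search (not IPRally's model):
# each claim becomes a set of (entity, relation, entity) triples, and
# candidates are ranked by Jaccard overlap with the query graph.
def claim_graph(triples):
    return frozenset(triples)

def similarity(g1, g2):
    union = len(g1 | g2)
    return len(g1 & g2) / union if union else 0.0

query = claim_graph({("rotor", "connected-to", "shaft"),
                     ("shaft", "drives", "pump")})
corpus = {
    "EP-1": claim_graph({("rotor", "connected-to", "shaft"),
                         ("shaft", "drives", "compressor")}),
    "EP-2": claim_graph({("valve", "controls", "flow")}),
}
best = max(corpus, key=lambda pid: similarity(query, corpus[pid]))
print(best)  # EP-1
```

Even this crude overlap measure finds the structurally related claim that a keyword search for "pump" would miss, which is the intuition behind the graph approach; IPRally layers a learned model on top rather than a fixed similarity.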

“We founded IPRally in April 2018, after one year of bootstrapping and proof-of-concepting with my co-founder and CTO Juho Kallio,” he tells me. “Before that, I had digested the graph approach myself for about two years and collected the courage to start the venture”.

Arvela says patent search is a hard problem to solve since it involves both deep understanding of technology and the capability to compare different technologies in detail.

“This is why this has been done almost entirely manually for as long as the patent system has existed. Even the most recent out-of-the-box machine learning models are way too inaccurate to solve the problem. This is why we have developed a specific ML model for the patent domain that reflects the way human professionals approach the search task and make the problem sensible for the computers too”.

That approach appears to be paying off, with IPRally already being used by customers such as Spotify and ABB, as well as intellectual property offices. Target customers are described as any corporation that actively protects its own R&D with patents and has to navigate the IPR landscape of competitors.

Meanwhile, IPRally is not without its own competition. Arvela cites industry giants like Clarivate and Questel that dominate the market with traditional keyword search engines.

In addition, there are a few other AI-based startups, like Amplified and IPScreener. “IPRally’s graph approach makes the searches much more accurate, allows detail-level computer analysis, and offer a non-black-box solution that is explainable for and controllable by the user,” he adds.

Deep Science: Using machine learning to study anatomy, weather and earthquakes

By Devin Coldewey

Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.

This week has a bit more “basic research” than consumer applications. Machine learning can be applied to great advantage in ways users benefit from directly, but it’s also transformative in areas like seismology and biology, where enormous backlogs of data can be leveraged to train AI models or mined for insights.

Inside earthshakers

We’re surrounded by natural phenomena that we don’t really understand — obviously we know where earthquakes and storms come from, but how exactly do they propagate? What secondary effects are there if you cross-reference different measurements? How far ahead can these things be predicted?

A number of recently published research projects have used machine learning to attempt to better understand or predict these phenomena. With decades of data available to draw from, there are insights to be gained across the board this way, if the seismologists, meteorologists and geologists interested can secure the funding and expertise to do so.

The most recent discovery, made by researchers at Los Alamos National Labs, uses a new source of data as well as ML to document previously unobserved behavior along faults during “slow quakes.” Using synthetic aperture radar captured from orbit, which can see through cloud cover and at night to give accurate, regular imaging of the shape of the ground, the team was able to directly observe “rupture propagation” for the first time, along the North Anatolian Fault in Turkey.

“The deep-learning approach we developed makes it possible to automatically detect the small and transient deformation that occurs on faults with unprecedented resolution, paving the way for a systematic study of the interplay between slow and regular earthquakes, at a global scale,” said Los Alamos geophysicist Bertrand Rouet-Leduc.

Another effort, which has been ongoing for a few years now at Stanford, helps Earth science researcher Mostafa Mousavi deal with the signal-to-noise problem in seismic data. Poring over data being analyzed by old software for the billionth time one day, he felt there had to be a better way, and has spent years working on various methods. The most recent is a way of teasing out evidence of tiny earthquakes that went unnoticed but still left a record in the data.

The “Earthquake Transformer” (named after a machine-learning technique, not the robots) was trained on years of hand-labeled seismographic data. When tested on readings collected during Japan’s magnitude 6.6 Tottori earthquake, it isolated 21,092 separate events, more than twice what people had found in their original inspection — and using data from less than half of the stations that recorded the quake.
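Detectors like this typically output a per-sample probability trace, which then has to be turned into a catalog of discrete events. The post-processing step below is a common pattern, assumed here for illustration rather than taken from the Stanford paper: threshold the probability trace and merge contiguous above-threshold runs into single detections.

```python
# Turn a model's per-sample event probabilities into discrete detections
# (illustrative post-processing, not the Earthquake Transformer itself):
# threshold the trace, then merge contiguous runs into (start, end) events.
def detect_events(probs, threshold=0.5):
    events, start = [], None
    for i, p in enumerate(probs):
        if p >= threshold and start is None:
            start = i                      # event onset
        elif p < threshold and start is not None:
            events.append((start, i - 1))  # event ended on previous sample
            start = None
    if start is not None:                  # event still open at trace end
        events.append((start, len(probs) - 1))
    return events

trace = [0.1, 0.2, 0.9, 0.8, 0.1, 0.05, 0.7, 0.6, 0.2]
print(detect_events(trace))  # [(2, 3), (6, 7)]
```

Lowering the threshold is how such a pipeline surfaces the tiny events humans missed, at the cost of more false triggers, which is exactly the trade-off a better-trained picker improves.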

Map of minor seismic events detected by the Earthquake Transformer.

Image Credits: Stanford University

The tool won’t predict earthquakes on its own, but better understanding the true and full nature of the phenomena means we might be able to by other means. “By improving our ability to detect and locate these very small earthquakes, we can get a clearer view of how earthquakes interact or spread out along the fault, how they get started, even how they stop,” said co-author Gregory Beroza.

National Grid sees machine learning as the brains behind the utility business of the future

By Jonathan Shieber

If the portfolio of a corporate venture capital firm can be taken as a signal for the strategic priorities of their parent companies, then National Grid has high hopes for automation as the future of the utility industry.

The heavy emphasis on automation and machine learning from one of the nation’s largest privately held utilities, with a customer base numbering around 20 million people, is significant, and a sign of where the industry could be going.

Since its launch, National Grid’s venture firm, National Grid Partners, has invested in 16 startups that featured machine learning at the core of their pitch. Most recently, the company backed AI Dash, which uses machine learning algorithms to analyze satellite images and infer the encroachment of vegetation on National Grid power lines to avoid outages.

Another recent investment, Aperio, uses data from sensors monitoring critical infrastructure to predict loss of data quality from degradation or cyberattacks.

Indeed, of the $175 million in investments the firm has made, roughly $135 million has been committed to companies leveraging machine learning for their services.

“AI will be critical for the energy industry to achieve aggressive decarbonization and decentralization goals,” said Lisa Lambert, the chief technology and innovation officer at National Grid and the founder and president of National Grid Partners.

National Grid started the year off slowly because of the COVID-19 pandemic, but the pace of its investments picked up and the company is on track to hit its investment targets for the year, Lambert said.

Modernization is critical for an industry that still mostly runs on spreadsheets and collective knowledge that has locked in an aging employee base, with no contingency plans in the event of retirement, Lambert said. It’s that situation that’s compelling National Grid and other utilities to automate more of their business.

“Most companies in the utility sector are trying to automate now for efficiency reasons and cost reasons. Today, most companies have everything written down in manuals; as an industry, we basically still run our networks off spreadsheets, and the skills and experience of the people who run the networks. So we’ve got serious issues if those people retire. Automating [and] digitizing is top of mind for all the utilities we’ve talked to in the Next Grid Alliance.”

To date, a lot of the automation work that’s been done has been around basic automation of business processes. But there are new capabilities on the horizon that will push the automation of different activities up the value chain, Lambert said.

“ML is the next level — predictive maintenance of your assets, delivering for the customer. Uniphore, for example: you’re learning from every interaction you have with your customer, incorporating that into the algorithm, and the next time you meet a customer, you’re going to do better. So that’s the next generation,” Lambert said. “Once everything is digital, you’re learning from those engagements — whether engaging an asset or a human being.”

Lambert sees another source of demand for new machine learning tech in the need for utilities to rapidly decarbonize. The move away from fossil fuels will necessitate entirely new ways of operating and managing a power grid, ones in which humans are less likely to be in the loop.

“In the next five years, utilities have to get automation and analytics right if they’re going to have any chance at a net-zero world — you’re going to need to run those assets differently,” said Lambert. “Windmills and solar panels are not [part of] traditional distribution networks. A lot of traditional engineers probably don’t think about the need to innovate, because they’re building out the engineering technology that was relevant when assets were built decades ago — whereas all these renewable assets have been built in the era of OT/IT.”


No rules, no problem: DeepMind’s MuZero masters games while learning how to play them

By Devin Coldewey

DeepMind has made it a mission to show that not only can an AI truly become proficient at a game, it can do so without even being told the rules. Its newest AI agent, called MuZero, accomplishes this not just with visually simple games with complex strategies, like Go, Chess, and Shogi, but with visually complex Atari games.

The success of DeepMind’s earlier AIs was at least partly due to a very efficient navigation of the immense decision trees that represent the possible actions in a game. In Go or Chess these trees are governed by very specific rules, like where pieces can move, what happens when this piece does that, and so on.

The AI that beat world champions at Go, AlphaGo, knew these rules and kept them in mind (or perhaps in RAM) while studying games between and against human players, forming a set of best practices and strategies. The sequel, AlphaGo Zero, did this without human data, playing only against itself. AlphaZero did the same with Go, Chess, and Shogi in 2018, creating a single AI model that could play all these games proficiently.

But in all these cases the AI was presented with a set of immutable, known rules for the games, creating a framework around which it could build its strategies. Think about it: if you’re told a pawn can become a queen, you plan for it from the beginning, but if you have to find out, you may develop entirely different strategies.

This helpful diagram shows what different models have achieved with different starting knowledge.

As the company explains in a blog post about their new research, if AIs are told the rules ahead of time, “this makes it difficult to apply them to messy real world problems which are typically complex and hard to distill into simple rules.”

The company’s latest advance, then, is MuZero, which plays not only the aforementioned games but a variety of Atari games, and it does so without being provided with a rulebook at all. The final model learned to play all of these games not just from experimenting on its own (no human data) but without being told even the most basic rules.

Instead of using the rules to find the best-case scenario (because it can’t), MuZero learns to take into account every aspect of the game environment, observing for itself whether it’s important or not. Over millions of games it learns not just the rules, but the general value of a position, general policies for getting ahead, and a way of evaluating its own actions in hindsight.
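The MuZero paper describes three learned functions: a representation function that maps observations to a latent state, a dynamics function that steps that state forward given an action, and a prediction function that outputs a policy and value. The skeleton below shows only that interface, with invented toy stand-ins in place of the trained networks; the point is that planning happens entirely in latent space, without ever consulting game rules.

```python
# Skeleton of MuZero's three learned functions, with toy stand-ins
# (NOT trained networks) to show the interface.
def representation(observation):
    # h(obs) -> latent state; toy version is just the identity.
    return tuple(observation)

def dynamics(state, action):
    # g(state, action) -> (next latent state, predicted reward).
    next_state = tuple(s + action for s in state)  # invented transition
    reward = 0.0                                   # invented reward
    return next_state, reward

def prediction(state):
    # f(state) -> (policy over actions, value estimate).
    policy = {0: 0.5, 1: 0.5}                      # uniform over 2 actions
    value = sum(state) / (len(state) or 1)
    return policy, value

def rollout(observation, depth=3):
    """Plan a few steps ahead purely in latent space: no game rules used."""
    state = representation(observation)
    total_reward = 0.0
    for _ in range(depth):
        policy, _ = prediction(state)
        action = max(policy, key=policy.get)       # greedy toy action choice
        state, reward = dynamics(state, action)
        total_reward += reward
    _, value = prediction(state)                   # bootstrap at the horizon
    return total_reward + value
```

In the real system these three functions are neural networks trained jointly, and the rollout is driven by Monte Carlo tree search rather than the greedy loop above; the design choice the sketch preserves is that the environment itself never appears inside planning.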

This latter ability helps it learn from its own mistakes, rewinding and redoing games to try different approaches that further hone the position and policy values.

You may remember Agent57, another DeepMind creation that excelled at a set of 57 Atari games. MuZero takes the best of that AI and combines it with the best of AlphaZero. MuZero differs from the former in that it does not model the entire game environment, but focuses on the parts that affect its decision-making, and from the latter in that it bases its model of the rules purely on its own experimentation and firsthand knowledge.

Understanding the game world lets MuZero effectively plan its actions even when the game world is, like many Atari games, partly randomized and visually complex. That pushes it closer to an AI that can safely and intelligently interact with the real world, learning to understand the world around it without the need to be told every detail (though it’s likely that a few, like “don’t crush humans,” will be etched in stone). As one of the researchers told the BBC, the team is already experimenting with seeing how MuZero could improve video compression — obviously a very different problem than Ms. Pac-Man.

The details of MuZero were published today in the journal Nature.
