The President’s Council of Advisors on Science and Technology predicts that U.S. companies will spend upward of $100 billion on AI R&D per year by 2025. Much of this spending today is done by six tech companies — Microsoft, Google, Amazon, IBM, Facebook and Apple, according to a recent study from CSET at Georgetown University. But what if you’re a startup whose product relies on AI at its core?
Can early-stage companies support a research-based workflow? At a startup or scaleup, the focus is often more on concrete product development than research. For obvious reasons, companies want to make things that matter to their customers, investors and stakeholders. Ideally, there’s a way to do both.
Before investing in staffing an AI research lab, consider this advice to determine whether you’re ready to get started.
Assuming it’s your organization’s priority to do innovative AI research, the first step is to hire one or two researchers. At Unbabel, we did this early by hiring Ph.D.s and getting started quickly with research for a product that hadn’t been developed yet. Some researchers will build from scratch and others will take your data and try to find a pre-existing model that fits your needs.
While Google’s X division may have the capital to focus on moonshots, most startups can only invest in innovation that provides them a competitive advantage or improves their product.
From there, you’ll need to hire research engineers or machine learning operations professionals. Research is only a small part of using AI in production. Research engineers take your models into production, monitor the results and refine a model when it stops predicting well (or otherwise isn’t operating as planned). Often they’ll automate monitoring and deployment procedures rather than doing everything manually.
None of this falls within the scope of a research scientist — they’re most used to working with the data sets and models in training. That said, researchers and engineers will need to work together in a continuous feedback loop to refine and retrain models based on actual performance in inference.
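To make that feedback loop concrete, here is a toy sketch of the kind of check a research engineer might automate. The accuracy metric, the 90% threshold and the 100-request window are illustrative assumptions, not any particular team's setup:

```python
# Toy sketch: flag a deployed model for retraining when its live
# accuracy drifts below a threshold over a recent window of requests.

def needs_retraining(live_outcomes, threshold=0.9, window=100):
    """live_outcomes: list of booleans, True when the model's prediction
    matched the eventual ground truth for one request."""
    recent = live_outcomes[-window:]
    if not recent:
        return False  # no data yet, nothing to act on
    accuracy = sum(recent) / len(recent)
    return accuracy < threshold

# A healthy stream of predictions does not trigger retraining...
healthy = [True] * 95 + [False] * 5    # 95% accurate
# ...but a degraded stream does.
degraded = [True] * 80 + [False] * 20  # 80% accurate

print(needs_retraining(healthy))   # False
print(needs_retraining(degraded))  # True
```

In practice this check would run on a schedule against logged inference results, with an alert or automated retraining job wired to the `True` branch.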
The CSET research cited above shows that 85% of AI labs in North America and Europe do some form of basic AI research, and less than 15% focus on development. The rest of the world is different: A majority of labs in other countries, such as India and Israel, focus on development.
Environmental, social and governance (ESG) factors should be key considerations for CTOs and technology leaders scaling next generation companies from day one. Investors are increasingly prioritizing startups that focus on ESG, with the growth of sustainable investing skyrocketing.
What’s driving this shift in mentality across every industry? It’s simple: Consumers are no longer willing to support companies that don’t prioritize sustainability. According to a survey conducted by IBM, the COVID-19 pandemic has elevated consumers’ focus on sustainability and their willingness to pay out of their own pockets for a sustainable future. In tandem, federal action on climate change is increasing, with the U.S. rejoining the Paris Climate Agreement and a recent executive order on climate commitments.
Over the past few years, we have seen an uptick in organizations setting long-term sustainability goals. However, CEOs and chief sustainability officers typically forecast these goals, and they are often long term and aspirational — leaving the near and midterm implementation of ESG programs to operations and technology teams.
CTOs are a crucial part of the planning process, and in fact, can be the secret weapon to help their organization supercharge their ESG targets. Below are a few immediate steps that CTOs and technology leaders can take to achieve sustainability and make an ethical impact.
As more businesses digitize and more consumers use devices and cloud services, the energy needed by data centers continues to rise. In fact, data centers account for an estimated 1% of worldwide electricity usage. However, a forecast from IDC shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide from 2021 through 2024.
Make compute workloads more efficient: First, it’s important to understand the links between computing, power consumption and greenhouse gas emissions from fossil fuels. Making your app and compute workloads more efficient will reduce costs and energy requirements, thus reducing the carbon footprint of those workloads. In the cloud, tools like compute instance auto scaling and sizing recommendations make sure you’re not running too many or overprovisioned cloud VMs based on demand. You can also move to serverless computing, which does much of this scaling work automatically.
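As a rough illustration of what a sizing recommendation boils down to, the core logic is a filter over utilization metrics. The 20% cutoff and the instance names here are hypothetical, not any provider's actual thresholds:

```python
# Hypothetical sketch of "sizing recommendation" logic: flag instances
# whose average CPU utilization suggests they are overprovisioned.

def overprovisioned(instances, cpu_threshold=0.20):
    """instances: dict mapping instance name -> average CPU utilization
    (0.0-1.0). Returns the names worth downsizing or shutting off."""
    return [name for name, cpu in instances.items() if cpu < cpu_threshold]

fleet = {"web-1": 0.65, "web-2": 0.08, "batch-1": 0.55, "idle-dev": 0.02}
print(overprovisioned(fleet))  # ['web-2', 'idle-dev']
```

Real recommenders weigh memory, disk and network alongside CPU, but the principle is the same: every flagged instance is wasted money and wasted energy.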
Deploy compute workloads in regions with lower carbon intensity: Until recently, choosing cloud regions meant considering factors like cost and latency to end users. But carbon is another factor worth considering. While the compute capabilities of regions are similar, their carbon intensities typically vary. Some regions have access to more carbon-free energy production than others, and consequently the carbon intensity for each region is different.
So, choosing a cloud region with lower carbon intensity is often the simplest and most impactful step you can take. Alistair Scott, co-founder and CTO of cloud infrastructure startup Infracost, underscores this sentiment: “Engineers want to do the right thing and reduce waste, and I think cloud providers can help with that. The key is to provide information in workflow, so the people who are responsible for infra provisioning can weigh the CO2 impact versus other factors such as cost and data residency before they deploy.”
Another step is to estimate your specific workload’s carbon footprint using open-source software like Cloud Carbon Footprint, a project sponsored by ThoughtWorks. Etsy has open-sourced a similar tool called Cloud Jewels that estimates energy consumption based on cloud usage information. This is helping them track progress toward their target of reducing their energy intensity by 25% by 2025.
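The basic model behind such estimators is simply energy used multiplied by the carbon intensity of the grid powering the region. A minimal sketch, with made-up intensity figures rather than real numbers for any provider's regions:

```python
# Hedged sketch: estimate a workload's emissions as
#   energy (kWh) x grid carbon intensity (gCO2e/kWh),
# the same basic model used by tools like Cloud Carbon Footprint.
# The intensity figures below are illustrative placeholders.

REGION_INTENSITY_G_PER_KWH = {
    "region-low-carbon": 80,    # hypothetical hydro/wind-heavy grid
    "region-high-carbon": 700,  # hypothetical coal-heavy grid
}

def estimate_emissions_kg(energy_kwh, region):
    """Return estimated kgCO2e for a workload run in a given region."""
    grams = energy_kwh * REGION_INTENSITY_G_PER_KWH[region]
    return grams / 1000.0

# The same 500 kWh workload, placed in two different regions:
print(estimate_emissions_kg(500, "region-low-carbon"))   # 40.0
print(estimate_emissions_kg(500, "region-high-carbon"))  # 350.0
```

Placed in the cleaner region, the same workload's estimated footprint drops by nearly an order of magnitude, which is why region choice is often the highest-leverage change available.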
Beyond reducing environmental impact, CTOs and technology leaders can have significant, direct and meaningful social impact.
Include societal benefits in the design of your products: As a CTO or technology founder, you can help ensure that societal benefits are prioritized in your product roadmaps. For example, if you’re a fintech CTO, you can add product features to expand access to credit in underserved populations. Startups like LoanWell are on a mission to increase access to capital for those typically left out of the financial system and make the loan origination process more efficient and equitable.
When thinking about product design, a product needs to be as useful and effective as it is sustainable. By treating sustainability and societal impact as a core element of product innovation, there is an opportunity to differentiate yourself in socially beneficial ways. For example, Lush has been a pioneer of package-free solutions, and launched Lush Lens — a virtual packaging app leveraging cameras on mobile phones and AI to overlay product information. The company hit 2 million scans in its efforts to tackle the beauty industry’s excessive use of (plastic) packaging.
Responsible AI practices should be ingrained in the culture to avoid social harms: Machine learning and artificial intelligence have become central to the advanced, personalized digital experiences everyone is accustomed to — from product and content recommendations to spam filtering, trend forecasting and other “smart” behaviors.
It is therefore critical to incorporate responsible AI practices, so benefits from AI and ML can be realized by your entire user base and that inadvertent harm can be avoided. Start by establishing clear principles for working with AI responsibly, and translate those principles into processes and procedures. Think about AI responsibility reviews the same way you think about code reviews, automated testing and UX design. As a technical leader or founder, you get to establish what the process is.
Promoting governance does not stop with the board and CEO; CTOs play an important role, too.
Create a diverse and inclusive technology team: Compared to individual decision-makers, diverse teams make better decisions 87% of the time. Additionally, Gartner research found that in a diverse workforce, performance improves by 12% and intent to stay by 20%.
It is important to reinforce and demonstrate why diversity, equity and inclusion is important within a technology team. One way you can do this is by using data to inform your DEI efforts. You can establish a voluntary internal program to collect demographics, including gender, race and ethnicity, and this data will provide a baseline for identifying diversity gaps and measuring improvements. Consider going further by baking these improvements into your employee performance process, such as objectives and key results (OKRs). Make everyone accountable from the start, not just HR.
These are just a few of the ways CTOs and technology leaders can contribute to ESG progress in their companies. The first step, however, is to recognize the many ways you as a technology leader can make an impact from day one.
PayPal’s plan to morph itself into a “superapp” has been given a go for launch.
According to PayPal CEO Dan Schulman, speaking to investors during this week’s second-quarter earnings call, the initial version of PayPal’s new consumer digital wallet app is now “code complete” and the company is preparing to slowly ramp up. Over the next several months, PayPal expects to be fully ramped up in the U.S., with new payment services, financial services, commerce and shopping tools arriving every quarter.
The company has spoken for some time about its “superapp” ambitions — a shift in product direction that would make PayPal a U.S.-based version of something like China’s WeChat or Alipay or India’s Paytm. Like those apps, PayPal aims to offer a host of consumer services under one roof, beyond just mobile payments.
In previous quarters, PayPal said these new features may include things like enhanced direct deposit, check cashing, budgeting tools, bill pay, crypto support, subscription management, and buy now, pay later functionality. It also said it would integrate commerce, thanks to the mobile shopping tools acquired by way of its $4 billion Honey acquisition in 2019.
So far, PayPal has continued to run Honey as a standalone application, website and browser extension, but the superapp could incorporate more of its deal-finding functions, price-tracking features and other benefits.
On Wednesday’s earnings call, Schulman revealed the superapp would have a few other features as well, including high-yield savings, early access to direct deposit funds and messaging functionality outside of peer-to-peer payments — meaning you could chat with family and friends directly through the app’s user interface.
PayPal hadn’t announced its plans to include a messaging component until now, but the feature makes sense in terms of how people often combine chat and peer-to-peer payments today. For example, someone may want to make a personal request for the funds instead of just sending an automated request through an app. Or, after receiving payment, a user may want to respond with a “thank you,” or other acknowledgment. Currently, these conversations take place outside of the payment app itself on platforms like iMessage. Now, that could change.
“We think that’s going to drive a lot of engagement on the platform,” said Schulman. “You don’t have to leave the platform to message back and forth.”
With the increased user engagement, the company expects to see a bump in average revenue per active account.
Schulman also hinted at “additional crypto capabilities,” which were not detailed. However, PayPal earlier this month increased the crypto purchase limit from $20,000 to $100,000 for eligible PayPal customers in the U.S., with no annual purchase limit. The company also this year made it possible for consumers to check out at millions of online businesses using their cryptocurrencies, by first converting the crypto to cash then settling with the merchant in U.S. dollars.
Though the app’s code is now complete, Schulman said the plan is to continue to iterate on the product experience, noting that the initial version will not be “the be-all and end-all.” Instead, the app will see steady releases and new functionality on a quarterly basis.
However, he did say that early on, the new features would include high-yield savings, improved bill pay with a better user experience and more billers and aggregators, as well as early access to direct deposit, budgeting tools and the new two-way messaging feature.
To integrate all the new features into the superapp, PayPal will undergo a major overhaul of its user interface.
“Obviously, the [user experience] is being redesigned,” Schulman noted. “We’ve got rewards and shopping. We’ve got a whole giving hub around crowdsourcing, giving to charities. And then, obviously, buy now, pay later will be fully integrated into it. … The last time I counted, it was like 25 new capabilities that we’re going to put into the superapp.”
The digital wallet app will also be personalized to the end user, so no two apps are the same. This will be done using both artificial intelligence and machine learning capabilities to “enhance each customer’s experiences and opportunities,” said Schulman.
PayPal delivered mixed second-quarter results, beating on earnings per share at $1.15, versus the $1.12 expected, while revenue of $6.24 billion fell just shy of the $6.27 billion Wall Street expected. Total payment volume from merchant customers jumped 40% to $311 billion, while analysts had projected $295.2 billion. But the company’s stock slipped due to a lowered outlook for Q3, impacted by eBay’s transition to its own managed payments service.
In addition, PayPal gained 11.4 million net new active accounts in the quarter, to reach 403 million total active accounts.
Anomaly detection is one of the more difficult and underserved operational areas in the asset-servicing sector of financial institutions. Broadly speaking, a true anomaly is one that deviates from the norm of the expected or the familiar. Anomalies can be the result of incompetence, maliciousness, system errors, accidents or the product of shifts in the underlying structure of day-to-day processes.
For the financial services industry, detecting anomalies is critical, as they may be indicative of illegal activities such as fraud, identity theft, network intrusion, account takeover or money laundering, which may result in undesired outcomes for both the institution and the individual.
There are different ways to address the challenge of anomaly detection, including supervised and unsupervised learning.
Detecting outlier data, or anomalies, against historic data patterns and trends can strengthen a financial institution’s operational teams by increasing their understanding and preparedness.
Anomaly detection presents a unique challenge for a variety of reasons. First and foremost, the financial services industry has seen an increase in the volume and complexity of data in recent years. In addition, a large emphasis has been placed on the quality of data, turning it into a way to measure the health of an institution.
To make matters more complicated, anomaly detection requires the prediction of something that has not been seen before or prepared for. The increase in data and the fact that it is constantly changing exacerbates the challenge further.
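Of the two approaches mentioned above, the unsupervised one is easiest to sketch: with no labeled examples of fraud, the system simply flags whatever deviates sharply from historic patterns. A deliberately minimal illustration using a z-score (production systems would use richer features and models, and the 3-standard-deviation threshold is an illustrative assumption):

```python
import statistics

# Minimal unsupervised sketch: flag a new transaction amount that
# deviates sharply from the historic mean, measured in standard
# deviations (a z-score).

def is_anomalous(history, amount, z_threshold=3.0):
    """history: past transaction amounts; amount: the new one to check."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean  # flat history: anything different stands out
    return abs(amount - mean) / stdev > z_threshold

history = [100, 102, 98, 101, 99, 103, 97, 100]
print(is_anomalous(history, 101))   # False -- within the normal range
print(is_anomalous(history, 5000))  # True  -- far outside historic patterns
```

A supervised approach would instead train a classifier on transactions already labeled fraudulent or legitimate, which runs into the problem raised above: it cannot anticipate anomaly types it has never seen.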
Imagine a world where no one’s privacy is breached, no faces are scanned into a gargantuan database, and no privacy laws are broken. This is a world that is fast approaching. Could companies simply dump the need for real-world CCTV footage, and switch to synthetic humans, acting out potential scenarios a million times over? That’s the tantalizing prospect of a new UK startup that has attracted funding from an influential set of investors.
UK-based Mindtech Global has developed what it describes as an end-to-end synthetic data creation platform. In plain English, its system can imagine visual scenarios such as someone’s behavior inside a store, or crossing the street. This data is then used to train AI-based computer vision systems for customers such as big retailers, warehouse operators, healthcare, transportation systems and robotics. It literally trains a ‘synthetic’ CCTV camera inside a synthetic world.
That last investor is significant. In-Q-Tel invests in startups that support US intelligence capabilities and is based in Arlington, Virginia…
Mindtech’s Chameleon platform is designed to help computers understand and predict human interactions. As we all know, current approaches to training AI vision systems require companies to source data such as CCTV footage. The process is fraught with privacy issues, costly, and time-consuming. Mindtech says Chameleon solves that problem, as its customers quickly “build unlimited scenes and scenarios using photo-realistic smart 3D models”.
An added bonus is that these synthetic humans can be used to train AI vision systems to weed out human failings around diversity and bias.
Steve Harris, CEO, Mindtech said: “Machine learning teams can spend up to 80% of their time sourcing, cleaning, and organizing training data. Our Chameleon platform solves the AI training challenge, freeing the industry to focus on higher-value tasks like AI network innovation. This round will enable us to accelerate our growth, enabling a new generation of AI solutions that better understand the way humans interact with each other and the world around them.”
So what can you do with it? Consider the following: A kid slips from a parent’s hand at the mall. The synthetic CCTV running inside Mindtech’s scenario is trained, thousands of times over, to spot it in real time and alert staff. Another: A delivery robot meets kids playing in a street and works out how to avoid them. Finally: A passenger on the platform is behaving erratically too close to the rails; the CCTV is trained to automatically spot them and send help.
Nat Puffer, Managing Director (London), In-Q-Tel commented: “Mindtech impressed us with the maturity of their Chameleon platform and their commercial traction with global customers. We’re excited by the many applications this platform has across diverse markets and its ability to remove a significant roadblock in the development of smarter, more intuitive AI systems.”
Miles Kirby, CEO, Deeptech Labs said: “As a catalyst for deeptech success, our investment, and accelerator program supports ambitious teams with novel solutions and the appetite to build world-changing companies. Mindtech’s highly-experienced team are on a mission to disrupt the way AI systems are trained, and we’re delighted to support their journey.”
There is of course potential for darker applications, such as spotting petty theft inside supermarkets, or perhaps ‘optimising’ hard-pressed warehouse workers in some dystopian fashion. However, in theory, Mindtech’s customers can use this platform to rid themselves of the biases of middle managers and better serve customers.
Today, Tractable is worth $1 billion. Our AI is used by millions of people across the world to recover faster from road accidents, and it also helps recycle as many cars as Tesla puts on the road.
And yet six years ago, Tractable was just me and Raz (Razvan Ranca, CTO), two college grads coding in a basement. Here’s how we did it, and what we learned along the way.
In 2013, I was fortunate to get into artificial intelligence (more specifically, deep learning) six months before it blew up internationally. It started when I took a Coursera course called “Machine learning with neural networks” by Geoffrey Hinton. It was like being lovestruck. Back then, to me, AI was science fiction, like “The Terminator.”
But an article in the tech press said the academic field was amid a resurgence. As a result of 100x larger training data sets and 100x higher compute power becoming available by reprogramming GPUs (graphics cards), a huge leap in predictive performance had been attained in image classification a year earlier. This meant computers were starting to be able to understand what’s in an image — like humans do.
The next step was getting this technology into the real world. While at university — Imperial College London — we teamed up with much more skilled people and built a plant recognition app with deep learning. We walked our professor through Hyde Park, watching him take photos of flowers with the app and laughing with joy as the AI recognized the right plant species. This had previously been impossible.
I started spending every spare moment on image classification with deep learning. Still, no one was talking about it in the news — even Imperial’s computer vision lab wasn’t yet on it! I felt like I was in on a revolutionary secret.
Looking back, narrowly focusing on a branch of applied science undergoing a breakthrough paradigm shift that hadn’t yet reached the business world changed everything.
I’d previously been rejected from Entrepreneur First (EF), one of the world’s best incubators, for not knowing anything about tech. Having changed that, I applied again.
The last interview was a hackathon, where I met Raz. He was doing machine learning research at Cambridge, had topped EF’s technical test, and published papers on reconstructing shredded documents and on poker bots that could detect bluffs. His bare-bones webpage read: “I seek data-driven solutions to currently intractable problems.” Now that had a ring to it (and where we’d get the name for Tractable).
That hackathon, we coded all night. The morning after, he and I knew something special was happening between us. We moved in together and would spend years side by side, 24/7, from waking up to Pantera in the morning to coding marathons at night.
But we also wouldn’t have got where we are without Adrien (Cohen, president), who joined as our third co-founder right after our seed round. Adrien had previously co-founded Lazada, a Southeast Asian e-commerce company in the mold of Amazon and Alibaba, which sold to Alibaba for $1.5 billion. Adrien would teach us how to build a business, inspire trust and hire world-class talent.
Tractable started at EF with a head start — a paying customer. Our first use case was … plastic pipe welds.
It was as glamorous as it sounds. Pipes that carry water and natural gas to your home are made of plastic. They’re connected by welds (melt the two plastic ends, connect them, let them cool down and solidify again as one). Image classification AI could visually check people’s weld setups to ensure good quality. Most of all, it was real-world value for breakthrough AI.
And yet in the end, they — our only paying customer — stopped working with us, just as we were raising our first round of funding. That was rough. Luckily, the number of pipe weld inspections was too small a market to interest investors, so we explored other use cases — utilities, geology, dermatology and medical imaging.
Online learning continues to see a huge boost of attention and use in the wake of the Covid-19 pandemic, and today a startup building tools specifically for enterprises to deliver on their internal education remits is announcing a big round of funding that points to the startup’s own growth and ambitions.
Go1, which provides curated online learning materials and tools to businesses, with “playlists” that tap content from multiple publishers and silos, has closed a round of $200 million, a Series D that the Australian company’s CEO and co-founder, Andrew Barnes, confirmed values the startup at over $1 billion.
Barnes added that the funding will be used to expand further in existing markets — based out of Brisbane, Australia, Go1 has offices in London, the U.S., Singapore and Malaysia, so it wants to go deeper into Europe more broadly and into more of Asia Pacific, he said. Go1 will also continue expanding its suite of services in the wider areas of learning and development training, he added.
Today, it already offers a host of analytics and AI tech to chart how well that content is used and to further personalize materials, so the idea will be to expand on that more.
SoftBank’s Vision Fund 2, AirTree Ventures and Salesforce Ventures co-led this Series D, with Blue Cloud Ventures, Larsen Ventures, Madrona Venture Group, Microsoft’s M12, SEEK, TEN13 and Tiger Global also participating. (To be clear, there appear to have been earlier reports of this Series D closing, but no details on its value or investors, nor confirmation from the company.)
The funding represents a major capital infusion for the startup: prior to this it had only raised about $80 million over the last six years, with the last round, a more modest Series C of $40 million, closed 14 months ago.
But it also comes on the heels of impressive growth. Incubated at Y Combinator and based out of Brisbane, Australia, the company currently works with some 3.5 million users and over 1,600 enterprises globally, with companies like Microsoft, TikTok, the University of Oxford, Suzuki, Asahi and Thrifty, as well as many smaller businesses, among its customers. On average, an individual, when actively engaging on Go1, spends between two and six hours per month using the platform, and Barnes told me that its user base has grown by more than 300 percent in the last year.
But in a tech world now full of options for online learning content — both for K-12 as well as business users — what is perhaps more interesting is the startup’s approach.
Currently, Go1 has some 150,000 pieces of content available in its library, but it has not created any of that itself. The material comes from some 1,000 publishers and creators, a figure that is growing weekly, said Barnes, and includes not just your standard names in online education like Pearson, EdX, Coursera and Skillsoft, but also Blinkist and the Harvard Business Review.
The point of Go1 is to make it easier for businesses to access and use all these materials without having to negotiate separate deals with the various rights holders, or for users to have to negotiate multiple apps or sites to use it.
Somewhat akin to a streaming service like Spotify, Go1 acts not just as a distributor/aggregator to access that content, but as a channel for those providers, who receive royalties based on how much their content is consumed. (And individual rights holders can also negotiate how some or all of their content is accessed, in the event that they have paywalls that they do not want to break down in specific areas.)
The Spotify analogy goes beyond the company’s business model: Barnes pointed out that it too calls its curated bundles — which it creates itself, or lets customers create themselves — “playlists.”
“We started the business six years ago because no one else was doing this, yet there was such a desire to bring together that diversity of content and make it easily available,” he said.
The challenge for employers is not just navigating the user experience of juggling multiple sites (which Go1 solves with these curated playlists), but also building learning that is still cohesive and easy to manage, regardless of which department or employee is doing the training.
“How do I create something for the broad diversity of skills for our workforce?” is how Barnes described it to me. This is what the company addresses with the platform, he added, not only making it easier to create training for different people, but to help them find, and to suggest, relevant content that will interest those users by offering as big a selection as possible. “We help people find the needle in the haystack,” he said.
Where the analogy stops, it seems, is in how Go1 interfaces with the rest of the corporate learning market.
I asked Barnes if he saw companies like Success Factors as competitors, but in reality, Go1’s ethos is to integrate into whatever education or training platform a company might already use, be it SAP, Workday, Salesforce or Microsoft-based platforms, or something else altogether.
Borrowing another media comparison, Barnes notes that he sees Go1 as occupying the “Netflix” button on a remote: regardless of the manufacturer or pay-TV provider, you still have a way to get your Netflix fix; and so, too, is the hope for Go1 in corporate learning and development training.
This also means that while platforms are not rivals, others also aggregating content might well be: that likely makes for an interesting relationship with Microsoft, given that it owns LinkedIn, which has LinkedIn Learning, which also aggregates content from across a wide range of publishers.
It seems that while Microsoft has slowly created more integrations with LinkedIn in the years since it acquired it, this is one area where it’s also been okay with working with one of its competitors.
“Our team worked closely with Go1 on a Microsoft Teams integration to enable more enterprises to maintain corporate training remotely,” Jeff Teper, Microsoft corporate vice president for Teams, OneDrive and SharePoint, said in a statement. “As many companies navigate in-person work scenarios, a plan for hybrid engagement is critical. Employees and students can access one of the world’s largest libraries of online learning resources with Go1 in Microsoft Teams. Companies can also onboard new talent and ensure essential trainings are provided regardless of employee location.”
One way that Go1 is looking to grow is in how it is used by the individuals that learn or train on its platform.
Another reason Barnes and his co-founders — Vu Tran (head of growth), Chris Eigeland (CRO), and Chris Hood (CTO) — started Go1, he said, was because of a pain point one of them directly encountered. Tran was doing his training to become a doctor at the time, and he found it very frustrating that he had to redo hand-washing training each time he started a new rotation.
“There was no way to re-share that he’d already done that,” Barnes said. Go1 is trying to double down on that, increasing the ability for its users to “own” those credentials and certifications and re-use them in subsequent places, even when they change jobs. (Again… not unlike exporting a Spotify playlist, which you can also do.)
It seems that I am not the only one who sees a lot of Spotify resonance in Go1.
“When people think about music, they often think of Spotify and access to unlimited music for one subscription. We believe Go1 is the emerging category leader in providing a similar experience for corporate learning. Powered by AI and machine learning, Go1’s platform provides an intuitive experience, and creates an opportunity for individuals to expand their professional development goals and explore the resources to help achieve them,” said Nagraj Kashyap, managing partner at SoftBank Investment Advisers, in a statement.
Gembah’s mission statement is a deceptively simple one. The Austin-based company says it’s looking to “democratize product innovation by drastically lowering barriers to entry for creation of new products.” In that respect, at least, it’s not so dissimilar from various startup initiatives that have arrived over the past decade and change, from crowdfunding to additive manufacturing.
The company’s product is a platform/marketplace designed to guide users through the product-creation process, promising results in “as little as 90 days.” The platform connects smaller businesses to factories, supply chain experts, designers, engineers, etc. to help speed up the process. Just ask anyone who has attempted to launch a hardware startup — these things can be massively difficult to navigate.
To help accelerate its own vision, Gembah has raised an $11 million Series A, led by local firm ATX Venture Partners along with Silverton, Flexport, Brett Hurt, Jim Curry and Dan Graham.
Image Credits: Gembah
It follows a $3.28 million seed led by Silverton announced in April of last year, bringing its total funding up to $14.75 million.
The company says the pandemic has actually been something of a boon for its business model, as hardware startups are looking toward a more online model – and something a bit closer to home than the traditional sales channels. The company says its revenue grew 500% in 2020 and is on track to triple revenues this year. It’s impressive growth in the face of some major supply chain issues that have impacted the industry during the past year and a half.
It currently has 300 active customers, though it has yet to achieve profitability — hence the new round. “Since most of our customers are e-commerce companies we benefited from the accelerated growth of e-commerce,” CEO and co-founder Henrik Johansson tells TechCrunch. “Supply chains have been impacted to some degree, but as the global supply chain gets more complex and many companies want to diversify outside of China, they need help to navigate that change, and Gembah can help with that transition.”
The funding will go toward increasing the company’s engineering team. At present, Gembah has 55 employees in the U.S, and 19 in other locations, including Asia and Mexico. The new headcount will be focused on growing the marketplace, supply chain workflow and machine-learning capabilities. Gembah will also look to grow its global network and make additional hires in marketing and UI/UX.
“Gembah is a true innovator poised to help businesses capitalize on the growth of global eCommerce,” ATX Venture Partners’ Chris Shonk said in a statement. “The Gembah marketplace promises to unlock virtually unlimited entrepreneurial equity by enabling a whole new breed of creators to enter the market.”
The tectonic shifts to American culture and society due to the pandemic are far from over. One of the more glaring ones is that the U.S. labor market is going absolutely haywire.
Millions are unemployed, yet companies — from retail to customer service to airlines — can’t find enough workers. This perplexing paradox behind Uber price surges and waiting on an endless hold because your flight was canceled isn’t just inconvenient — it’s a loud and clear message from the post-pandemic American workforce. Many are underpaid, undervalued and underwhelmed in their current jobs, and are willing to change careers or walk away from certain types of work for good.
It’s worth noting that low-wage workers aren’t the only ones putting their foot down; white-collar quits are also at an all-time high. Extended unemployment benefits implemented during the pandemic may be keeping some workers on the sidelines, but employee burnout and job dissatisfaction are also primary culprits.
We have a wage problem and an employee satisfaction problem, and Congress has a long summer ahead of it to attempt to find a solution. But what are companies supposed to do in the meantime?
Adopting AI in manufacturing accelerated during the pandemic to deal with volatility in the supply chain, but now it must move from “pilot purgatory” to widespread implementation.
At this particular moment, businesses need a stopgap solution either until September, when COVID-19 relief and unemployment benefits are earmarked to expire, or something longer term and more durable that not only keeps the engine running but propels the ship forward. Adopting AI can be the key to both.
Declaring that we’re on the precipice of an AI awakening is probably nowhere near the most shocking thing you’ve read this year. But just a few short years ago, it would have frightened a vast number of people, as advances in automation and AI began to transform from a distant idea into a very personal reality. People were (and some holdouts remain) genuinely worried about losing their job, their lifeline, with visions of robots and virtual agents taking over.
But does this “AI takes jobs” storyline hold up in the cultural and economic moment we’re in?
If this “labor shortage” unveils any silver lining, it’s our real-world version of the Sorting Hat. Taking money out of the equation on the question of employment opens our eyes to what work people find desirable and, more evidently, what they don’t. Specifically, the manufacturing, retail and service industries are taking the hardest labor hits, underscoring that tasks associated with those jobs — repetitive duties, unrewarding customer service tasks and physical labor — are driving more and more potential workers away.
Adopting AI in manufacturing accelerated during the pandemic to deal with volatility in the supply chain, but now it must move from “pilot purgatory” to widespread implementation. The best use cases for AI in this industry are ones that help with supply chain optimization, including quality inspection, general supply chain management and risk/inventory management.
Most critically, AI can predict when equipment might fail or break, reducing costs and downtime to almost zero. Industry leaders believe that AI is not only beneficial for business continuity but that it can augment the work and efficiency of existing employees rather than displace them. AI can assist employees by providing real-time guidance and training, flagging safety hazards, and freeing them up to do less repetitive, low-skilled work by taking on such tasks itself, such as detecting potential assembly line defects.
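The predictive-maintenance idea above can be sketched in miniature. This is purely illustrative — real systems use trained models on rich sensor data, not a fixed rule — and the readings, window and threshold here are all made-up values:

```python
# Toy predictive-maintenance check: flag a machine for service when its
# latest vibration reading drifts well above its recent baseline.
# (Illustrative only -- production systems use trained models, not a fixed rule.)

def needs_service(readings, window=5, threshold=1.5):
    """Flag if the newest reading exceeds the trailing average by `threshold`x."""
    if len(readings) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(readings[-window - 1:-1]) / window
    return readings[-1] > baseline * threshold

healthy = [1.0, 1.1, 0.9, 1.0, 1.05, 1.1]
failing = [1.0, 1.1, 0.9, 1.0, 1.05, 2.4]  # sudden vibration spike

print(needs_service(healthy))  # False
print(needs_service(failing))  # True
```

Even this crude rule shows the payoff: catching the spike before the part fails turns unplanned downtime into a scheduled fix.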
In the manufacturing industry, this current labor shortage is not a new phenomenon. The industry has been facing a perception problem in the U.S. for a long time, mainly because young workers think manufacturers are “low tech” and low paying. AI can make existing jobs more attractive and directly lead to a better bottom line while also creating new roles for companies that attract subject-matter talent and expertise.
In the retail and service industries, arduous customer service tasks and low pay are leading many employees to walk out the door. Those that are still sticking it out have their hands tied because of their benefits, even though they are unhappy with the work. Conversational AI, which is AI that can interact with people in a human-like manner by leveraging natural language processing and machine learning, can relieve employees of many of the more monotonous customer experience interactions so they can take on roles focused on elevating retail and service brands with more cerebral, thoughtful human input.
Many retail and service companies adopted scripted chatbots during the pandemic to help with the large online volumes only to realize that chatbots operate on a fixed decision tree — meaning if you ask something out of context, the whole customer service process breaks down. Advanced conversational AI technologies are modeled on the human brain. They even learn as they go, getting more skilled over time, presenting a solution that saves retail and service employees from the mundane while boosting customer satisfaction and revenue.
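The fixed-decision-tree failure mode described above is easy to see in code. This is a hypothetical, minimal scripted bot — the tree and phrasings are invented for illustration, not taken from any real product:

```python
# Why scripted chatbots break: a fixed decision tree only recognizes the
# exact branches it was scripted with. (Hypothetical tree, for illustration.)

TREE = {
    "start": {"prompt": "Do you want 'returns' or 'shipping' help?",
              "returns": "returns", "shipping": "shipping"},
    "returns": {"prompt": "Is your item 'damaged' or 'unwanted'?",
                "damaged": "refund", "unwanted": "refund"},
    "shipping": {"prompt": "Enter your order number."},
}

def scripted_bot(state, user_input):
    """Advance one step through the tree; fail on anything off-script."""
    node = TREE.get(state, {})
    next_state = node.get(user_input.strip().lower())
    if next_state is None:
        return None, "Sorry, I didn't understand. Please start over."
    return next_state, TREE.get(next_state, {}).get("prompt", "Done.")

# On-script input advances the flow:
print(scripted_bot("start", "returns"))
# Anything out of context derails it entirely:
print(scripted_bot("start", "my package never arrived"))
```

A conversational AI system, by contrast, maps free-form input like “my package never arrived” onto intent, rather than requiring the customer to guess the magic words.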
Hesitancy and misconceptions about AI in the workplace have long been a barrier to widespread adoption — but companies experiencing labor shortages should consider where it can make their employees’ lives better and easier, which can only be a benefit for bottom-line growth. And it might just be the big break that AI needs.
A new biometrics privacy ordinance has taken effect across New York City, putting new limits on what businesses can do with the biometric data they collect on their customers.
From Friday, businesses that collect biometric information — most commonly in the form of facial recognition and fingerprints — are required to conspicuously post notices and signs to customers at their doors explaining how their data will be collected. The ordinance applies to a wide range of businesses — retailers, stores, restaurants, and theaters, to name a few — which are also barred from selling, sharing, or otherwise profiting from the biometric information that they collect.
The move will give New Yorkers — and the city’s millions of visitors each year — greater protections over how their biometric data is collected and used, while also serving to dissuade businesses from using technology that critics say is discriminatory and often doesn’t work.
Businesses can face stiff penalties for violating the law, but can escape fines if they fix the violation quickly.
The law is by no means perfect, as none of these laws ever are. For one, it doesn’t apply to government agencies, including the police. Of the businesses that the ordinance does cover, it exempts employees of those businesses, such as those required to clock in and out of work with a fingerprint. And the definition of what counts as a biometric will likely face challenges that could expand or narrow what is covered.
New York is the latest U.S. city to enact a biometric privacy law, after Portland, Oregon passed a similar ordinance last year. But the law falls short of stronger biometric privacy laws in effect elsewhere — most notably in Illinois, whose Biometric Information Privacy Act grants residents the right to sue for any use of their biometric data without consent. Facebook this year settled for $650 million in a class-action suit that Illinois residents filed in 2015 after the social networking giant used facial recognition to tag users in photos without their permission.
Albert Fox Cahn, the executive director of the New York-based Surveillance Technology Oversight Project, said the law is an “important step” to learn how New Yorkers are tracked by local businesses.
“A false facial recognition match could mean having the NYPD called on you just for walking into a Rite Aid or Target,” he told TechCrunch. He also said that New York should go further by outlawing systems like facial recognition altogether, as some cities have done.
Research papers come out far too frequently for anyone to read them all. That’s especially true in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the more interesting recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.
This week we have a number of entries aimed at identifying or confirming bias or cheating behaviors in machine learning systems, or failures in the data that support them. But first a purely visually appealing project from the University of Washington being presented at the Conference on Computer Vision and Pattern Recognition.
They trained a system that recognizes and predicts the flow of water, clouds, smoke and other fluid features in photos, animating them from a single still image. The result is quite cool:
Why, though? Well, for one thing, the future of photography is code, and the better our cameras understand the world they’re pointed at, the better they can accommodate or recreate it. Fake river flow isn’t in high demand, but accurately predicting movement and the behavior of common photo features is.
An important question to answer in the creation and application of any machine learning system is whether it’s actually doing the thing you want it to. The history of “AI” is riddled with examples of models that found a way to look like they’re performing a task without actually doing it — sort of like a kid kicking everything under the bed when they’re supposed to clean their room.
This is a serious problem in the medical field, where a system that’s faking it could have dire consequences. A study, also from UW, finds models proposed in the literature have a tendency to do this, in what the researchers call “shortcut learning.” These shortcuts could be simple — basing an X-ray’s risk on the patient’s demographics rather than the data in the image, for instance — or more unique, like relying heavily on conditions in the hospital its data is from, making it impossible to generalize to others.
The team found that many models basically failed when used on datasets that differed from their training ones. They hope that advances in machine learning transparency (opening the “black box”) will make it easier to tell when these systems are skirting the rules.
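The shortcut-learning failure the researchers describe can be demonstrated on synthetic data. The “hospital machine” tag below is an invented stand-in for the demographic or site-specific metadata the study discusses:

```python
# Toy demonstration of "shortcut learning": a model that latches onto a
# spurious feature looks accurate in-distribution and collapses under shift.
# (Synthetic data; the scanner tag is the shortcut, not a real medical feature.)
import random

random.seed(0)

def make_data(n, shortcut_holds):
    data = []
    for _ in range(n):
        sick = random.random() < 0.5
        if shortcut_holds:
            # In the training hospital, sick patients happened to be
            # scanned on machine "B" -- a spurious correlation.
            machine = "B" if sick else "A"
        else:
            # At a new hospital the correlation is gone.
            machine = random.choice("AB")
        data.append((machine, sick))
    return data

# A "model" that learned only the shortcut: predict sick iff machine == "B".
predict = lambda machine: machine == "B"

def accuracy(data):
    return sum(predict(m) == y for m, y in data) / len(data)

print(accuracy(make_data(1000, shortcut_holds=True)))   # perfect in-distribution
print(accuracy(make_data(1000, shortcut_holds=False)))  # chance level elsewhere
```

The same headline accuracy number hides two very different models, which is exactly why out-of-distribution evaluation (and model transparency) matters.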
Image Credits: Siegfried Modola / Getty Images
Seattle-based Edge Delta, a startup that is building a modern distributed monitoring stack that is competing directly with industry heavyweights like Splunk, New Relic and Datadog, today announced that it has raised a $15 million Series A funding round led by Menlo Ventures and Tim Tully, the former CTO of Splunk. Previous investors MaC Venture Capital and Amity Ventures also participated in this round, which brings the company’s total funding to date to $18 million.
“Our thesis is that there’s no way that enterprises today can continue to analyze all their data in real time,” said Edge Delta co-founder and CEO Ozan Unlu, who has worked in the observability space for about 15 years already (including at Microsoft and Sumo Logic). “The way that it was traditionally done with these primitive, centralized models — there’s just too much data. It worked 10 years ago, but gigabytes turned into terabytes and now terabytes are turning into petabytes. That whole model is breaking down.”
He acknowledges that traditional big data warehousing works quite well for business intelligence and analytics use cases. But that’s not real-time and also involves moving a lot of data from where it’s generated to a centralized warehouse. The promise of Edge Delta is that it can offer all of the capabilities of this centralized model by allowing enterprises to start to analyze their logs, metrics, traces and other telemetry right at the source. This, in turn, also allows them to get visibility into all of the data that’s generated there, instead of many of today’s systems, which only provide insights into a small slice of this information.
Competing services tend to have agents that run on a customer’s machine but typically only compress the data, encrypt it and then send it on to its final destination; Edge Delta’s agent starts analyzing the data right at the local level. With that, if you want to, for example, graph error rates from your Kubernetes cluster, you wouldn’t have to gather all of this data and send it off to your data warehouse where it has to be indexed before it can be analyzed and graphed.
With Edge Delta, you could instead have every single node draw its own graph, which Edge Delta can then combine later on. With this, Edge Delta argues, its agent is able to offer significant performance benefits, often by orders of magnitude. This also allows businesses to run their machine learning models at the edge, as well.
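The edge-side pattern described above — each node reduces its own raw logs to a small summary, and only the summaries travel to a central point — can be sketched as follows. The log format and function names are hypothetical; Edge Delta’s actual agent internals aren’t public:

```python
# Sketch of edge-side aggregation: each node collapses its raw logs into a
# tiny per-level summary, and only those summaries are merged centrally.
# (Hypothetical log format; not Edge Delta's actual agent.)
from collections import Counter

def summarize_node(log_lines):
    """Run on each node: reduce raw log lines to per-level counts."""
    return Counter(line.split()[0] for line in log_lines)

def merge_summaries(summaries):
    """Run centrally: combine the small per-node summaries."""
    total = Counter()
    for s in summaries:
        total += s
    return total

node1 = ["ERROR timeout", "INFO ok", "INFO ok"]
node2 = ["INFO ok", "ERROR disk-full"]

total = merge_summaries([summarize_node(node1), summarize_node(node2)])
print(total["ERROR"] / sum(total.values()))  # cluster-wide error rate: 0.4
```

The raw log lines never leave their nodes — only a few counters do — which is where the bandwidth, storage and latency savings come from.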
“What I saw before I was leaving Splunk was that people were sort of being choosy about where they put workloads for a variety of reasons, including cost control,” said Menlo Ventures’ Tim Tully, who joined the firm only a couple of months ago. “So this idea that you can move some of the compute down to the edge and lower latency and do machine learning at the edge in a distributed way was incredibly fascinating to me.”
Edge Delta is able to offer a significantly cheaper service, in large part because it doesn’t have to run a lot of compute and manage huge storage pools itself since a lot of that is handled at the edge. And while the customers obviously still incur some overhead to provision this compute power, it’s still significantly less than what they would be paying for a comparable service. The company argues that it typically sees about a 90 percent improvement in total cost of ownership compared to traditional centralized services.
Edge Delta charges based on volume and it is not shy to compare its prices with Splunk’s and does so right on its pricing calculator. Indeed, in talking to Tully and Unlu, Splunk was clearly on everybody’s mind.
“There’s kind of this concept of unbundling of Splunk,” Unlu said. “You have Snowflake and the data warehouse solutions coming in from one side, and they’re saying, ‘hey, if you don’t care about real time, go use us.’ And then we’re the other half of the equation, which is: actually there’s a lot of real-time operational use cases and this model is actually better for those massive stream processing datasets that you required to analyze in real time.”
But despite this competition, Edge Delta can still integrate with Splunk and similar services. Users can still take their data, ingest it through Edge Delta and then pass it on to the likes of Sumo Logic, Splunk, AWS’s S3 and other solutions.
“If you follow the trajectory of Splunk, we had this whole idea of building this business around IoT and Splunk at the Edge — and we never really quite got there,” Tully said. “I think what we’re winding up seeing collectively is the edge actually means something a little bit different. […] The advances in distributed computing and sophistication of hardware at the edge allows these types of problems to be solved at a lower cost and lower latency.”
The Edge Delta team plans to use the new funding to expand its team and support all of the new customers that have shown interest in the product. For that, it is building out its go-to-market and marketing teams, as well as its customer success and support teams.
In many industries, Databricks has become synonymous with modern data warehousing and data lakes. It’s exactly these technologies that are at the core of what modern businesses are doing around operationalizing their data, data engineering and building machine-learning models — and Databricks is at the forefront of startups that offer these services on a SaaS-like platform. Who better, then, to join us at TC Sessions: SaaS on October 27 than Databricks co-founder and CEO Ali Ghodsi?
Ghodsi co-founded Databricks together with a handful of partners in 2013 with the idea of commercializing the open-source Apache Spark analytics engine for big data processing. As is the case with so many open-source companies, Ghodsi, who has a Ph.D. from KTH/Royal Institute of Technology in Sweden and whose research focused on distributed computing, was one of the original developers of the Spark engine. At Databricks, he first served as the company’s VP of Engineering and Product Management before being named CEO in 2016.
Under his leadership, Databricks has reached a $28 billion valuation and has now raised a total of $1.9 billion. The company’s bets on open source, data and AI are clearly paying off and unlike some of its competitors, Databricks has done a good job staying ahead of the trends (and had a bit of luck given that some of those trends, including the rise of machine learning, really benefitted the company, too).
Despite consistent rumors of Microsoft and others trying to acquire the company in recent years, Ghodsi and his board have clearly decided that they want to remain independent. Instead, Databricks has shrewdly partnered with all of the big cloud players, starting with Microsoft, which actually gave the service the kind of prime placement in its Azure cloud computing service and user interface that was previously unheard of. Most recently, the company brought its platform to Google Cloud.
Ghodsi will join us at TC Sessions: SaaS to talk about building his company, raising funding at crazy valuations and what the future of data management in the AI space looks like.
$75 Early Bird ticket sales end October 1. Grab your ticket today and gain insights on how to scale your B2B and B2C company from CEOs who have done it themselves. Meet the founders building with low code/no code, meet the investors cutting the checks, and discover the next generation of SaaS startups bridging data with new technologies.
Industries like real estate, automotive and financial services have long, largely offline sales cycles, and digital advertising tends not to perform well in these areas. Conversion rates are low, and because the real-world assets are offline, advertisers are tempted to buy leads and clicks, which can inflate customer acquisition costs. People browse online, but they end up buying offline.
A new startup, Tomi, plans to address this issue by processing a user’s behavior on a company’s website (using a tracking pixel, combined with ad APIs and CRMs) to help companies reach customers more in the way an ecommerce business would.
It’s now raised a $1M seed round from investors including Begin Capital and Phystech Leadership Fund.
Founded by Konstantin Bayandin — a former senior director of digital marketing and technology at Compass and chief marketing officer at Ozon, ‘Russia’s Amazon’ — Tomi competes against similar AdTech companies such as Anytrack, Sociaro, Meetotis, Alytics and Postclick.
However, the difference, Bayandin says, is that Tomi “focuses on offline conversions and works with multiple ad channels, such as Facebook, Instagram and Google.”
Bayandin says: “Real-estate companies would love to leverage online ads in order to sell their inventory but it turns out to be too expensive and difficult. People like to browse but rarely convert and most of these transactions happen offline. So real-estate clients don’t know how to optimize for their real buyers. Tomi uses machine learning to analyze the way real buyers browse the website and optimize ad campaigns towards conversions.”
The background to all this is that with Apple closing down IDFA, Google planning to remove third-party cookies from its Chrome browser, and the latest iOS 14.5 update allowing users to opt out of “personalized ads”, the entire ad business is in flux, so new tools are going to be required. Bayandin says Tomi is part of this new wave of AdTech.
AI has been filling in the gaps for illustrators and photographers for years now — literally, it intelligently fills gaps with visual content. But the latest tools are aimed at letting an AI give artists a hand from the earliest, blank-canvas stages of a piece. Nvidia’s new Canvas tool lets the creator rough in a landscape like paint-by-numbers blobs, then fills it in with convincingly photorealistic (if not quite gallery-ready) content.
Each distinct color represents a different type of feature: mountains, water, grass, ruins, etc. When colors are blobbed onto the canvas, the crude sketch is passed to a generative adversarial network. GANs essentially pass content back and forth between a creator AI that tries to make (in this case) a realistic image and a detector AI that evaluates how realistic that image is. These work together to make what they think is a fairly realistic depiction of what’s been suggested.
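Before the GAN ever runs, the rough painting is just a grid of semantic labels, one per color. The color-to-class mapping below is invented for illustration — Nvidia hasn’t published Canvas’s internal palette — but it shows the shape of the input a tool like this works from:

```python
# Sketch of the input a tool like Canvas starts from: the blob painting is
# a grid of class labels (one per color), which the GAN then translates
# into a photorealistic image. (Color-to-class mapping is made up here.)

COLOR_TO_CLASS = {
    (0, 0, 255): "water",
    (0, 128, 0): "grass",
    (128, 128, 128): "mountain",
}

def to_label_map(pixels):
    """Convert an RGB blob sketch into a semantic label grid."""
    return [[COLOR_TO_CLASS.get(px, "unknown") for px in row] for row in pixels]

sketch = [
    [(128, 128, 128), (128, 128, 128)],  # mountain along the top
    [(0, 128, 0), (0, 0, 255)],          # grass and water below
]
print(to_label_map(sketch))
# [['mountain', 'mountain'], ['grass', 'water']]
```

The generator’s job is then image-to-image translation: turn that label grid into pixels the discriminator can’t distinguish from a real landscape photo.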
It’s pretty much a more user-friendly version of the prototype GauGAN (get it?) shown at CVPR in 2019. This one is much smoother around the edges, produces better imagery, and can run on any Windows computer with a decent Nvidia graphics card.
This method has been used to create very realistic faces, animals, and landscapes, though there’s usually some kind of “tell” that a human can spot. But the Canvas app isn’t trying to make something indistinguishable from reality — as concept artist Jama Jurabaev explains in the video below, it’s more about being able to experiment freely with imagery more detailed than a doodle.
For instance, if you want to have a moldering ruin in a field with a river off to one side, a quick pencil sketch can only tell you so much about what the final piece might look like. What if you have it one way in your head, and then two hours of painting and coloring later you realize that because the sun is setting on the left side of the painting, it makes the shadows awkward in the foreground?
If instead you just scribbled these features into Canvas, you might see that this was the case right away, and move on to the next idea. There are even ways to quickly change the time of day, palette, and other high-level parameters so they can quickly be evaluated as options.
“I’m not afraid of blank canvas any more,” said Jurabaev. “I’m not afraid to make very big changes, because I know there’s always AI helping me out with details… I can put all my effort into the creative side of things, and I’ll let canvas handle the rest.”
It’s very like Google’s Chimera Painter, if you remember that particular nightmare fuel, in which an almost identical process was used to create fantastic animals. Instead of snow, rock, and bushes it had hind leg, fur, teeth and so on, which made it rather more complicated to use and easy to go wrong with.
Still, it may be better than the alternative, for certainly an amateur like myself could never draw even the weird tube-like animals that resulted from basic blob painting.
Unlike Chimera Painter, however, this app is run locally, and requires a beefy Nvidia video card to do it. GPUs have long been the hardware of choice for machine learning applications, and something like a real-time GAN definitely needs a chunky one. You can download the app for free here.
Vantage, a service that helps businesses analyze and reduce their AWS costs, today announced that it has raised a $4 million seed round led by Andreessen Horowitz. A number of angel investors, including Brianne Kimmel, Julia Lipton, Stephanie Friedman, Calvin French Owen, Ben and Moisey Uretsky, Mitch Wainer and Justin Gage, also participated in this round.
Vantage started out with a focus on making the AWS console a bit easier to use — and help businesses figure out what they are spending their cloud infrastructure budgets on in the process. But as Vantage co-founder and CEO Ben Schaechter told me, it was the cost transparency features that really caught on with users.
“We were advertising ourselves as being an alternative AWS console with a focus on developer experience and cost transparency,” he said. “What was interesting is — even in the early days of early access before the formal GA launch in January — I would say more than 95% of the feedback that we were getting from customers was entirely around the cost features that we had in Vantage.”
Like any good startup, the Vantage team looked at this and decided to double down on these features and highlight them in its marketing, though it kept the existing AWS Console-related tools as well. The reason the other tools didn’t quite take off, Schaechter believes, is because more and more, AWS users have become accustomed to infrastructure-as-code to do their own automatic provisioning. And with that, they spend a lot less time in the AWS Console anyway.
“But one consistent thing — across the board — was that people were having a really, really hard time twelve times a year, where they would get a shock AWS bill and had to figure out what happened. What Vantage is doing today is providing a lot of value on the transparency front there,” he said.
Over the course of the last few months, the team added a number of new features to its cost transparency tools, including machine learning-driven predictions (both on the overall account level and service level) and the ability to share reports across teams.
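Vantage hasn’t published how its prediction model works, but the simplest stand-in for a cost forecast of this kind is a least-squares trend line over past monthly spend, extrapolated one month ahead. The function and figures below are illustrative only:

```python
# Simplest stand-in for a cloud-cost forecast: fit a least-squares trend
# line to past monthly spend and extrapolate one month ahead.
# (Illustrative only -- not Vantage's actual prediction model.)

def forecast_next(costs):
    """Least-squares linear fit of spend vs. month index, projected forward."""
    n = len(costs)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(costs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, costs))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept  # predicted value for next month

monthly_spend = [1000.0, 1100.0, 1200.0, 1300.0]  # steady $100/month growth
print(forecast_next(monthly_spend))  # 1400.0
```

A real system would forecast per service as well as per account, which is exactly the split Vantage describes.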
While Vantage expects to add support for other clouds in the future, likely starting with Azure and then GCP, that’s actually not what the team is focused on right now. Instead, Schaechter noted, the team plans to add support for bringing in data from third-party cloud services.
“The number one line item for companies tends to be AWS, GCP, Azure,” he said. “But then, after that, it’s Datadog, Cloudflare, Sumo Logic, things along those lines. Right now, there’s no way to see a P&L or an ROI from a cloud usage-based perspective. Vantage can be the tool where that’s showing you essentially all of your cloud costs in one space.”
That is likely the vision the investors bought in as well and even though Vantage is now going up against enterprise tools like Apptio’s Cloudability and VMware’s CloudHealth, Schaechter doesn’t seem to be all that worried about the competition. He argues that these are tools that were born in a time when AWS had only a handful of services and only a few ways of interacting with those. He believes that Vantage, as a modern self-service platform, will have quite a few advantages over these older services.
“You can get up and running in a few clicks. You don’t have to talk to a sales team. We’re helping a large number of startups at this stage all the way up to the enterprise, whereas Cloudability and Cloud Health are, in my mind, kind of antiquated enterprise offerings. No startup is choosing to use those at this point, as far as I know,” he said.
The team, which until now mostly consisted of Schaechter and his co-founder and CTO Brooke McKim, bootstrapped the company up to this point. Now they plan to use the new capital to build out the team (and the company is actively hiring right now), both on the development and go-to-market sides.
The company offers a free starter plan for businesses that track up to $2,500 in monthly AWS cost, with paid plans starting at $30 per month for those who need to track larger accounts.
Insilico Medicine, an AI-based platform for drug development and discovery, announced $255 million in Series C financing on Tuesday. The massive round reflects a recent breakthrough for the company: proof that its AI-based platform can create a new target for a disease, develop a bespoke molecule to address it, and begin the clinical trial process.
It’s also yet another indicator that AI and drug discovery continues to be especially attractive for investors.
Insilico Medicine is a Hong Kong-based company founded in 2014 around one central premise: that AI assisted systems can identify novel drug targets for untreated diseases, assist in the development of new treatments, and eventually predict how well those treatments may perform in clinical trials. Previously, the company had raised $51.3 million in funding, according to Crunchbase.
Insilico Medicine’s aim to use AI to drive drug development isn’t particularly new, but there is some data to suggest that the company might actually accomplish that gauntlet of discovery all the way through trial prediction. In 2020, the company identified a novel drug target for idiopathic pulmonary fibrosis, a disease in which tiny air sacs in the lungs become scarred, which makes breathing laborious.
Two AI-based platforms first identified 20 potential targets, narrowed them down to one, and then designed a small molecule treatment that showed promise in animal studies. The company is currently filing an investigational new drug application with the FDA and will begin human dosing this year, with aims to begin a clinical trial late this year or early next year.
The focus here isn’t on the drug, though, it’s on the process. This project condensed the process of preclinical drug development that typically takes multiple years and hundreds of millions of dollars into just 18 months, for a total cost of about $2.6 million. Still, founder Alex Zhavoronkov doesn’t think that Insilico Medicine’s strengths lie primarily in accelerating preclinical drug development or reducing costs: its main appeal is in eliminating an element of guesswork in drug discovery, he suggests.
“Currently we have 16 therapeutic assets, not just IPF,” he says. “It definitely raised some eyebrows.”
“It’s about the probability of success,” he continues. “So the probability of success of connecting the right target to the right disease with a great molecule is very, very low. The fact that we managed to do it in IPF and other diseases I can’t talk about yet – it increases confidence in AI in general.”
Bolstered partially by the proof-of-concept developed by the IPF project and enthusiasm around AI based drug development, Insilico Medicine attracted a long list of investors in this most recent round.
The round is led by Warburg Pincus and includes returning investors Qiming Venture Partners, Pavilion Capital, Eight Roads Ventures, Lilly Asia Ventures, Sinovation Ventures, BOLD Capital Partners, Formic Ventures and Baidu Ventures, as well as new investors CPE, OrbiMed, Mirae Asset Capital, B Capital Group, Deerfield Management, Maison Capital, Lake Bleu Capital, President International Development Corporation, Sequoia Capital China and Sage Partners.
This current round was oversubscribed four-fold, according to Zhavoronkov.
A 2018 study of 63 drugs approved by the FDA between 2009 and 2018 found that the median capitalized research and development investment needed to bring a drug to market was $985 million, which also includes the cost of failed clinical trials.
Those costs, and the low likelihood of getting a drug approved, have slowed the pace of drug development. R&D returns for biopharmaceuticals hit a low of 1.6 percent in 2019 and bounced back to a measly 2.5 percent in 2020, according to a 2021 Deloitte report.
Ideally, Zhavoronkov imagines an AI-based platform trained on rich data that can cut down on the number of failed trials. There are two major pieces of that puzzle: PandaOmics, an AI platform that identifies those targets, and Chemistry 42, a platform that designs a molecule to bind to a given target.
“We have a tool, which incorporates more than 60 philosophies for target discovery,” he says.
“You are betting something that is novel, but at the same time you have some pockets of evidence that strengthen your hypothesis. That’s what our AI does very well.”
Although the IPF project has not been fully published in a peer-reviewed journal, a similar project was published in Nature Biotechnology. In that paper, Insilico’s deep learning model was able to identify potential compounds in just 21 days.
The IPF project is a scale-up of this idea. Zhavoronkov doesn’t just want to identify molecules for known targets; he wants to find new targets and shepherd them all the way through clinical trials — and to keep collecting data during those trials that might improve future drug discovery projects.
“So far nobody has challenged us to solve a disease in partnership,” he says. “If that happens, I’ll be a very happy man.”
That said, Insilico Medicine’s approach to novel target discovery has also been used piecemeal. The company has collaborated with Pfizer on novel target discovery, with Johnson & Johnson on small molecule design, and with Taisho Pharmaceuticals on both. Today, the company also announced a new partnership with Teva Branded Pharmaceutical Products R&D, Inc., under which Teva will aim to use PandaOmics to identify new drug targets.
Nor is Insilico Medicine alone in raking in money and partnerships. The whole field of AI-based novel target discovery has been experiencing significant hype.
In 2019, Nature noted that at least 20 partnerships between major drug companies and AI drug discovery tech companies had been reported. In 2020, investment in AI companies pursuing drug development increased to $13.9 billion, a four-fold increase from 2019, per Stanford University’s Artificial Intelligence Index annual report.
Drug discovery projects received the greatest amount of private AI investment in 2020, a trend that can be partially attributed to the pandemic-driven need for rapid drug development. The roots of the hype, however, predate Covid-19.
Zhavoronkov is aware that AI-based drug development is riding a bit of a hype wave right now. “Companies without substantial evidence supporting their AI-powered drug discovery claims manage to raise very quickly,” he notes.
Insilico Medicine, he says, can distinguish itself based on the quality of its investors. “Our investors don’t gamble,” he says.
But as with so many other AI-based drug discovery platforms, we’ll have to see whether Insilico Medicine’s candidates make it through the clinical trial churn.