The President’s Council of Advisors on Science and Technology predicts that U.S. companies will spend upward of $100 billion on AI R&D per year by 2025. Much of this spending today is done by six tech companies — Microsoft, Google, Amazon, IBM, Facebook and Apple, according to a recent study from CSET at Georgetown University. But what if you’re a startup whose product relies on AI at its core?
Can early-stage companies support a research-based workflow? At a startup or scaleup, the focus is often more on concrete product development than research. For obvious reasons, companies want to make things that matter to their customers, investors and stakeholders. Ideally, there’s a way to do both.
Before investing in staffing an AI research lab, consider this advice to determine whether you’re ready to get started.
Assuming it’s your organization’s priority to do innovative AI research, the first step is to hire one or two researchers. At Unbabel, we did this early by hiring Ph.D.s and getting started quickly with research for a product that hadn’t been developed yet. Some researchers will build from scratch and others will take your data and try to find a pre-existing model that fits your needs.
While Google’s X division may have the capital to focus on moonshots, most startups can only invest in innovation that provides them a competitive advantage or improves their product.
From there, you’ll need to hire research engineers or machine learning operations (MLOps) professionals. Research is only a small part of using AI in production. Research engineers move research models into production, monitor a model’s results and refine it if it stops predicting well (or otherwise stops operating as planned). They often automate monitoring and deployment procedures rather than doing everything manually.
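To make the monitoring half of that job concrete, here is a minimal sketch of the kind of automated quality check a research engineer might wire into a deployment pipeline. The function name, metric and thresholds are hypothetical, not any particular team’s practice.

```python
# Minimal sketch of an automated model-quality check of the kind a research
# engineer might run on a schedule; names and thresholds are hypothetical.

def should_retrain(recent_accuracy: list[float],
                   baseline_accuracy: float,
                   tolerance: float = 0.05) -> bool:
    """Flag the model for retraining when its rolling accuracy on recent
    production traffic drops noticeably below the offline baseline."""
    if not recent_accuracy:
        return False
    rolling = sum(recent_accuracy) / len(recent_accuracy)
    return rolling < baseline_accuracy - tolerance

# Offline evaluation scored 0.91; production accuracy has drifted down.
print(should_retrain([0.84, 0.83, 0.85], baseline_accuracy=0.91))  # True
```

In practice a check like this would feed an alerting or retraining job rather than a print statement, closing the feedback loop between researchers and engineers.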
None of this falls within the scope of a research scientist — they’re accustomed to working with data sets and models in training. That said, researchers and engineers will need to work together in a continuous feedback loop to refine and retrain models based on actual performance in inference.
The CSET research cited above shows that 85% of AI labs in North America and Europe do some form of basic AI research, while fewer than 15% focus on development. The rest of the world is different: A majority of labs in other countries, such as India and Israel, focus on development.
Environmental, social and governance (ESG) factors should be key considerations for CTOs and technology leaders scaling next-generation companies from day one. Investors are increasingly prioritizing startups that focus on ESG, with sustainable investing skyrocketing.
What’s driving this shift in mentality across every industry? It’s simple: Consumers are no longer willing to support companies that don’t prioritize sustainability. According to a survey conducted by IBM, the COVID-19 pandemic has elevated consumers’ focus on sustainability and their willingness to pay out of their own pockets for a sustainable future. In tandem, federal action on climate change is increasing, with the U.S. rejoining the Paris Climate Agreement and a recent executive order on climate commitments.
Over the past few years, we have seen an uptick in organizations setting long-term sustainability goals. However, CEOs and chief sustainability officers typically forecast these goals, and they are often long term and aspirational — leaving the near and midterm implementation of ESG programs to operations and technology teams.
CTOs are a crucial part of the planning process, and in fact, can be the secret weapon to help their organization supercharge their ESG targets. Below are a few immediate steps that CTOs and technology leaders can take to achieve sustainability and make an ethical impact.
As more businesses digitize and more consumers use devices and cloud services, the energy needed by data centers continues to rise. In fact, data centers account for an estimated 1% of worldwide electricity usage. However, a forecast from IDC shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide from 2021 through 2024.
Make compute workloads more efficient: First, it’s important to understand the links between computing, power consumption and greenhouse gas emissions from fossil fuels. Making your app and compute workloads more efficient will reduce costs and energy requirements, thus reducing their carbon footprint. In the cloud, tools like compute instance autoscaling and sizing recommendations help ensure you aren’t running more VMs, or larger ones, than demand requires. You can also move to serverless computing, which does much of this scaling work automatically.
Deploy compute workloads in regions with lower carbon intensity: Until recently, choosing cloud regions meant considering factors like cost and latency to end users. But carbon is another factor worth considering. While the compute capabilities of regions are similar, their carbon intensities typically vary. Some regions have access to more carbon-free energy production than others, and consequently the carbon intensity for each region is different.
So, choosing a cloud region with lower carbon intensity is often the simplest and most impactful step you can take. Alistair Scott, co-founder and CTO of cloud infrastructure startup Infracost, underscores this sentiment: “Engineers want to do the right thing and reduce waste, and I think cloud providers can help with that. The key is to provide information in workflow, so the people who are responsible for infra provisioning can weigh the CO2 impact versus other factors such as cost and data residency before they deploy.”
Another step is to estimate your specific workload’s carbon footprint using open-source software like Cloud Carbon Footprint, a project sponsored by ThoughtWorks. Etsy has open-sourced a similar tool called Cloud Jewels that estimates energy consumption based on cloud usage information. This is helping them track progress toward their target of reducing their energy intensity by 25% by 2025.
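As a rough illustration of why region choice matters, workload emissions come down to energy used multiplied by the grid’s carbon intensity. The coefficients and region names below are illustrative assumptions, not Cloud Carbon Footprint’s or any provider’s actual figures.

```python
# Back-of-the-envelope cloud carbon estimate. The coefficients are
# illustrative assumptions, not any tool's or provider's real numbers.

# Assumed average power draw per vCPU-hour, in kWh (hypothetical value).
KWH_PER_VCPU_HOUR = 0.004

# Illustrative grid carbon intensity per region, in kgCO2e per kWh.
REGION_CARBON_INTENSITY = {
    "low-carbon-region": 0.03,   # e.g., mostly hydro and wind
    "high-carbon-region": 0.60,  # e.g., mostly coal and gas
}

def estimate_emissions_kg(vcpu_hours: float, region: str) -> float:
    """Estimate kgCO2e for a workload: energy used x grid carbon intensity."""
    energy_kwh = vcpu_hours * KWH_PER_VCPU_HOUR
    return energy_kwh * REGION_CARBON_INTENSITY[region]

# Under these assumptions, the same workload's footprint differs ~20x
# purely by region choice.
print(estimate_emissions_kg(10_000, "low-carbon-region"))   # ~1.2 kg
print(estimate_emissions_kg(10_000, "high-carbon-region"))  # ~24 kg
```

Tools like Cloud Carbon Footprint and Cloud Jewels do essentially this arithmetic at scale, with measured coefficients per instance type and region.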
Beyond reducing environmental impact, CTOs and technology leaders can have significant, direct and meaningful social impact.
Include societal benefits in the design of your products: As a CTO or technology founder, you can help ensure that societal benefits are prioritized in your product roadmaps. For example, if you’re a fintech CTO, you can add product features to expand access to credit in underserved populations. Startups like LoanWell are on a mission to increase access to capital for those typically left out of the financial system and make the loan origination process more efficient and equitable.
When thinking about product design, a product needs to be as useful and effective as it is sustainable. By treating sustainability and societal impact as core elements of product innovation, there is an opportunity to differentiate yourself in socially beneficial ways. For example, Lush has been a pioneer of package-free solutions and launched Lush Lens — a virtual packaging app that uses a phone’s camera and AI to overlay product information. The company hit 2 million scans in its effort to tackle the beauty industry’s excessive use of (plastic) packaging.
Responsible AI practices should be ingrained in the culture to avoid social harms: Machine learning and artificial intelligence have become central to the advanced, personalized digital experiences everyone is accustomed to — from product and content recommendations to spam filtering, trend forecasting and other “smart” behaviors.
It is therefore critical to incorporate responsible AI practices, so benefits from AI and ML can be realized by your entire user base and that inadvertent harm can be avoided. Start by establishing clear principles for working with AI responsibly, and translate those principles into processes and procedures. Think about AI responsibility reviews the same way you think about code reviews, automated testing and UX design. As a technical leader or founder, you get to establish what the process is.
Promoting governance does not stop with the board and CEO; CTOs play an important role, too.
Create a diverse and inclusive technology team: Compared to individual decision-makers, diverse teams make better decisions 87% of the time. Additionally, Gartner research found that in a diverse workforce, performance improves by 12% and intent to stay by 20%.
It is important to reinforce and demonstrate why diversity, equity and inclusion (DEI) are important within a technology team. One way to do this is by using data to inform your DEI efforts. You can establish a voluntary internal program to collect demographics, including gender, race and ethnicity; this data provides a baseline for identifying diversity gaps and measuring improvements. Consider going further by baking these improvements into your employee performance process, such as objectives and key results (OKRs). Make everyone accountable from the start, not just HR.
These are just a few of the ways CTOs and technology leaders can contribute to ESG progress in their companies. The first step, however, is to recognize the many ways you as a technology leader can make an impact from day one.
Pet pharmacy Mixlab has developed a digital platform enabling veterinarians to prescribe medications and have them delivered — sometimes on the same day — to pet parents.
The New York-based company raised a $20 million Series A led by Sonoma Brands, with participation from Global Founders Capital, Monogram Capital, Lakehouse Ventures and Brand Foundry. The new investment brings Mixlab’s total funding to $30 million, said Fred Dijols, co-founder and CEO of Mixlab.
Dijols and Stella Kim, chief experience officer, co-founded Mixlab in 2017 to provide a better pharmacy experience, with the veterinarian at the center.
Dijols’ background is in medical devices as well as healthcare investment banking, where he became interested in the pharmacy industry, following TruePill and PillPack, which he told TechCrunch were “creating a modern pharmacy model.”
As more pharmacy experiences revolved around at-home delivery, he found the veterinary side of pharmacy was not keeping up. He met Kim, a user experience expert, whose family owns a pharmacy, and wanted to bring technology into the industry.
“The pharmacy industry is changing a lot, and technology allows us to personalize the care and experience for the veterinarian, pet parent and the pet,” Kim said. “Customer service is important in healthcare as is dignity and empathy. We kept that in mind when starting Mixlab. Many companies use technology to remove the human element, but we use it to elevate it.”
Mixlab’s technology includes a digital service that streamlines veterinarians’ daily medication workflow and gives them back time to spend on patient care. The platform manages home delivery of branded, generic and over-the-counter medications, and reduces a clinic’s on-site pharmacy inventory. Veterinarians can write prescriptions in seconds and track medication progress and therapy compliance.
The company also operates its own compounding pharmacy, which specializes in making flavored, custom-dosed medications on demand.
On the pet parent side, they no longer have to wait up to a week for medications nor have to drive over to the clinic to pick them up. Medications come in a personalized care package that includes a note from the pharmacist, clear and easy-to-read instructions and a new toy.
Over the past year, pet adoptions spiked as more people stayed home, leading to an increase in vet visits. This helped the global pet care industry boom: It is now projected to reach $343 billion by 2030, up from $208 billion in 2020.
Pet parents are also spending more on their pets, and a Morgan Stanley report showed that they see pets as part of their family, and as a result, 37% of people said they would take on debt to pay for a pet’s medical expenses, while 29% would put a pet’s needs before their own.
To meet the increased demand in veterinary care, the company will use the new funding to improve its technology and expand into more locations where it can provide same-day delivery. Currently it is shipping to 47 states and Dijols expects to be completely national by the end of the year. He also expects to hire more people on both the sales team and in executive leadership positions.
The company is already operating in New York and Los Angeles and growing 3x year over year, though Dijols admits operating during the pandemic was a bit challenging due to “a massive surge of orders” that came in as veterinarians had to shut down their offices.
As part of the investment, Keith Levy, operating partner at Sonoma Brands and former president of pet food manufacturer Royal Canin USA, will join Mixlab’s board of directors. Sonoma Brands is focused on growth sectors of the consumer economy, and pets was one of the areas that investors were interested in.
Over time, Sonoma found that within the veterinary community, there was space for a lot of players. However, veterinarians want to home in on one company they trust, and Mixlab fit that description for many because they were getting medication out faster, Levy said.
“What Mixlab is doing isn’t completely unique, but they are doing it better,” he added. “When we looked at their customer service metrics, we saw they had a good reputation and were relentlessly focused on providing a better experience.”
One year after voice-based AI technology company ConverseNow raised a $3.3 million seed round, the company is back with a cash infusion of $15 million in Series A funding in a round led by Craft Ventures.
The Austin-based company’s AI voice ordering assistants George and Becky work inside quick-serve restaurants to take orders via phone, chat, drive-thru and self-service kiosks, freeing up staff to concentrate on food preparation and customer service.
Joining Craft in the Series A round were LiveOak Venture Partners, Tensility Venture Partners, Knoll Ventures, Bala Investments, 2048 Ventures, Bridge Investments, Moneta Ventures and angel investors Federico Castellucci and Ashish Gupta. This new investment brings ConverseNow’s total funding to $18.3 million, Vinay Shukla, co-founder and CEO of ConverseNow, told TechCrunch.
As part of the investment, Bryan Rosenblatt, partner at Craft Ventures, is joining the company’s board of directors, and said in a written statement that “post-pandemic, quick-service restaurants are primed for digital transformation, and we see a unique opportunity for ConverseNow to become a driving force in the space.”
When ConverseNow raised its seed funding in 2020, it was piloting its technology in just a handful of stores. Today, it is live in over 750 stores and has grown sevenfold in revenue and fivefold in headcount.
Restaurants were some of the hardest-hit industries during the pandemic, and as they reopen, Shukla said their two main problems will be labor and supply chain, and “that is where our technology intersects.”
The AI assistants can step in during peak times, when workers are busy, to take orders so that customers are not left waiting and calls are not dropped or abandoned, something Shukla said happens often.
It can also drive more business. ConverseNow said it is shown to increase average orders by 23% and revenue by 20%, while adding up to 12 hours of extra deployable labor time per store per week.
Company co-founder Rahul Aggarwal said more people prefer to order remotely, which has led to an increase in volume. However, the more workers have to multitask, the less focus they have on any one job.
“If you step into restaurants with ConverseNow, you see them reimagined,” Aggarwal said. “You find workers focusing on the job they like to do, which is preparing food. It is also driving better work balance, while on the customer side, you don’t have to wait in the queue. Operators have more time to churn orders, and service time comes down.”
ConverseNow is one of the startups in the global restaurant management software market, which is forecast to reach $6.94 billion by 2025, according to Grand View Research. Over the past year, startups in the space attracted both investors and acquirers. For example, point-of-sale software company Lightspeed acquired Upserve in December for $430 million. Earlier this year, Sunday raised $24 million for its checkout technology.
The new funding will enable ConverseNow to continue developing its line-busting technology and invest in marketing, sales and product innovation. It will also be working on building a database from every conversation and onboarding new customers quicker, which involves inputting the initial menu.
By leveraging artificial intelligence, the company will be able to compensate for inconsistencies, like background noise on a call, better predict what a customer might be saying, fill in missing words and interpret orders better. In the future, Shukla and Aggarwal also want the platform to understand what is going on around the restaurant, such as traffic, the weather and any menu promotions, to drive upsells.
Exo, pronounced “echo,” raised a fresh cash infusion of $220 million in Series C financing aimed at commercializing its handheld ultrasound device and point-of-care workflow platform, Exo Works.
The round was led by RA Capital Management, while BlackRock, Sands Capital, Avidity Partners, Pura Vida Investments and prior investors joined in.
The new funding gives the Redwood City, California-based company over $320 million in total investments since the company was founded in 2015, Exo CEO Sandeep Akkaraju told TechCrunch. This includes a $40 million investment raised in 2020.
Ultrasound machines can cost anywhere from $40,000 to $250,000 for low-end technology and into the millions for high-end machines. Meanwhile, Exo’s device will be around the cost of a laptop.
“It is clear to us that ultrasound is the future — it is nonradiating and has no harmful side effects,” Akkaraju said. “We want to take the technology and put it in the palms of physicians. We also want to bring it down to the patient level. The beauty of having this window into the body is you can immediately see things.”
Using a combination of artificial intelligence, medical imaging and silicon technology, the device can be used in a number of real-world medical environments, such as evaluating cardiology patients or scanning the lungs of COVID-19 patients. It can also be used by patients at home to provide real-time insight following a surgical procedure or to monitor a condition.
Exo pairs the device with Exo Works, its workflow platform, which streamlines exam review, documentation and billing in under a minute.
Akkaraju said the immediate focus of the company is commercializing the device, which is where most of the new funding will go. He intends to also build out its informatics platform that is being piloted across the country and to ramp up both production and its sales force.
The global point-of-care ultrasound market is expected to reach $3.1 billion by 2025 and will grow 5% annually over that period. In addition to physicians, Akkaraju is hearing from other hospital workers that they, too, want to use the ultrasound device for some of their daily tasks like finding the right vein for an IV.
Once the company’s device is approved by the U.S. Food and Drug Administration, Exo will move forward with its plan to bring the handheld ultrasound device to market.
Zach Scheiner, principal with RA Capital Management, said he met the Exo team in 2020 and RA made its first investment in the Series B extension later that year.
He was “immediately compelled” by the technology and the opportunity to scale. Scheiner also got to know Akkaraju over the following months and watched Exo’s technology improve.
“We are seeing an expanding opportunity in healthcare technology as it improves and costs go down,” he added. “The vision Sandeep has of democratizing the ultrasound is not a vision that was possible 15 or 20 years ago. We are seeing the market in its early stage, but we also recognize the potential. Every doctor should want one to see what they were not able to see before. As technology and biology improves, we are going to see this sector grow.”
Globally, 225 million people are estimated to suffer from moderate or severe visual impairments, and 49.1 million are blind, according to 2020 data from the Investigative Ophthalmology and Visual Science journal. A Japanese startup that was incubated at Honda Motor Company’s business creation program hopes to make navigating the world easier and safer for the visually impaired.
Ashirase, which debuted as the first business venture to come out of Honda’s Ignition program in June, shared details of its in-shoe navigation system for low-vision walkers on Tuesday. The system aims to help users achieve more independence in their daily lives by allowing them to feel which way to walk through in-shoe vibrations connected to a navigation app on a smartphone. Ashirase hopes to begin sales of the system, also named Ashirase, by October 2022.
Honda created Ignition in 2017 to feature original technology, ideas and designs of Honda associates, with the goal of solving social issues and going beyond the existing Honda business. CEO Wataru Chino had worked at Honda since 2008 on R&D for EV motor control and automated driving systems. Chino’s background is evident in the navigation system’s technology, which he said is inspired by advanced driver-assist and autonomous driving systems.
“The overlap perspective can be, for instance, the way we utilize sensor information,” Chino told TechCrunch. “We use a sensor fusion technology, meaning we can combine information from the different sensors. I have experience in that field myself so that is helpful. Plus there is overlap with automated driving because when we were thinking of safety walking, the automated driving technology had given us an idea for the concept.”
“Ashirase” comes from the Japanese words ashi, meaning “foot,” and shirase, meaning “notification.” As its name suggests, the device, which is attached to the shoe, vibrates to provide navigation based on the route set within an app. Motion sensors, which consist of an accelerometer, gyro sensors and orientation sensors, enable the system to understand how the user is walking.
While en route outside, the system localizes the user using global navigation satellite positioning and data from the user’s foot movement. Ashirase’s app connects to a range of map vendors, such as Google Maps, and Chino said the device can switch to adapt to the different information available on different maps. This capability might be helpful if, say, one map had updated information about a road blockage and could send over-the-air updates.
“Going forward, we want to develop the function to generate a map itself using sensors from the outdoor environment, but that’s maybe five years down the line,” Chino said.
The vibrators are aligned with the foot’s nerve layer, so it’s easy to feel the pulse. To indicate the user should walk straight ahead, the vibrator positioned at the front of the shoe vibrates. Vibrators on the left and the right side of the shoe also indicate turning signals for the walker.
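Ashirase hasn’t published its routing logic, but the turn signaling described above can be sketched as a simple comparison between the user’s heading and the route’s bearing. The function, thresholds and vibrator names below are assumptions for illustration only.

```python
# Hypothetical sketch of the decision an in-shoe navigator might make:
# compare the user's heading with the route bearing and pick a vibrator.
# Ashirase has not published its algorithm; names and thresholds are assumed.

def pick_vibrator(heading_deg: float, bearing_deg: float,
                  straight_tolerance_deg: float = 20.0) -> str:
    """Return which vibrator to pulse: 'front' (keep straight), 'left' or 'right'."""
    # Signed smallest angle from heading to bearing, in [-180, 180).
    delta = (bearing_deg - heading_deg + 180) % 360 - 180
    if abs(delta) <= straight_tolerance_deg:
        return "front"
    return "right" if delta > 0 else "left"

print(pick_vibrator(heading_deg=10, bearing_deg=15))   # front
print(pick_vibrator(heading_deg=0, bearing_deg=90))    # right
print(pick_vibrator(heading_deg=0, bearing_deg=-90))   # left
```

The real system would fuse the accelerometer, gyro and orientation sensors to estimate heading before a comparison like this could run.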
Ashirase says this form of intuitive navigation helps the walker attain a more relaxed state of mind rather than one that is constantly alert, leading to a safer walk and less stress for the user.
Because the device cannot warn of obstacles ahead, this relaxed state also leaves the user more attention to spare for audible cues in their environment, for example at a crosswalk.
“Going forward, we’re thinking about technical updates for users who are totally blind because they don’t have such information like obstacle awareness like low-vision people,” Chino said. “So at this moment, the device is designed for low-vision walkers.”
While indoors, such as in a shopping mall, GPS signals won’t reach the user, and there isn’t a map to localize against. To solve this, the company plans to use Wi-Fi- or Bluetooth-based positioning, connecting to other devices and cell phones within the store to localize the visually impaired person.
Ashirase is also considering ways to integrate with public transit systems so that the device can alert a user if they have arrived or are near their next stop, according to Chino.
It’s a lot of tech to pack into one little device that attaches to a shoe — any shoe. Chino said the device, which only needs to be charged once a week based on three hours of use per day, is made to be flexible and fit onto different types, shapes and sizes of shoes.
Ashirase intends to release its beta version for testing and data collection in October or November this year and hopes to achieve mass production by October 2022. It’ll have a direct-to-consumer model, the price of which the company is not yet ready to disclose, and a subscription model, which should cost about 2,000 to 3,000 Japanese Yen ($18 to $27) per month.
Chino estimates it’ll take the company 200 million Yen ($1.8 million), including the funds the company has already raised, to make it to market. So far, the company has raised 70 million Yen ($638,000), which came in the form of an equity investor round and some non-equity rounds, according to Chino.
Honda maintains an investor role in the company, supporting and following the business along the way, but Ashirase’s aim is to go public as a standalone company.
Anomaly detection is one of the more difficult and underserved operational areas in the asset-servicing sector of financial institutions. Broadly speaking, a true anomaly is one that deviates from the norm of the expected or the familiar. Anomalies can be the result of incompetence, maliciousness, system errors, accidents or the product of shifts in the underlying structure of day-to-day processes.
For the financial services industry, detecting anomalies is critical, as they may be indicative of illegal activities such as fraud, identity theft, network intrusion, account takeover or money laundering, which may result in undesired outcomes for both the institution and the individual.
Detecting outlier data, or anomalies, according to historic data patterns and trends can enrich a financial institution’s operational team by increasing its understanding and preparedness.
Anomaly detection presents a unique challenge for a variety of reasons. First and foremost, the financial services industry has seen an increase in the volume and complexity of data in recent years. In addition, a large emphasis has been placed on the quality of data, turning it into a way to measure the health of an institution.
To make matters more complicated, anomaly detection requires the prediction of something that has not been seen before or prepared for. The increase in data and the fact that it is constantly changing exacerbates the challenge further.
There are different ways to address the challenge of anomaly detection, including supervised and unsupervised learning.
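As a toy example of the unsupervised approach, the check below flags values that sit far from the historical norm using a robust score based on median absolute deviation, so a single extreme value can’t mask itself by inflating the measured spread. Real pipelines use richer features and learned models; this only sketches the idea.

```python
# Toy unsupervised anomaly check using a robust score (median absolute
# deviation). Real systems use richer features and learned models;
# this is only a sketch of the idea.
import statistics

def find_anomalies(values: list[float], threshold: float = 3.5) -> list[float]:
    """Return values whose modified z-score (0.6745 * dev / MAD) exceeds threshold."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# Typical card transaction amounts with one outlier that merits review.
history = [25.0, 40.0, 31.0, 28.0, 35.0, 30.0, 27.0, 33.0, 5000.0]
print(find_anomalies(history))  # [5000.0]
```

A supervised variant would instead train a classifier on historical examples labeled fraudulent or legitimate; the unsupervised form shown here needs no labels, which matters when anomalies are things never seen before.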
VOCHI, a Belarus-based startup behind a clever computer vision-based video editing app used by online creators, has raised an additional $2.4 million in a “late-seed” round that follows the company’s initial $1.5 million round led by Ukraine-based Genesis Investments last year. The new funds follow a period of significant growth for the mobile tool, which is now used by over 500,000 people per month and has achieved a $4 million-plus annual run rate in a year’s time.
Investors in the most recent round include TA Ventures, Angelsdeck, A.Partners, Startup Wise Guys, Kolos VC, and angels from other Belarus-based companies like Verv and Bolt. Along with the fundraise, VOCHI is elevating the company’s first employee, Anna Bulgakova, who began as head of marketing, to the position of co-founder and Chief Product Officer.
According to VOCHI co-founder and CEO Ilya Lesun, the company’s idea was to provide an easy way for people to create professional edits that help them produce unique, trendy content for social media, stand out and become more popular. To do so, VOCHI leverages a proprietary computer-vision-based video segmentation algorithm that applies various effects to specific moving objects in a video or to images in static photos.
“To get this result, there are two trained [convolutional neural networks] to perform semi-supervised Video Object Segmentation and Instance Segmentation,” explains Lesun, of VOCHI’s technology. “Our team also developed a custom rendering engine for video effects that enables instant application in 4K on mobile devices. And it works perfectly without quality loss,” he adds. It works pretty fast, too — effects are applied in just seconds.
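VOCHI’s engine is proprietary, but the general idea behind mask-based effects can be sketched simply: a segmentation model marks the object’s pixels with a mask, and the effect is applied only inside that mask. Everything below is an illustrative stand-in, not VOCHI’s implementation.

```python
# Illustrative sketch of mask-based effect compositing (not VOCHI's engine):
# an effect is applied only where a segmentation mask marks the object.

def apply_effect(frame, mask, effect):
    """Return a new frame where effect(pixel) replaces pixels inside the mask."""
    return [
        [effect(px) if m else px for px, m in zip(row, mrow)]
        for row, mrow in zip(frame, mask)
    ]

# 2x3 grayscale frame; the mask selects the "object" in the right column.
frame = [[10, 20, 30],
         [40, 50, 60]]
mask  = [[False, False, True],
         [False, False, True]]

brighten = lambda px: min(px + 100, 255)
print(apply_effect(frame, mask, brighten))  # [[10, 20, 130], [40, 50, 160]]
```

A production engine does this per frame on the GPU, with the mask tracking the moving object across the video, which is where the trained segmentation networks come in.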
The company used the initial seed funding to invest in marketing and product development, growing its catalog to over 80 unique effects and more than 30 filters.
Today, the app offers a number of tools that let you give a video a particular aesthetic (like a dreamy vibe, artistic feel, or 8-bit look, for example). It can also highlight the moving content with glowing lines, add blurs or motion, apply different filters, insert 3D objects into the video, add glitter or sparkles, and much more.
In addition to editing their content directly, users can swipe through a vertical home feed in the app where they can view the video edits others have applied to their own content for inspiration. When they see something they like, they can then tap a button to use the same effect on their own video. The finished results can then be shared out to other platforms, like Instagram, Snapchat and TikTok.
Though based in Belarus, most of VOCHI’s users are young adults from the U.S. Others hail from Russia, Saudi Arabia, Brazil and parts of Europe, Lesun says.
Unlike some of its video editor rivals, VOCHI offers a robust free experience: Around 60% of the effects and filters are available without paying, along with other basic editing tools and content. More advanced features, like effect settings, unique presets and various special effects, require a subscription. The subscription, however, isn’t cheap — it’s either $7.99 per week or $39.99 for 12 weeks. That pricing seemingly aims the subscription at professional content creators rather than casual users just looking to have fun with their videos from time to time. (A one-time purchase of $150 is also available, if you prefer.)
To date, around 20,000 of VOCHI’s 500,000 monthly active users have committed to a paid subscription, and that number is growing at a rate of 20% month-over-month, the company says.
The numbers VOCHI has delivered, however, aren’t as important as what the startup has been through to get there.
The company has been growing its business at a time when a dictatorial regime has been cracking down on opposition, leading to arrests and violence in the country. Last year, employees from U.S.-headquartered enterprise startup PandaDoc were arrested in Minsk by the Belarus police, in an act of state-led retaliation for their protests against President Alexander Lukashenko. In April, Imaguru, the country’s main startup hub, event and co-working space in Minsk — and birthplace of a number of startups, including MSQRD, which was acquired by Facebook — was also shut down by the Lukashenko regime.
Meanwhile, VOCHI was being featured as App of the Day in the App Store across 126 countries worldwide, and growing revenues to around $300,000 per month.
“Personal videos take an increasingly important place in our lives and for many has become a method of self-expression. VOCHI helps to follow the path of inspiration, education and provides tools for creativity through video,” said Andrei Avsievich, General Partner at Bulba Ventures, where VOCHI was incubated. “I am happy that users and investors love VOCHI, which is reflected both in the revenue and the oversubscribed round.”
The additional funds will put VOCHI on the path to a Series A as it continues to work to attract more creators, improve user engagement, and add more tools to the app, says Lesun.
Absci Corp., a Vancouver company behind a multi-faceted drug development platform, went public on Thursday. It’s another sign of snowballing interest in new approaches to drug development – a traditionally risky business.
Absci focuses on speeding drug development in the preclinical stages. The company has developed and acquired a handful of tools that can predict drug candidates, identify potential therapeutic targets, and test therapeutic proteins across billions of cells to identify which ones are worth pursuing.
“We are offering a fully-integrated end-to-end solution for pharmaceutical drug development,” Absci founder Sean McClain tells TechCrunch. “Think of this as the Google index search for protein drug discovery and biomanufacturing.”
The IPO was initially priced at $16 per share, with a pre-money valuation of about $1.5 billion, per S-1 filings. The company is offering 12.5 million shares of common stock, with plans to raise $200 million. However, Absci stock had already ballooned to $21 per share as of this writing. Common stock is trading under the ticker “ABSI.”
The company has elected to go public now, McClain says, to increase the company’s ability to attract and retain new talent. “As we continue to rapidly grow and scale, we need access to the best talent, and the IPO gives us amazing visibility for talent acquisition and retention,” says McClain.
Absci was founded in 2011 with a focus on manufacturing proteins in E. coli. By 2018, the company had launched its first commercial product called SoluPro – a bioengineered E. coli system that can build complex proteins. In 2019, the company scaled this process up by implementing a “protein printing” platform.
Since its founding, Absci has grown to 170 employees and raised $230 million – the most recent influx was a $125 million crossover financing round closed in June 2020 led by Casdin Capital and Redmile Group. But this year, two major acquisitions have rounded out Absci’s offerings, from protein manufacturing and testing to AI-enabled drug development.
In January 2021, Absci acquired Denovium, a company using deep learning AI to categorize and predict the behavior of proteins. Denovium’s “engine” had been trained on more than 100 million proteins. In June, the company also acquired Totient, a biotech company that analyzes the immune system’s response to certain diseases. At the time of Totient’s acquisition, the company had already reconstructed 4,500 antibodies gleaned from immune system data from 50,000 patients.
Absci already had protein manufacturing, evaluation and screening capabilities, but the Totient acquisition allowed it to identify potential targets for new drugs. The Denovium acquisition added an AI-based engine to aid in protein discovery.
“What we’re doing is now feeding [our own data] into deep learning models and so that is why we acquired Denovium. Prior to Totient we were doing drug discovery and cell line development. This [acquisition] allows us to go fully integrated where we can now do target discovery as well,” McClain says.
These two acquisitions place Absci into a particularly active niche in the drug development world.
To start with, there’s been noteworthy financial interest in new approaches to drug development, even after decades of low returns on drug R&D. Evaluate reported that new drug developers raised about $9 billion in IPOs on Western exchanges in the first half of 2021. This is despite the fact that drug development is traditionally high risk: R&D returns for biopharmaceuticals hit a record low of 1.6 percent in 2019 and have rebounded to only about 2.5 percent, a 2021 Deloitte report notes.
Within the world of drug development, we’ve seen AI play an increasingly large role. That same Deloitte report notes that “most biopharma companies are attempting to integrate AI into drug discovery, and development processes.” And, drug discovery projects received the greatest amount of AI investment dollars in 2020, according to Stanford University’s Artificial Intelligence Index annual report.
More recently, the outlook on the use of AI in drug development has been bolstered by companies that have moved a candidate through the stages of pre-clinical development.
In June, Insilico Medicine, a Hong Kong-based startup, announced that it had brought an AI-identified drug candidate for idiopathic pulmonary fibrosis through the preclinical testing stages – a feat that helped close a $255 million Series C round. Founder Alex Zhavoronkov told TechCrunch that a clinical trial of the drug would begin late this year or early next year.
With a hand in AI and in protein manufacturing, Absci has already positioned itself in a crowded, but hype-filled space. But going forward, the company will still have to work out the details of its business model.
Absci is pursuing a partnership business model with drug manufacturers. This means that the company doesn’t have plans to run clinical trials of its own. Rather, it expects to earn revenue through “milestone payments” (conditional upon reaching certain stages of the drug development process) or, if drugs are approved, royalties on sales.
This does offer some advantages, says McClain. The company is able to sidestep the risk of drug candidates failing after millions in R&D dollars are poured into testing, and it can invest in developing “hundreds” of drug candidates at once.
Absci currently has nine “active programs” with drugmakers. The company’s cell line manufacturing platforms are in use in drug testing programs at eight biopharma companies, including Merck, Astellas and Alpha Cancer Technologies (the rest are undisclosed). Five of these projects are in the preclinical stage, one is in Phase 1 clinical trials, one is in a Phase 3 clinical trial, and the last is focused on animal health, per the company’s S-1 filing.
One company, Astellas, is currently using Absci’s discovery platforms. But McClain notes that Absci has only just rolled out its drug discovery capabilities this year.
However, none of these partners have formally licensed any of Absci’s platforms for clinical or commercial use. McClain notes that the nine active programs have milestones and royalty “potentials” associated with them.
The company does have some ground to make up when it comes to profitability. So far this year, Absci has generated about $4.8 million in total revenue – up from about $2.1 million in 2019. Still, costs have remained high, and S-1 filings note that the company has incurred net losses in the past two years: $6.6 million in 2019 and $14.4 million in 2020.
The company’s S-1 chalks up these losses to research and development costs, establishing an intellectual property portfolio, hiring personnel, raising capital and providing support for these activities.
Absci has recently completed construction of a 77,000-square-foot facility, notes McClain, so going forward the company sees the potential to increase the scale of its operations.
In the immediate future, the company plans to use money raised from the IPO to grow the number of programs using Absci’s technology, invest in R&D and continue to refine the company’s new AI-based products.
With more venture funding flowing into the startup ecosystem than ever before, there’s never been a better time to be a growth expert.
At TechCrunch Early Stage: Marketing and Fundraising earlier this month, Greylock Partners’ Mike Duboe dug into a number of lessons and pieces of wisdom he’s picked up leading growth at a number of high-growth startups, including StitchFix. His advice spanned hiring, structure and analysis, with plenty of recommendations for where growth teams should be focusing their attention and resources.
Before Duboe’s presentation kicked off, he spent some time zeroing in on a definition of growth, which he cautioned can mean many different things at many different companies. Being so context-dependent means that “being good at growth” is more dependent on honing capabilities rather than following a list of best practices.
Growth is something that’s blatantly obvious and poorly defined in the startup world, so I do think it’s important to give a preamble to all of this stuff. First and foremost, growth is very context dependent; some teams treat it as a product function, others marketing, some sales or “other.” Some companies will do growth with a dedicated growth team; others have abandoned the team but still do it equally well. Some companies will goal growth teams purely on acquisition, others will deploy them against retention or other metrics. So, taking a step back from that, I define growth as a function that accelerates a company’s pace of learning.
Growth is everyone’s job; if a bunch of people in the company are working on one problem, and it’s just someone off in the corner working on growth, you probably failed at setting up the org correctly. (Timestamp: 1:11)
While growth is good, growing something unsustainable is an intense waste of time and money. Head of growth is often an early role that founders aim to fill, but Duboe cautioned early-stage entrepreneurs against focusing too heavily on growth before nailing the fundamentals.
I’ve seen many companies make the mistake of working on growth prior to nailing product-market fit. I think this mistake becomes even more common in an environment where there’s rampant VC funding, so while some of the discipline here is useful early on, I’d really encourage founders to be laser-focused on finding that fit before iterating on growth. (Timestamp: 2:29)
The bulk of Duboe’s presentation focused on laying out 10 of the “most poignant and generalizable” lessons in growth that he’s learned over the years, with lessons on focus, optimization and reflection.
Growth modeling and metric design — I view as the most fundamental part of growth. This does not require a growth team so any good head of growth should require some basic growth model to prioritize what to work on. (Timestamp: 3:09)
The first point Duboe touched on was how to visualize growth opportunities using models, with an example from his past role leading growth at Tilt, where his team used user state models to determine where to direct resources and look for growth opportunities.
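A user state model of this kind can be as simple as bucketing users by activity level and counting how they move between buckets from one period to the next. A minimal, hypothetical sketch — the state names and users below are illustrative, not Tilt’s actual model:

```python
from collections import Counter

# Hypothetical monthly snapshots mapping user_id -> activity state.
last_month = {"u1": "new", "u2": "active", "u3": "active", "u4": "dormant"}
this_month = {"u1": "active", "u2": "active", "u3": "dormant", "u4": "dormant"}

# Count state transitions to see where users leak out of the funnel.
transitions = Counter(
    (last_month[u], this_month[u]) for u in last_month if u in this_month
)

for (src, dst), n in sorted(transitions.items()):
    print(f"{src} -> {dst}: {n}")
```

Once the transition counts are in hand, a growth team can prioritize the largest leaks (say, active → dormant) rather than guessing where to direct resources.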
The second lesson is to prioritize retention before driving acquisition, a very obvious or intuitive lesson, but it’s also easy to forget given it’s typically less straightforward to figure out how to retain users versus acquiring new ones. (Timestamp: 4:19)
Retention is typically cheaper than acquiring wholly new users, Duboe noted, also highlighting how focusing on retention can help a startup understand more about who its power users are and exactly who it should be building for.
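One way to see why retention is so leveraged: under a simple geometric-retention model (an illustrative assumption, not Duboe’s numbers), expected lifetime value is revenue per period divided by the churn rate, so a modest retention gain compounds into a large LTV gain:

```python
def ltv(arpu: float, retention: float) -> float:
    """Expected lifetime revenue per user if the user retains each
    period with constant probability `retention` (geometric model)."""
    return arpu / (1 - retention)

# Hypothetical numbers: $10 revenue per user per period.
base = ltv(10, 0.70)      # 70% retention
improved = ltv(10, 0.80)  # +10 points of retention

print(f"LTV at 70% retention: ${base:.2f}")
print(f"LTV at 80% retention: ${improved:.2f}")
```

Here a 10-point retention improvement lifts per-user value by half again — value that new-user acquisition spend would have to buy at full price.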
Bringing on new ideas is obviously a positive, but ideas often need guidelines to be helpful, and setting the right templates early on can help team members filter their ideas while ensuring they meet the needs of the organization.
Imagine a world where no one’s privacy is breached, no faces are scanned into a gargantuan database, and no privacy laws are broken. This is a world that is fast approaching. Could companies simply dump the need for real-world CCTV footage, and switch to synthetic humans, acting out potential scenarios a million times over? That’s the tantalizing prospect of a new UK startup that has attracted funding from an influential set of investors.
UK-based Mindtech Global has developed what it describes as an end-to-end synthetic data creation platform. In plain English, its system can imagine visual scenarios such as someone’s behavior inside a store, or crossing the street. This data is then used to train AI-based computer vision systems for customers such as big retailers, warehouse operators, healthcare, transportation systems and robotics. It literally trains a ‘synthetic’ CCTV camera inside a synthetic world.
That last investor is significant. In-Q-Tel invests in startups that support US intelligence capabilities and is based in Arlington, Virginia…
Mindtech’s Chameleon platform is designed to help computers understand and predict human interactions. As we all know, current approaches to training AI vision systems require companies to source data such as CCTV footage. The process is fraught with privacy issues, costly, and time-consuming. Mindtech says Chameleon solves that problem, as its customers quickly “build unlimited scenes and scenarios using photo-realistic smart 3D models”.
An added bonus is that these synthetic humans can be used to train AI vision systems to weed out human failings around diversity and bias.
Mindtech CEO Steve Harris
Steve Harris, CEO, Mindtech said: “Machine learning teams can spend up to 80% of their time sourcing, cleaning, and organizing training data. Our Chameleon platform solves the AI training challenge, freeing the industry to focus on higher-value tasks like AI network innovation. This round will enable us to accelerate our growth, enabling a new generation of AI solutions that better understand the way humans interact with each other and the world around them.”
So what can you do with it? Consider the following: A kid slips from a parent’s hand at the mall. The synthetic CCTV running inside Mindtech’s scenario is trained thousands of times over to spot it in real time and alert staff. Another: A delivery robot meets kids playing in a street and works out how to avoid them. Finally: A passenger on the platform is behaving erratically too close to the rails – the CCTV is trained to automatically spot them and send help.
Nat Puffer, Managing Director (London), In-Q-Tel commented: “Mindtech impressed us with the maturity of their Chameleon platform and their commercial traction with global customers. We’re excited by the many applications this platform has across diverse markets and its ability to remove a significant roadblock in the development of smarter, more intuitive AI systems.”
Miles Kirby, CEO, Deeptech Labs said: “As a catalyst for deeptech success, our investment, and accelerator program supports ambitious teams with novel solutions and the appetite to build world-changing companies. Mindtech’s highly-experienced team are on a mission to disrupt the way AI systems are trained, and we’re delighted to support their journey.”
There is of course potential for darker applications, such as spotting petty theft inside supermarkets, or perhaps ‘optimising’ hard-pressed warehouse workers in some dystopian fashion. However, in theory, Mindtech’s customers can use the platform to rid themselves of the biases of middle managers and better serve customers.
DeepMind and several research partners have released a database containing the 3D structures of nearly every protein in the human body, as computationally determined by the breakthrough protein folding system demonstrated last year, AlphaFold. The freely available database represents an enormous advance and convenience for scientists across hundreds of disciplines and domains, and may very well form the foundation of a new phase in biology and medicine.
The AlphaFold Protein Structure Database is a collaboration between DeepMind, the European Bioinformatics Institute, and others, and consists of hundreds of thousands of protein sequences with their structures predicted by AlphaFold — and the plan is to add millions more to create a “protein almanac of the world.”
“We believe that this work represents the most significant contribution AI has made to advancing the state of scientific knowledge to date, and is a great example of the kind of benefits AI can bring to society,” said DeepMind founder and CEO Demis Hassabis.
If you’re not familiar with proteomics in general — and it’s quite natural if that’s the case — the best way to think about this is perhaps in terms of another major effort: that of sequencing the human genome. As you may recall from the late ’90s and early ’00s, this was a huge endeavor undertaken by a large group of scientists and organizations across the globe and over many years. The genome, finished at last, has been instrumental to the diagnosis and understanding of countless conditions, and in the development of drugs and treatments for them.
It was, however, just the beginning of the work in that field — like finishing all the edge pieces of a giant puzzle. And one of the next big projects everyone turned their eyes toward in those years was understanding the human proteome — which is to say all the proteins used by the human body and encoded into the genome.
The problem with the proteome is that it’s much, much more complex. Proteins, like DNA, are sequences of known molecules; in DNA these are the handful of familiar bases (adenine, guanine, etc), but in proteins they are the 20 amino acids (each of which is coded by multiple bases in genes). This in itself creates a great deal more complexity, but it’s only the start. The sequences aren’t simply “code” but actually twist and fold into tiny molecular origami machines that accomplish all kinds of tasks within our body. It’s like going from binary code to a complex language that manifests objects in the real world.
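To make the “multiple bases per amino acid” point concrete, here is a toy translation of a short DNA snippet into its amino acid sequence. Only a handful of the 64 codons from the standard genetic code are included, purely for illustration:

```python
# Tiny subset of the standard genetic code: 3-base codon -> 1-letter amino acid.
CODON_TABLE = {
    "ATG": "M",  # methionine (also the start codon)
    "GCC": "A",  # alanine
    "AAA": "K",  # lysine
    "TGG": "W",  # tryptophan
    "TAA": "*",  # stop
}

def translate(dna: str) -> str:
    """Translate a DNA string codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGGCCAAATGGTAA"))  # -> "MAKW"
```

Fifteen bases collapse into just four amino acids — and AlphaFold’s task begins only after this step, taking such a 1D amino acid chain and predicting the 3D shape it folds into.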
Practically speaking this means that the proteome is made up of not just 20,000 sequences of hundreds of acids each, but that each one of those sequences has a physical structure and function. And one of the hardest parts of understanding them is figuring out what shape is made from a given sequence. This is generally done experimentally using something like x-ray crystallography, a long, complex process that may take months or longer to figure out a single protein — if you happen to have the best labs and techniques at your disposal. The structure can also be predicted computationally, though the process has never been good enough to actually rely on — until AlphaFold came along.
Without going into the whole history of computational proteomics (as much as I’d like to), we essentially went from distributed brute-force tactics 15 years ago — remember Folding@home? — to more honed processes in the last decade. Then AI-based approaches came on the scene, making a splash in 2019 when DeepMind’s AlphaFold leapfrogged every other system in the world — then made another jump in 2020, achieving accuracy levels high enough and reliable enough that it prompted some experts to declare the problem of turning an arbitrary sequence into a 3D structure solved.
I’m only compressing this long history into one paragraph because it was extensively covered at the time, but it’s hard to overstate how sudden and complete this advance was. This was a problem that stumped the best minds in the world for decades, and it went from “we maybe have an approach that kind of works, but extremely slowly and at great cost” to “accurate, reliable, and can be done with off the shelf computers” in the space of a year.
The specifics of DeepMind’s advances and how it achieved them I will leave to specialists in the fields of computational biology and proteomics, who will no doubt be picking apart and iterating on this work over the coming months and years. It’s the practical results that concern us today, as the company employed its time since the publication of AlphaFold 2 (the version shown in 2020) not just tweaking the model, but running it… on every single protein sequence they could get their hands on.
The result is that 98.5 percent of the human proteome is now “folded,” as they say, meaning there is a predicted structure that the AI model is confident enough (and importantly, we are confident enough in its confidence) represents the real thing. Oh, and they also folded the proteome for 20 other organisms, like yeast and E. coli, amounting to about 350,000 protein structures total. It’s by far — by orders of magnitude — the largest and best collection of this absolutely crucial information.
All that will be made available as a freely browsable database that any researcher can simply plug a sequence or protein name into and immediately be provided the 3D structure. The details of the process and database can be found in a paper published today in the journal Nature.
“The database as you’ll see it tomorrow, it’s a search bar, it’s almost like Google search for protein structures,” said Hassabis in an interview with TechCrunch. “You can view it in the 3D visualizer, zoom around it, interrogate the genetic sequence… and the nice thing about doing it with EMBL-EBI is it’s linked to all their other databases. So you can immediately go and see related genes, related proteins in other organisms, other proteins that have related functions, and so on.”
“As a scientist myself, who works on an almost unfathomable protein,” said EMBL-EBI’s Edith Heard (she didn’t specify what protein), “it’s really exciting to know that you can find out what the business end of a protein is now, in such a short time — it would have taken years. So being able to access the structure and say ‘aha, this is the business end,’ you can then focus on trying to work out what that business end does. And I think this is accelerating science by steps of years, a bit like being able to sequence genomes did decades ago.”
So new is the very idea of being able to do this that Hassabis said he fully expects the entire field to change — and change the database along with it.
“Structural biologists are not yet used to the idea that they can just look up anything in a matter of seconds, rather than take years to experimentally determine these things,” he said. “And I think that should lead to whole new types of approaches to questions that can be asked and experiments that can be done. Once we start getting wind of that, we may start building other tools that cater to this sort of serendipity: What if I want to look at 10,000 proteins related in a particular way? There isn’t really a normal way of doing that, because that isn’t really a normal question anyone would ask currently. So I imagine we’ll have to start producing new tools, and there’ll be demand for that once we start seeing how people interact with this.”
That includes derivative and incrementally improved versions of the software itself, which has been released in open source along with a great deal of development history. Already we have seen an independently developed system, RoseTTAFold, from researchers at the University of Washington’s Baker Lab, which extrapolated from AlphaFold’s performance last year to create something similar yet more efficient — though DeepMind seems to have taken the lead again with its latest version. But the point was made that the secret sauce is out there for all to use.
Although the prospect of structural bioinformaticians attaining their fondest dreams is heartwarming, it is important to note that there are in fact immediate and real benefits to the work DeepMind and EMBL-EBI have done. It is perhaps easiest to see in their partnership with the Drugs for Neglected Diseases Institute.
The DNDI focuses, as you might guess, on diseases that are rare enough that they don’t warrant the kind of attention and investment from major pharmaceutical companies and medical research outfits that would potentially result in discovering a treatment.
“This is a very practical problem in clinical genetics, where you have a suspected series of mutations, of changes in an affected child, and you want to try and work out which one is likely to be the reason why our child has got a particular genetic disease. And having widespread structural information, I am almost certain will improve the way we can do that,” said EMBL-EBI’s Ewan Birney in a press call ahead of the release.
Ordinarily, examining the proteins suspected of being at the root of a given problem would be expensive and time-consuming, and for diseases that affect relatively few people, money and time are in short supply when they could instead be applied to more common problems like cancers or dementia-related diseases. But when researchers can simply call up the structures of ten healthy proteins and ten mutated versions of the same, insights may appear in seconds that might otherwise have taken years of painstaking experimental work. (The drug discovery and testing process still takes years, but maybe now it can start tomorrow for Chagas disease instead of in 2025.)
Lest you think too much is resting on a computer’s prediction of experimentally unverified results, in another, totally different case, some of the painstaking work had already been done. John McGeehan of the University of Portsmouth, with whom DeepMind partnered for another potential use case, explained how this affected his team’s work on plastic decomposition.
“When we first sent our seven sequences to the DeepMind team, for two of those we already had experimental structures. So we were able to test those when they came back, and it was one of those moments, to be honest, when the hairs stood up on the back of my neck,” said McGeehan. “Because the structures that they produced were identical to our crystal structures. In fact, they contained even more information than the crystal structures were able to provide in certain cases. We were able to use that information directly to develop faster enzymes for breaking down plastics. And those experiments are already underway, immediately. So the acceleration to our project here is, I would say, multiple years.”
The plan is to, over the next year or two, make predictions for every single known and sequenced protein — somewhere in the neighborhood of a hundred million. And for the most part (the few structures not susceptible to this approach seem to make themselves known quickly) biologists should be able to have great confidence in the results.
Inspecting molecular structure in 3D has been possible for decades, but finding that structure in the first place is difficult.
The process AlphaFold uses to predict structures is, in some cases, better than experimental options. And although there is an amount of uncertainty in how any AI model achieves its results, Hassabis was clear that this is not just a black box.
“For this particular case, I think explainability was not just a nice-to-have, which often is the case in machine learning, but it was a must-have, given the seriousness of what we wanted it to be used for,” he said. “So I think we’ve done the most we’ve ever done on a particular system to make the case with explainability. So there’s both explainability on a granular level on the algorithm, and then explainability in terms of the outputs, as well the predictions and the structures, and how much you should or shouldn’t trust them, and which of the regions are the reliable areas of prediction.”
Nevertheless his description of the system as “miraculous” attracted my special sense for potential headline words. Hassabis said that there’s nothing miraculous about the process itself, but rather that he’s a bit amazed that all their work has produced something so powerful.
“This was by far the hardest project we’ve ever done,” he said. “And, you know, even when we know every detail of how the code works, and the system works, and we can see all the outputs, it’s still just still a bit miraculous when you see what it’s doing… that it’s taking this, this 1D amino acid chain and creating these beautiful 3D structures, a lot of them aesthetically incredibly beautiful, as well as scientifically and functionally valuable. So it was more a statement of a sort of wonder.”
The impact of AlphaFold and the proteome database won’t be felt for some time at large, but it will almost certainly — as early partners have testified — lead to some serious short-term and long-term breakthroughs. But that doesn’t mean that the mystery of the proteome is solved completely. Not by a long shot.
As noted above, the complexity of the genome is nothing compared to that of the proteome at a fundamental level, but even with this major advance we have only scratched the surface of the latter. AlphaFold solves a very specific, though very important problem: given a sequence of amino acids, predict the 3D shape that sequence takes in reality. But proteins don’t exist in a vacuum; they’re part of a complex, dynamic system in which they are changing their conformation, being broken up and reformed, responding to conditions, the presence of elements or other proteins, and indeed then reshaping themselves around those.
In fact, many of the human proteins to which AlphaFold assigned only a middling confidence level may be fundamentally “disordered” proteins that are too variable to pin down the way a more static one can be (in which case a low-confidence prediction would itself be a highly accurate signal for that type of protein). So the team has its work cut out for it.
“It’s time to start looking at new problems,” said Hassabis. “Of course, there are many, many new challenges. But the ones you mentioned, protein interaction, protein complexes, ligand binding, we’re working actually on all these things, and we have early, early stage projects on all those topics. But I do think it’s worth taking, you know, a moment to just talk about delivering this big step… it’s something that the computational biology community’s been working on for 20, 30 years, and I do think we have now broken the back of that problem.”
It’s not too late to enjoy an epic pitch-off of global proportion. The Extreme Tech Challenge (XTC) Global Finals start today, July 22 at 9:00 am (PT). Register here for free, get instant access and tune in to see seven phenomenal startups — each one tackling some of the world’s most daunting social and environmental challenges.
The day also includes a keynote address from Beth Bechdol, the deputy director-general, Food and Agriculture Organization (FAO) of the United Nations, and five panel discussions ranging from powering clean energy startups to going green. Here are just two examples, and be sure to check out the event agenda so you don’t miss a minute.
Powering the Future Through Transformative Tech: XTC’s co-founders Young Sohn, Chairman of the Board at HARMAN International and founding Managing Partner at Walden Catalyst, and Bill Tai, Partner Emeritus at Charles River Ventures, jump into the breakthrough tech innovations that are transforming industries to build a radically better world. How can business, government, philanthropy, and the startup community come together to create a better tomorrow? Hear from these industry veterans and thought leaders about how technology can not only shape the future, but also where the biggest opportunities lie, including some exciting news about XTC and the FAO of the United Nations.
Cutting Out Carbon Emitters with Bioengineering: Bioengineering may soon provide compelling, low-carbon alternatives in industries where even the best methods produce significant emissions. By utilizing natural and engineered biological processes, we may soon have low-carbon textiles from Algiknit, lab-grown premium meats from Orbillion and fuels captured from waste emissions via LanzaTech. Leaders from these companies will join our panel to talk about how bioengineering can do its part in the fight against climate change.
The main event is, of course, the pitch competition. More than 3,700 startups applied, and these are the seven finalists who will compete one last time for the title of XTC 2021 champion.
In addition to choosing the winner of XTC 2021, the esteemed judges will announce the winners of the COVID-19 Innovation award, the Female Founder award, the Ethical AI award and the People’s Choice award.
Tailor Brands, a startup that automates parts of the branding and marketing process for small businesses, announced Thursday it has raised $50 million in Series C funding.
GoDaddy led the round as a strategic partner and was joined by OurCrowd and existing investors Pitango Growth, Mangrove Capital Partners, Armat Group, Disruptive VC and Whip Media founder Richard Rosenblatt. Tailor Brands has now raised a total of $70 million since its inception in 2015.
“GoDaddy is empowering everyday entrepreneurs around the world by providing all of the help and tools to succeed online,” said Andrew Morbitzer, vice president of corporate development at GoDaddy, in a written statement. “We are excited to invest in Tailor Brands — and its team — as we believe in their vision. Their platform truly helps entrepreneurs start their business quickly and easily with AI-powered logo design and branding services.”
When Tailor Brands, which launched at TechCrunch’s Startup Battlefield in 2014, raised its last round, a $15.5 million Series B, in 2018, the company was focused on AI-driven logo creation.
The company, headquartered in New York and Tel Aviv, is now compiling the components of a one-stop SaaS platform that provides the design, branding and marketing services a small business owner needs to launch and scale operations within minutes, Yali Saar, co-founder and CEO of Tailor Brands, told TechCrunch.
Over the past year, more users are flocking to Tailor Brands; the company is onboarding some 700,000 new users per month for help in the earliest stages of setting up their business. In fact, the company saw a 27% increase in new business incorporations as the creator and gig economy gained traction in 2020, Saar said.
In addition to the scores of new users, the company crossed 30 million businesses using the platform. At the end of 2019, Tailor Brands started monetizing its offerings and “grew at a staggering rate,” Saar added. The company yielded triple-digit annual growth in revenue.
To support that growth, the new funding will be used on R&D, to double the team and create additional capabilities and functions. There may also be future acquisition opportunities on the table.
Saar said Tailor Brands is at a point where it can begin leveraging the massive amount of data on small businesses it gathers to help them be proactive rather than reactive, turning the platform into a “consultant of sorts” to guide customers through the next steps of their businesses.
“Users are looking for us to provide them with everything, so we are starting to incorporate more products with the goal of creating an ecosystem, like WeChat, where you don’t need to leave the platform at all to manage your business,” Saar said.
Sorry Mr. Putin, but there’s a race on for Russian and Eastern European founders. And right now, those awful capitalists in the corrupt West are starting to out-gun the opposition! But seriously… only the other day a $100 million fund aimed at Russian speaking entrepreneurs appeared, and others are proliferating.
Now, London-based Untitled Ventures plans to join the fray with a €100 million ($118 million) second fund to invest in “ambitious deep tech startups with eastern European founders.”
Untitled says it is aiming at entrepreneurs who are looking to relocate their business or have already HQ’ed in Western Europe and the USA. That’s alongside all the other existing Western VCs who are – in my experience – always ready and willing to listen to Russian and Eastern European founders, who are often known for their technical prowess.
Untitled will aim at B2B, AI, agritech, medtech, robotics, and data management startups with proven traction emerging from the Baltics, CEE, and CIS, or those already established in Western Europe.
LPs in the fund include Vladimir Vedeenev, a founder of Global Network Management. Untitled also claims to have Google, Telegram Messenger, Facebook, Twitch, DigitalOcean, IP-Only, CenturyLink, Vodafone and Telecom Italia as partners.
Oskar Stachowiak, Untitled Ventures Managing Partner, said: “With over 10 unicorns, €1Bn venture funding in 2020 alone, and success stories like Veeam, Semrush, and Wrike, startups emerging from the fast-growing regions are the best choice to focus on early-stage investment for us. Thanks to the strong STEM focus in the education system and about one million high-skilled developers, we have an ample opportunity to find and support the rising stars in the region.”
Konstantin Siniushin, the Untitled Ventures MP said: “We believe in economic efficiency and at the same time we fulfill a social mission of bringing technological projects with a large scientific component from the economically unstable countries of the former USSR, such as, first of all, Belarus, Russia and Ukraine, but not only in terms of bringing sales to the world market and not only helping them to HQ in Europe so they can get next rounds of investments.”
He added: “We have a great experience accumulated earlier in the first portfolio of the first fund, not just structuring business in such European countries as, for example, Luxembourg, Germany, Great Britain, Portugal, Cyprus and Latvia, but also physically relocating startup teams so that they are perceived already as fully resident in Europe and globally.”
To be fair, it is still harder than it needs to be to create large startups from Eastern Europe, mainly because there is often very little local capital. However, that is changing, with the recent launch of CEE funds such as Vitosha Venture Partners and Launchub Ventures, and the breakout hit from Romania that was UiPath.
The Untitled Ventures team:
• Konstantin Siniushin, a serial tech entrepreneur
• Oskar Stachowiak, experienced fund manager
• Mary Glazkova, PR & Comms veteran
• Anton Antich, early stage investor and an ex VP of Veeam, a Swiss cloud data management company acquired by Insight Venture Partners for $5bln
• Yulia Druzhnikova, experienced in taking tech companies international
• Mark Cowley, who has worked on private and listed investments within CEE/Russia for over 20 years
Untitled Ventures portfolio highlights – Fund I
• Sizolution – AI-driven size prediction engine, based in Germany
• Pure app – spontaneous and impersonal dating app, based in Portugal
• Fixar Global – efficient drones for commercial use-cases, based in Latvia
• E-contenta – based in Poland
• SuitApp – AI-based mix-and-match suggestions for fashion retail, based in Singapore
• Sarafan.tech – AI-driven recognition, based in the USA
• Hello, baby – parental assistant, based in the USA
• Voximplant – voice, video and messaging cloud communication platform, based in the USA (exited)
If you want to get the most value out of attending TC Sessions: SaaS 2021, a day-long deep dive into the rapidly changing and expanding world of software-as-a-service, don’t go it alone — take your team. It’s a smart way to cover more ground on October 27, make more connections and increase your ROI.
We’re talking a sweet group discount, people. The early-bird pricing won’t remain in play forever, so get your group passes now and cross that money-saving task off your to-do list before the prices go up.
TC Sessions is where community meets opportunity. Each event focuses on a specific tech sector, and it’s a chance for everyone within that ecosystem to learn about the latest trends, hear from the leading experts, founders, investors and other visionaries and, of course, network.
Expect nothing less from TC Sessions: SaaS. We’re nailing down the agenda and building out a roster of impressive speakers. Does that describe you? Apply here to speak if you want to share your vast knowledge.
We’ll be announcing plenty more speakers in the coming weeks. Here’s a perfect example: Databricks co-founder and CEO Ali Ghodsi will grace our virtual stage to talk, among other things, about the future of data management in AI.
Pro tip: Keep your finger on the pulse of TC Sessions: SaaS. Get updates when we announce new speakers, add events and offer ticket discounts.
Why should you carve a day out of your hectic schedule to attend TC Sessions: SaaS? This may be the first year we’ve focused on SaaS, but this ain’t our first rodeo. Here’s what other attendees have to say about their TC Sessions experience.
“TC Sessions: Mobility offers several big benefits. First, networking opportunities that result in concrete partnerships. Second, the chance to learn the latest trends and how mobility will evolve. Third, the opportunity for unknown startups to connect with other mobility companies and build brand awareness.” — Karin Maake, senior director of communications at FlashParking.
“People want to be around what’s interesting and learn what trends and issues they need to pay attention to. Even large companies like GM and Ford were there, because they’re starting to see the trend move toward mobility. They want to learn from the experts, and TC Sessions: Mobility has all the experts.” — Melika Jahangiri, vice president at Wunder Mobility.
Is your company interested in sponsoring or exhibiting at TC Sessions: SaaS 2021? Contact our sponsorship sales team by filling out this form.
California-grown automotive software company Sonatus raised $35 million in a Series A round that attracted high-profile technology and automotive industry companies including Hyundai Motor Group, SAIC Capital, LG Electronics and Hyundai Mobis.
Silicon Valley VC Translink Capital led the round, with other investors including Marvell, SK hynix, United Microelectronics Corporation (UMC), Mando Corporation and Wanxiang Group Company.
Sonatus, which was founded in 2018, intends to use the new funds to establish itself as a brand through marketing efforts, new partnerships with OEMs and expanding local teams, according to Jeff Chou, Sonatus’s CEO and co-founder. The startup says its product helps to make vehicles into “data centers on wheels” by providing the underlying infrastructure that allows for big data collection, running new applications or adding new features to the car.
“Basically we have two pieces of our product – an in-vehicle portion and a cloud portion, and they kind of work together,” Chou told TechCrunch. “The in-vehicle part of our product allows the OEM to collect any data that the car generates. So whether that be on a traditional [Controller Area Network] bus, whether that be in the infotainment head unit, whether that be on traffic that’s flowing across an in-vehicle network like Ethernet. Anything that gets generated or transmitted across an in-vehicle network is data that we have access to. And depending on what the OEM wants to collect and when they want to collect it, they can inform our software in the vehicle to do the right thing.”
The cloud portion of the software connects to all the vehicles where Sonatus’s underlying architecture resides and ingests all the data so it can store, analyze or expose it to either the OEM’s own data scientists or to their partners.
Sonatus says its first generation product is already in production with a top global automaker, which will be announced in the coming weeks.
“We actually built the company without any investment money at all, and we grew from a couple of us to now beyond 50 people,” said Chou. “And now we’re already launching and in mass production. Our software has already been incorporated into the vehicles of an OEM and in their dealer showrooms.”
Chou said the first incarnation of Sonatus’ product will be in a combustion engine vehicle, but that the product is drivetrain-agnostic. In fact, Chou thinks the electrification of vehicles has been a tailwind for the company because it’s causing OEMs to rethink in-vehicle architecture and be more open to adopting new technologies.
Hyundai, one of Sonatus’ investors in this round, has been pouring money into auto-related technologies. The automaker plans to invest $20.5 billion (KRW 23.5 trillion) through 2025 in future technologies for its vehicles, including electrification, connectivity, autonomous driving, fuel cell, UAM, AI and robotics, according to Henry Chung, SVP and head of Hyundai CRADLE Silicon Valley.
“A lot of the technology in our cars is probably 50 years old in some areas, especially on the comms side of things,” Chung told TechCrunch. “There’s four or five decades worth of data center evolution that’s occurred on the IT front, and those technologies and approaches basically need to be brought into vehicles now because of the amount of data that’s being generated, the sensors, the software algorithms that are running, the compute power that’s now involved. They literally are super computers on wheels. We’re asking vehicles to do more and consumers are asking for services at a greater clip as well, so in order to deliver those services and value added functions, all of that needs supporting infrastructure and that’s what Sonatus delivers essentially. This is long overdue.”
One of the capabilities that may soon be realized in the automotive industry is Vehicle-to-Everything (V2X) technology, in which the vehicle communicates to other vehicles and surrounding infrastructure to provide better driver assistance systems, which could lead to autonomous driving one day. Sonatus says it provides the architecture upon which V2X can be utilized.
“That’s part of this edge cloud architecture that we’re delivering for vehicles, but in this case the edge instead of being data center, it’s really an edge on wheels,” said Chou.
Sonatus’s data center, which the company says is incredibly secure, and its architecture allow automakers to remotely add features, manage vehicle usage data and remedy problems more quickly and efficiently, because the system doesn’t rely on over-the-air software updates that require time and a full upload. Rather, automakers can send specific messages to the software to enact changes in real time.
“Imagine a scenario where on the fly somebody detects that there could be something wrong with brakes for vehicles that they shipped in North America, and they need to send out an update right away to get real time data on braking and engine of certain models when an accident occurs,” said Chou. “They might say, ‘Send me information 60 seconds before the accident and 60 seconds after and I only want this information.’ That can be done in real time, without an OTA update. It’s what we call codeless updates.”
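Chou’s “60 seconds before and 60 seconds after” example maps onto a familiar telemetry pattern: a rolling pre-event buffer plus a bounded post-event capture. Below is a minimal sketch of that pattern (a hypothetical illustration for clarity; the class, names and parameters are invented here, not Sonatus’s actual software):

```python
from collections import deque


class EventWindowRecorder:
    """Keeps a rolling pre-event buffer and captures a bounded post-event window.

    Hypothetical sketch of "send me 60 seconds before and after the event"
    style data collection; not Sonatus's actual implementation.
    """

    def __init__(self, pre_seconds=60, post_seconds=60, hz=1):
        # Rolling window: deque with maxlen silently drops the oldest frames.
        self.pre = deque(maxlen=pre_seconds * hz)
        self.post_target = post_seconds * hz
        self.post_remaining = 0
        self.captured = None

    def sample(self, frame):
        """Feed one telemetry frame (e.g. a dict of brake/engine readings).

        Returns the completed capture once the post-event window fills,
        otherwise None.
        """
        if self.post_remaining > 0:
            self.captured.append(frame)
            self.post_remaining -= 1
            return self.captured if self.post_remaining == 0 else None
        self.pre.append(frame)
        return None

    def trigger(self):
        """Event detected: freeze the pre-event window, start post capture."""
        self.captured = list(self.pre)
        self.post_remaining = self.post_target
```

At 1 Hz, triggering after 70 frames and feeding 60 more yields a 120-frame capture: the last 60 pre-event frames plus 60 post-event frames, without ever storing more than the rolling window.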
There are many use cases where OEMs will benefit from having access to so much data, especially as they continue to innovate. For its part, Sonatus wants to eventually collect and analyze automakers’ data itself, perhaps using records of driver behavior in the development of autonomous technology, but Chou said the company isn’t doing that just yet.
Amazon is giving its Alexa voice platform a shot in the arm after seeing further declines in skill growth over the past year, indicating lagging interest from third-party voice app developers. At the company’s Alexa Live developer event today, the company announced a slew of new features and tools for the developer community — its largest release of new tools to date, in fact. Among the new releases are those to encourage Alexa device owners to discover and engage with Alexa skills, new tools for making money from skills, and other updates that will push customers to again make Alexa more a part of their daily routines.
The retailer’s hopes for Alexa as a voice shopping platform may not have panned out, as only a sliver of Alexa customers actually made Amazon.com purchases through the smart speakers. However, the larger Alexa footprint and developer community remains fairly sizable, Amazon said today, noting there are “millions” of Alexa devices used “billions of times” every week, and over 900,000 registered developers who have published over 130,000 Alexa skills.
However, Amazon hasn’t yet solved the challenge of helping customers find and discover skills they want to use — something that’s been historically difficult on voice-only devices. That’s improved somewhat with the launch of Alexa devices with screens, like the Echo Show, which offers a visual component.
Image Credits: Amazon
On this front, Amazon says it will introduce a way for developers to create Widgets for their skills which customers can then add to their Echo Show or other Alexa device with a screen sometime later this year. Developers will also be able to build Featured Skill Cards to promote their skills in the home screen rotation.
For voice-only devices, developers will now be able to have their skill suggested when Alexa responds to common requests, like “Alexa, tell me a story,” “Alexa, let’s play a game,” or “Alexa, I need a workout,” among others. Alexa will begin to offer personalized skill suggestions based on customers’ use of similar skills, while new “contextual discovery” mechanisms will allow customers to use natural language and phrases to accomplish tasks across skills.
Amazon also said it’s expanding the ways developers can get paid for their skills.
Already, it offers tools like consumables, paid subscriptions and in-skill purchases. Now, it will add support for Paid Skills, a new in-skill purchase that allows customers to pay a one-time fee to access the content a skill provides. It will also now expand in-skill purchases to India and Canada.
Amazon will attempt to leverage the developer community to drive sales on its retail site, too. With new Shopping Actions, developers can sell Amazon products in their skill. For example, a role-playing game could suggest customers buy the tabletop version, as sci-fi game Starfinder does. Developers can also earn affiliate revenue on their product referrals.
Music and media skill developers will be able to use new tools for more entertaining experiences, like a Song Request Skill that DJs can use to take song requests via Alexa, which iHeartRadio will adopt. Other tools will shorten the time it takes for radio, podcast and music providers to launch interactive experiences.
Other new features aim to make skills more practical and useful.
Image Credits: Amazon
For example, restaurants will gain access to a Food Skill API that will allow them to create pickup and delivery order experiences. A new “Send to Phone” feature will allow developers to connect their skill with mobile devices, and new event-based triggers and proactive suggestions will enable new experiences — like a skill that reminds users to lock their home when they are leaving. Amazon-owned Whole Foods will use these features for a curbside pickup experience arriving later this year, the company says.
Alexa replenishment support, which allows customers to reorder common household items like laundry detergent or batteries, will also expand to replacement parts to better tie in with other sorts of household and smart home devices. Thermostat makers Carrier and Resideo will use this to replenish air filters and Bissell will use this with its vacuum cleaners.
Meanwhile, makers of safety devices — smoke, carbon-monoxide and water leak detectors — will be able to tie into Alexa’s security system, Alexa Guard, to send notifications to mobile devices.
Amazon is also introducing a set of new tools that make creating skills easier for developers, including the ability to use Alexa Entities, which is basically Amazon’s own set of general, Wikipedia-like knowledge. They’ll also gain access to new tools to aid with custom pronunciations, plus the previously U.S.-only Alexa Conversations natural language feature (now in beta in Germany, in developer preview in Japan, and live in all English locales). A longer list of tools, detailed in Amazon’s announcement, focuses on regional expansions of existing toolkits (i.e., AVS, ACK) and others that enable better interoperability with smart home devices, like those that allow for unique wake words.
DNSFilter, as its name suggests, offers DNS-based web content filtering and threat protection. Unlike the majority of its competitors, which include the likes of Palo Alto Networks and Webroot, the startup uses proprietary AI technology to continuously scan billions of domains daily, identifying anomalies and potential vectors for malware, ransomware, phishing, and fraud.
“Most of our competitors either rent or lease a database from some third party,” Ken Carnesi, co-founder and CEO of DNSFilter tells TechCrunch. “We do that in-house, and it’s through artificial intelligence that’s scanning these pages in real-time.”
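Whether the blocklist is rented or built in-house, the core of any DNS-based filter is the same per-query decision: check the requested name, and every parent domain above it, against the blocklist, and answer with a sinkhole address on a match. A minimal sketch of that lookup logic (a hypothetical illustration; the names and sinkhole choice here are assumptions, not DNSFilter’s implementation):

```python
# Minimal sketch of the core lookup in a DNS content filter:
# match a queried name against a blocklist, including parent domains,
# so blocking "bad.example" also covers "cdn.bad.example".
# Hypothetical illustration only -- not DNSFilter's actual system.

SINKHOLE = "0.0.0.0"  # address returned instead of the real record


def make_filter(blocklist):
    # Normalize entries: lowercase, strip any trailing root dot.
    blocked = {d.lower().rstrip(".") for d in blocklist}

    def resolve_decision(qname):
        """Return ("block", SINKHOLE) or ("allow", None) for a queried name."""
        labels = qname.lower().rstrip(".").split(".")
        # Walk up the label tree: a.b.c -> a.b.c, b.c, c
        for i in range(len(labels)):
            candidate = ".".join(labels[i:])
            if candidate in blocked:
                return ("block", SINKHOLE)
        return ("allow", None)

    return resolve_decision


decide = make_filter(["phish.example", "malware.test"])
```

Blocking a parent domain transparently covers every subdomain, which is why filtering at the DNS layer scales well: one blocklist entry stops any host under it, before a connection is ever made.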
The company, which counts the likes of Lenovo, Newegg, and Nvidia among its 14,000 customers, claims this industry-first technology catches threats an average of five days before competitors and is capable of identifying 76% of domain-based threats. By the end of 2021, DNSFilter says it will block more than 1.1 million threats daily.
DNSFilter has seen rapid growth over the past 12 months as a result of the mass shift to remote working and the increase in cyber threats and ransomware attacks that followed. The startup saw eightfold growth in customer activity, doubled its global headcount to just over 50 employees, and partnered with Canadian software house N-Able to push into the lucrative channel market.
“DNSFilter’s rapid growth and efficient customer acquisition are a testament to the benefits and ease of use compared to incumbents,” said Thomas Krane, principal at Insight Partners, who has been appointed as a director on DNSFilter’s board. “The traditional model of top-down, hardware-centric network security is disappearing in favor of solutions that readily plug in at the device level and can cater to highly distributed workforces.”
Prior to this latest funding round, which was also backed by Arthur Ventures (the lead investor in DNSFilter’s seed round), CrowdStrike co-founder and former chief technology officer Dmitri Alperovitch also joined DNSFilter’s board of directors.
Carnesi said the addition of Alperovitch to the board will help the company get its technology into the hands of enterprise customers. “He’s helping us to shape the product to be a good fit for enterprise organizations, which is something that we’re doing as part of this round — shifting focus to be primarily mid-market and enterprise,” he said.
The company also recently added former CrowdStrike vice president Jen Ayers as its chief operating officer. “She used to manage their entire managed threat hunting team, so she’s definitely coming on for the security side of things as we build out our domain intelligence team further,” Carnesi said.
With its newly-raised funds, DNSFilter will further expand its headcount, with plans to add more than 80 new employees globally over the next 12 months.
“There’s a lot more that we can do for security via DNS, and we haven’t really started on that yet,” Carnesi said. “We plan to do things that people won’t believe were possible via DNS.”
The company, which acquired Web Shrinker in 2018, also expects there to be more acquisitions on the cards going forward. “There are some potential companies that we’d be looking to acquire to speed up our advancement in certain areas,” Carnesi said.
Maine has joined a growing number of cities, counties and states that are rejecting dangerously biased surveillance technologies like facial recognition.
The new law, which is the strongest statewide facial recognition law in the country, not only received broad, bipartisan support, but it passed unanimously in both chambers of the state legislature. Lawmakers and advocates spanning the political spectrum — from the progressive lawmaker who sponsored the bill to the Republican members who voted it out of committee, from the ACLU of Maine to state law enforcement agencies — came together to secure this major victory for Mainers and anyone who cares about their right to privacy.
Maine is just the latest success story in the nationwide movement to ban or tightly regulate the use of facial recognition technology, an effort led by grassroots activists and organizations like the ACLU. From the Pine Tree State to the Golden State, national efforts to regulate facial recognition demonstrate a broad recognition that we can’t let technology determine the boundaries of our freedoms in the digital 21st century.
Facial recognition technology poses a profound threat to civil rights and civil liberties. Without democratic oversight, governments can use the technology as a tool for dragnet surveillance, threatening our freedoms of speech and association, due process rights, and right to be left alone. Democracy itself is at stake if this technology remains unregulated.
We know the burdens of facial recognition are not borne equally, as Black and brown communities — especially Muslim and immigrant communities — are already targets of discriminatory government surveillance. Making matters worse, face surveillance algorithms tend to have more difficulty accurately analyzing the faces of darker-skinned people, women, the elderly and children. Simply put: The technology is dangerous when it works — and when it doesn’t.
But not all approaches to regulating this technology are created equal. Maine is among the first in the nation to pass comprehensive statewide regulations. Washington was the first, passing a weak law in the face of strong opposition from civil rights, community and religious liberty organizations. The law passed in large part because of strong backing from Washington-based megacorporation Microsoft. Washington’s facial recognition law would still allow tech companies to sell their technology, worth millions of dollars, to every conceivable government agency.
In contrast, Maine’s law strikes a different path, putting the interests of ordinary Mainers above the profit motives of private companies.
Maine’s new law prohibits the use of facial recognition technology in most areas of government, including in public schools and for surveillance purposes. It creates carefully carved out exceptions for law enforcement to use facial recognition, creating standards for its use and avoiding the potential for abuse we’ve seen in other parts of the country. Importantly, it prohibits the use of facial recognition technology to conduct surveillance of people as they go about their business in Maine, attending political meetings and protests, visiting friends and family, and seeking out healthcare.
In Maine, law enforcement must now — among other limitations — meet a probable cause standard before making a facial recognition request, and they cannot use a facial recognition match as the sole basis to arrest or search someone. Nor can local police departments buy, possess or use their own facial recognition software, ensuring shady technologies like Clearview AI will not be used by Maine’s government officials behind closed doors, as has happened in other states.
Maine’s law and others like it are crucial to preventing communities from being harmed by new, untested surveillance technologies like facial recognition. But we need a federal approach, not only a piecemeal local approach, to effectively protect Americans’ privacy from facial surveillance. That’s why it’s crucial for Americans to support the Facial Recognition and Biometric Technology Moratorium Act, a bill introduced by members of both houses of Congress last month.
The ACLU supports this federal legislation that would protect all people in the United States from invasive surveillance. We urge all Americans to ask their members of Congress to join the movement to halt facial recognition technology and support it, too.