
AWS announces Panorama, a device that adds machine learning technology to any camera

By Jonathan Shieber

AWS has launched a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, will transform existing on-premises cameras into computer-vision-enabled, super-powered surveillance devices.

Pitching the hardware as a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are being followed, or analyze traffic in retail stores, the new automation service is part of the theme of this AWS re:Invent event — automate everything.

Along with computer vision models that companies can develop using Amazon SageMaker, the new Panorama Appliance can run those models on video feeds from networked or network-enabled cameras.
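
The underlying pattern is familiar from DIY computer vision: grab frames from a network camera and run a detector on each one. Here is a minimal Python sketch of that loop, using OpenCV's built-in HOG person detector as a stand-in for a SageMaker-trained model (this is an illustration, not AWS's Panorama API, and the RTSP address is a placeholder).

```python
# Minimal sketch of running a vision model on a network camera feed.
# The HOG person detector stands in for a SageMaker-trained model,
# and the RTSP URL is a hypothetical camera address.
import cv2

RTSP_URL = "rtsp://192.168.1.10:554/stream1"  # placeholder address

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(RTSP_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the frame; a Panorama app would invoke its own model here.
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```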

AWS expects to soon release the Panorama SDK, which device manufacturers can use to build Panorama-enabled devices of their own.

Amazon has already pitched surveillance technologies to developers and the enterprise before. Back in 2017, the company unveiled DeepLens, which it began selling one year later. It was a way for developers to build prototype machine learning models and for Amazon to get comfortable with different ways of commercializing computer vision capabilities.

As we wrote in 2018:

DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models… Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up … DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model and a model that can distinguish between cats and dogs and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.


Amazon has had a lot of experience (and controversy) when it comes to the development of machine learning technologies for video. The company’s Rekognition software sparked protests and pushback which led to a moratorium on the use of the technology.

And the company has tried to incorporate more machine learning capabilities into its consumer facing Ring cameras as well.

Still, enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety, and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted to not only adapt to the current epidemic, but plan ahead for spaces and protocols that can help mitigate the severity of the next one.



Google-backed Chinese truck-hailing firm Manbang raises $1.7 billion

By Rita Liao

Manbang, China’s “Uber for trucks,” announced Tuesday that it has raised $1.7 billion in its latest funding round, two years after it hauled in $1.9 billion from investors including SoftBank Group and Alphabet Inc.’s venture capital fund CapitalG.

The news came fresh off a Wall Street Journal report two weeks ago that Manbang was seeking $1 billion ahead of an initial public offering next year. The company declined to comment on the matter, though its CEO Zhang Hui said in May 2019 that the firm was “not in a rush” to go public.

Manbang said it achieved profitability this year. Its valuation was reportedly on course to reach $10 billion in 2018.

The company, which runs an app matching truck drivers and merchants transporting cargo and provides financial services to truckers, was formed from a merger between rivals Yunmanman and Huochebang in 2017. It was a time when China’s “sharing economy” craze began to see consolidation and shakeup.

The latest financing again attracted high-profile backers, including returning investors SoftBank Vision Fund and Sequoia Capital China, Permira and Fidelity, a consortium that co-led the round. Other participants were Hillhouse Capital, GGV Capital, Lightspeed China Partners, Tencent, Jack Ma’s YF Capital and more.

The company has other Alibaba ties. Its CEO Zhang, who founded Yunmanman, hailed from Alibaba’s famed B2B department where Manbang chairman Wang Gang also worked before he went on to fund ride-hailing giant Didi’s angel round.

Manbang claims its platform has over 10 million verified drivers and 5 million cargo owners. The latest funding will allow it to further invest in research and development, upgrade its matching system, and expand its service capacity to functions like door-to-door transportation.

Sequoia is quite bullish about truck-hailing as it made its sixth investment in Manbang. For Permira, a European private equity fund, the Manbang investment marked the China debut of its Growth Opportunities Fund.

Xesto is a foot-scanning app that simplifies shoe gifting

By Natasha Lomas

You wait ages for foot-scanning startups to help with the tricky fit issue that troubles online shoe shopping, and then two come along at once. Launching today, in time for Black Friday sprees, is Xesto — which, like Neatsy (covered earlier today), also makes use of the iPhone’s TrueDepth camera to generate individual 3D foot models for shoe size recommendations.

The Canadian startup hasn’t always been focused on feet. It has a long-standing research collaboration with the University of Toronto, alma mater of its CEO and co-founder Sophie Howe (its other co-founder and chief scientist, Afiny Akdemir, is also pursuing a Math PhD there) — and was actually founded back in 2015 to explore business ideas in human computer interaction.

But Howe tells us it moved into mobile sizing shortly after the 2017 launch of the iPhone X — which added a 3D depth camera to Apple’s smartphone. Since then Apple has added the sensor to additional iPhone models, pushing it within reach of a larger swathe of iOS users. So you can see why startups are spying a virtual fit opportunity here.

“This summer I had an aha! moment when my boyfriend saw a pair of fancy shoes on a deep discount online and thought they would be a great gift. He couldn’t remember my foot length at the time, and knew I didn’t own that brand so he couldn’t have gone through my closet to find my size,” says Howe. “I realized in that moment shoes as gifts are uncommon because they’re so hard to get correct because of size, and no one likes returning and exchanging gifts. When I’ve bought shoes for him in the past, I’ve had to ruin the surprise by calling him — and I’m not the only one. I realized in talking with friends this was a feature they all wanted without even knowing it… Shoes have such a cult status in wardrobes and it is time to unlock their gifting potential!”

Howe slid into this TechCrunch writer’s DMs with the eye-catching claim that Xesto’s foot-scanning technology is more accurate than Neatsy’s — sending a Xesto scan of her foot compared to Neatsy’s measure of it to back up the boast. (Aka: “We are under 1.5 mm accuracy. We compared against Neatsy right now and they are about 1.5 cm off of the true size of the app,” as she put it.)

Another big difference is Xesto isn’t selling any shoes itself. Nor is it interested in just sneakers; it’s shoe-type agnostic. If you can put it on your feet it wants to help you find the right fit, is the idea.

Right now the app is focused on the foot-scanning process and the resulting 3D foot models — showing shoppers their feet in a 3D point cloud view, another photorealistic view as well as providing granular foot measurements.

There’s also a neat feature that lets you share your foot scans so, for example, a person who doesn’t have their own depth-sensing iPhone could ask to borrow a friend’s to capture and take away scans of their own feet.

Helping people who want to be bought (correctly fitting) shoes as gifts is the main reason they’ve added foot-scan sharing, per Howe — who notes shoppers can create and store multiple foot profiles on an account “for ease of group shopping”.

“Xesto is solving two problems: Buying shoes [online] for yourself, and buying shoes for someone else,” she tells TechCrunch. “Problem 1: When you buy shoes online, you might be unfamiliar with your size in the brand or model. If you’ve never bought from a brand before, it is very risky to make a purchase because there is very limited context in selecting your size. With many brands you translate your size yourself.

“Problem 2: People don’t only buy shoes for themselves. We enable gift and family purchasing (within a household or remote!) by sharing profiles.”

Xesto is doing its size predictions based on comparing a user’s (<1.5mm accurate) foot measurements to brands’ official sizing guidelines — with more than 150 shoe brands currently supported.

Howe says it plans to incorporate customer feedback into these predictions — including by analyzing online reviews where people tend to specify if a particular shoe size is larger or smaller than expected. So it’s hoping to be able to keep honing the model’s accuracy.

“What we do is remove the uncertainty of finding your size by taking your 3D foot dimensions and correlate that to the brands sizes (or shoe model, if we have them),” she says. “We use the brands size guides and customer feedback to make the size recommendations. We have over 150 brands currently supported and are continuously adding more brands and models. We also recommend if you have extra wide feet you read reviews to see if you need to size up (until we have all that data robustly gathered).”
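
To make the mechanics concrete, here is a hedged sketch of the kind of size-chart lookup Howe describes, matching a measured foot length against a brand's official guide. The brand name and chart values are illustrative, not Xesto's data.

```python
# Hypothetical brand size chart: list of (max foot length in mm, US size).
BRAND_SIZE_CHART = {
    "ExampleBrand": [(245, 7), (250, 7.5), (255, 8), (260, 8.5), (265, 9)],
}

def recommend_size(brand: str, foot_length_mm: float):
    """Return the smallest listed size that accommodates the measured foot."""
    for max_mm, size in BRAND_SIZE_CHART[brand]:
        if foot_length_mm <= max_mm:
            return size
    return None  # longer than the chart covers: size up, or check reviews

print(recommend_size("ExampleBrand", 252.3))  # -> 8
```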

Asked about the competitive landscape, given all this foot-scanning action, Howe admits there’s a number of approaches trying to help with virtual shoe fit — such as comparative brand sizing recommendations or even foot scanning with pieces of paper. But she argues Xesto has an edge because of the high level of detail of its 3D scans — and on account of its social sharing feature. Aka this is an app to make foot scans you can send your bestie for shopping keepsies.

“What we do that is unique is only use 3D depth data and computer vision to create a 3D scan of the foot with under 1.5mm accuracy (unmatched as far as we’ve seen) in only a few minutes,” she argues. “We don’t ask you any information about your feet, or to use a reference object. We make size recommendations based on your feet alone, then let you share them seamlessly with loved ones. Size sharing is a unique feature we haven’t seen in the sizing space that we’re incredibly excited about (not only because we will get more shoes as gifts :D).”

Xesto’s iOS app is free for shoppers to download. It’s also entirely free to create and share your foot scan in glorious 3D point cloud — and will remain so according to Howe. The team’s monetization plan is focused on building out partnerships with retailers, which is on the slate for 2021.

“Right now we’re not taking any revenue but next year we will be announcing partnerships where we work directly within brands ecosystems,” she says, adding: “[We wanted to offer] the app to customers in time for Black Friday and the holiday shopping season. In 2021, we are launching some exciting initiatives in partnership with brands. But the app will always be free for shoppers!”

Since being founded around five years ago, Howe says Xesto has raised a pre-seed round from angel investors and secured national advanced research grants, as well as taking in some revenue over its lifetime. The team has one patent granted and one pending for their technologies, she adds.

If you didn’t make $1B this week, you are not doing VC right

By Danny Crichton

The only thing more rare than a unicorn is an exited unicorn.

At TechCrunch, we cover a lot of startup financings, but we rarely get the opportunity to cover exits. This week was an exception, though: it was exitpalooza, as Affirm, Roblox, Airbnb, and Wish all filed to go public. With DoorDash’s IPO filing last week, this is upwards of $100 billion in potential float heading to the public markets as we make our way to the end of a tumultuous 2020.

All those exits raise a simple question – who made the money? Which VCs got in early on some of the biggest startups of the decade? Who is going to be buying a new yacht for the family for the holidays (or, like, a fancy yurt for when Burning Man restarts)? The good news is that the wealth is being spread around at least a couple of VC firms, although there are definitely a handful of partners who are looking at a very, very nice check in the mail compared to others.

So let’s dive in.

I’ve covered DoorDash’s and Airbnb’s investor returns in-depth, so if you want to know more about those individual returns, feel free to check those analyses out. But let’s take a more panoramic perspective of the returns of these five companies as a whole.

First, let’s take a look at the founders. These are among the very best startups ever built, and therefore, unsurprisingly, the founders all did pretty well for themselves. But there are pretty wide variations that are interesting to note.

First, Airbnb — by far — has the best return profile for its founders. Brian Chesky, Nathan Blecharczyk, and Joe Gebbia together own nearly 42% of their company at IPO, and that’s after raising billions in venture capital. The reason for their success is simple: Airbnb may have had some tough early innings when it was just getting started, but once it did, its valuation just skyrocketed. That helped to limit dilution in its earlier growth rounds, and ultimately protected their ownership in the company.

David Baszucki of Roblox and Peter Szulczewski of Wish both did well: they own 12% and about 19% of their companies, respectively. Szulczewski’s co-founder Sheng “Danny” Zhang, who is Wish’s CTO, owns 4.9%. Eric Cassel, the co-founder of Roblox, did not disclose ownership in the company’s S-1 filing, indicating that he doesn’t own greater than 5% (the SEC’s reporting threshold).

DoorDash’s founders own a bit less of their company, mostly owing to the money-gobbling nature of that business and the sheer number of co-founders of the company. CEO Tony Xu owns 5.2% while his two co-founders Andy Fang and Stanley Tang each have 4.7%. A fourth co-founder Evan Moore didn’t disclose his share totals in the company’s filing.

Finally, we have Affirm. Affirm didn’t provide total share counts for the company, so it’s hard right now to get a full ownership picture. It’s also particularly hard because Max Levchin, who founded Affirm, was a well-known, multi-time entrepreneur who had a unique shareholder structure from the beginning (many of the venture firms on the cap table actually hold equal proportions of common and preferred shares). Levchin has more shares altogether than any of his individual VC investors — 27.5 million shares, compared to the second largest investor, Jasmine Ventures (a unit of Singapore’s GIC), at 22 million shares.

Neatsy wants to reduce sneaker returns with 3D foot scans

By Natasha Lomas

U.S.-based startup Neatsy AI is using the iPhone’s depth-sensing FaceID selfie camera as a foot scanner to capture 3D models for predicting a comfortable sneaker fit.

Its app, currently soft launched for iOS but due to launch officially next month, asks the user a few basic questions about sneaker fit preference before walking through a set of steps to capture a 3D scan of their feet using the iPhone’s front-facing camera. The scan is used to offer personalized fit predictions for a selection of sneakers offered for sale in-app — displaying an individualized fit score (out of five) in green text next to each sneaker model.

Shopping for shoes online can lead to high return rates once buyers actually get to slip on their chosen pair, since shoe sizing isn’t standardized across different brands. That’s the problem Neatsy wants its AI to tackle by incorporating another more individual fit signal into the process.

The startup, which was founded in March 2019, has raised $400K in pre-seed funding from angel investors to get its iOS app to market. The app is currently available in the US, UK, Germany, France, Italy, Spain, Netherlands, Canada and Russia. 

Neatsy analyzes app users’ foot scans using a machine learning model it’s devised to predict a comfy fit across a range of major sneaker brands — currently including Puma, Nike, Jordan Air and Adidas — based on scanning the insoles of sneakers, per CEO and founder Artem Semyanov.

He says they’re also factoring in the material shoes are made of and will be honing the algorithm on an ongoing basis based on fit feedback from users. (The startup says it’s secured a US patent for its 3D scanning tech for shoe recommendations.)
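
As a rough illustration of how foot and insole measurements could translate into a five-point fit score, here is a hedged sketch under invented tolerances (not Neatsy's actual model):

```python
# Hedged sketch: score a foot/insole pairing out of five. The ideal-slack
# values and penalty rates below are invented for illustration.
def fit_score(foot_len_mm, foot_width_mm, insole_len_mm, insole_width_mm):
    len_slack = insole_len_mm - foot_len_mm        # assume ~10 mm toe room is ideal
    width_slack = insole_width_mm - foot_width_mm  # assume ~2 mm is ideal
    len_penalty = abs(len_slack - 10) / 5          # 1 point per 5 mm off ideal
    width_penalty = abs(width_slack - 2) / 3
    return max(1.0, min(5.0, 5.0 - len_penalty - width_penalty))

print(round(fit_score(265, 100, insole_len_mm=276, insole_width_mm=103), 1))  # -> 4.5
```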

The team tested the algorithm’s efficiency via some commercial pilots this summer — and say they were able to demonstrate a 2.7x reduction in sneaker return rates based on size, and a 1.9x decrease in returns overall, for a focus group with 140 respondents.

Handling returns is clearly a major cost for online retailers — Neatsy estimates that sneaker returns specifically rack up $30BN annually for ecommerce outlets, factoring in logistics costs and other factors like damaged boxes and missing sneakers.

“All in all, shoe ecommerce returns vary among products and shops between 30% and 50%. The most common reasons for this category are fit & size mismatch,” says Semyanov, who headed up the machine learning team at Prism Labs prior to founding Neatsy.

“According to Zappos, customers who purchase its most expensive footwear ultimately return ~50% of everything they buy. 70% online shoppers make returns each year. Statista estimates return deliveries will cost businesses $550 billion by 2020,” he tells us responding to questions via email.

“A 2019 survey from UPS found that, for 73% of shoppers, the overall returns experience impacts how likely they are to purchase from a given retailer again, and 68% say the experience impacts their overall perceptions of the retailer. That’s the drama here!

“Retailers are forced to accept steep costs of returns because otherwise, customers won’t buy. Vs us who want to treat the main reasons of returns rather than treating the symptoms.”

While ecommerce giants like Amazon address this issue by focusing on logistics to reduce friction in the delivery process, speeding up deliveries and returns so customers spend less time waiting to get the right stuff, scores of startups have been trying to tackle size and fit with a variety of digital (and/or less high-tech) tools over the past five-plus years — from 3D body models to ‘smart’ sizing suits or even brand- and garment-specific sizing tape (Nudea‘s fit tape for bras) — though no one has managed to come up with a single solution that works for everything and everyone. And a number of these startups have deadpooled or been acquired by ecommerce platforms without a whole lot to show for it.

While Neatsy is attempting to tackle what plenty of other founders have tried to do on the fit front, it is at least targeting a specific niche (sneakers) — a relatively narrow focus that may help it hone a useful tool.

It’s also able to lean on mainstream availability of the iPhone’s sensing hardware to get a leg up. (Whereas a custom shoe design startup that’s been around for longer, Solely Original, has offered custom fit by charging a premium to send out an individual fit kit.)

But even zeroing in on sneaker comfort, Neatsy’s foot-scanning process does require the user to correctly navigate quite a number of steps (see the full flow in the video below). Plus you need to have a pair of single-block colored socks handy (stripy sock lovers are in trouble). So it’s not a two-second process, though the scan only has to be done once.

At the time of writing we hadn’t been able to test Neatsy’s scanning process for ourselves, as it requires an iPhone with a FaceID depth-sensing camera. On this writer’s 2nd-gen iPhone SE, the app allowed me to swipe through each step of the scan instruction flow but then hung at what should have been the commencement of scanning — displaying a green outline template of a left foot against a black screen.

This is a bug the team said they’ll be fixing so the scanner gets turned off entirely for iPhone models that don’t have the necessary hardware. (Its App Store listing states it’s compatible with the iPhone SE (2nd generation), though it doesn’t specify that the foot scan feature isn’t.)

While the current version of Neatsy’s app is a direct to consumer ecommerce play, targeting select sneaker models at app savvy Gen Z/Millennials, it’s clearly intended as a shopfront for retailers to check out the technology.

When we ask about this, Semyanov confirms its longer-term ambition is for its custom fit model to become a standard piece of the ecommerce puzzle.

“Neatsy app is our fastest way to show the world our vision of what the future online shop should be,” he tells TechCrunch. “It attracts users to shops and we get revenue share when users buy sneakers via us. The app serves as a new low-return sales channel for a retailer and as a way to see the economic effect on returns by themselves.

“Speaking long term we think that our future is B2B and all ecommerce shops would eventually have a fitting tech, we bet it will be ours. It will be the same as having a credit card payment integration in your online shop.”

Hover secures $60M for 3D imaging to assess and fix properties

By Ingrid Lunden

The US property market has proven to be more resilient than you might have assumed it would be in the midst of a coronavirus pandemic, and today a startup that’s built a computer vision tool to help owners assess and fix those properties more easily is announcing a significant round of funding as it sees a surge of growth in usage.

Hover — which has built a platform that uses eight basic smartphone photos to patch together a 3D image of your home that can then be used by contractors, insurance companies and others to assess a repair, price out the job, and then order the parts to do the work — has raised $60 million in new funding.

The Series D values the company at $490 million post-money, and significantly, it included a number of strategic investors. Three of the biggest insurance companies in the US — Travelers, State Farm Ventures, and Nationwide — led the round, with building materials giant Standard Industries, and other unnamed building tech firms, also participating. Past financial backers Menlo Ventures, GV (formerly Google Ventures), and Alsop Louie Partners as well as new backer Guidewire Software were also in this round.

This funding takes the total raised by Hover to just over $142 million, and for some context on its valuation, it’s a significant jump compared to its last round, a Series C in 2019, when Hover was valued at $280 million (according to PitchBook data).

Today’s funding, that valuation jump, and the interest from insurance firms comes on the heels of huge growth for the company. A.J. Altman, Hover’s founder and CEO, tells me that in 2016 the startup was making some $1 million in revenues. This year, it’s expecting to hit “north of $70 million” in its annual run rate, with insurance companies and other big business partners accounting for the majority of its growth.

Hover was founded in 2011 and first made its name with homeowners and the sole-trader and small-business contractors working on their homes, repairing roofs and fixing other parts of their structures. Its unique contribution to the market was a piece of software that bypassed much of the most fragmented and hardest work involved in home repair by tying the whole process to the functions of a smartphone: its camera, its sensors, and the use of apps.

In essence, it allowed anyone with an ordinary smartphone camera to snap several pictures of a space (up to 8), which could then be used to piece together a “structured” 3D image to better assess a job.

Those 3D images are not ordinary 3D pictures: they are dynamically encoded with information about materials, sizes and dimensions and other data critical to carrying out any work. A contractor using the Hover app could set up a system where these pictures, in turn, could be used to automatically create priced out quotes, with bills of material and timings for work, for their prospective clients. (Compare that to the “back of the business card” pricing that typifies quite a lot of jobs, in Altman’s words.)
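
Once real dimensions exist, the quoting step is mostly arithmetic. Here is a hedged sketch of what an automated quote could look like, with placeholder rates and a standard 10% waste factor (a roofing "square" is 100 sq ft):

```python
# Hedged sketch of automated quoting from measured dimensions.
# Rates and the waste factor are illustrative placeholders.
def roof_quote(area_sqft, price_per_square=450.0, labor_per_square=250.0,
               waste_factor=1.10):
    squares = area_sqft * waste_factor / 100  # one "square" = 100 sq ft
    materials = squares * price_per_square
    labor = squares * labor_per_square
    return {"squares": round(squares, 1),
            "materials": round(materials, 2),
            "labor": round(labor, 2),
            "total": round(materials + labor, 2)}

print(roof_quote(2000))
# {'squares': 22.0, 'materials': 9900.0, 'labor': 5500.0, 'total': 15400.0}
```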

And these days, Hover also serves as an e-commerce portal for builders to order in the parts to carry out that work.

The company has had a lot of traction in the market, in part because of how it’s digitized an analogue process that had previously been firmly offline and lacking in transparency, in what is essentially not just a fragmented process, but also a very fragmented marketplace, with some 100,000 home repair firms active in the US today.

“The home improvement segment was one of the few that was not online,” Altman said. “For example, if I needed a new roof, it’s not that easy to just tell me what that would cost. The reason is because someone has to pull dozens of measurements off a house before costing that out, estimating the time it would take to fix and so on. Hover built a pipeline that turns photos into all of those answers.” It currently has about 10,000 contractors using its app, Altman said, so there is still a lot of growth to go.

Altman said that in its early days, the company faced something of a hurdle convincing people of the usefulness of having an app that let even the homeowner take pictures of an issue on a property in order to start the process of finding someone to fix it.

That’s because even in an age where DIY is pretty commonplace — and The Home Depot, incidentally, is also a previous backer of Hover — many builders and their partners see that role as theirs, not their clients’.

That has changed a lot, especially in the last year in the age of a global health pandemic that has driven many to reduce social contacts to help contain the spread of the virus.

“Eliminating the need for on-site home visits is a huge deal, but we were spending a lot of time convincing some before Covid that this was a good idea,” he said. “The provider — whether it involved an insurance carrier or contractor — didn’t like the idea of engaging a homeowner, asking them, to do that work.” That has shifted considerably, he said, “with the Covid experience,” with many now asking for this option.

While smaller contractors account for more of Hover’s revenues today, insurance is the faster-growing segment of its business, Altman said, where large firms are integrating their apps with Hover’s, sending out links to customers to snap pictures with the Hover app that then get automatically sent to the insurance company’s app to kickstart the process of working through customers’ claims.

“It’s important to us that we provide our customers with the best possible experience, and Hover’s technology helps us to do that by creating a simpler, faster and more transparent claims process,” said Nick Seminara, executive vice president and chief claims officer of Travelers, in a statement. “We see a tremendous opportunity for Hover in the insurance industry, and we’re pleased to continue our partnership and invest in their future.”

Longer term, there are a number of areas where you could imagine Hover’s technology to apply. The company is already doing a lot of work in commercial buildings, and the next step is likely going to be expanding to more interior work, including home design and decor.

Digital “twinning”, as one investor described the process of creating digitized visualizations of physical spaces that can then be used for more analytics, and simply to improve the process of doing any work involving that space, is used in a number of industries. They include mapping and logistics, automotive applications, medicine, aerospace and defense, gaming, and more. That gives a company like Hover, which has some 35 patents on its technology already secured and has a team building more innovations into the process, a potentially large horizon and set of options for how it might grow.

But even the property market alone leaves a lot of room to explore. For example, you could see tech like this linking up with home sales firms, where companies are able to market not just a home but also fixer-uppers, with all the planning work for fixing them up set out for prospective buyers in advance; and also, of course, the extensive landscape of e-commerce businesses selling home furnishings, electronics and more.

Many of these, like Ikea and Houzz, have already put in a lot of investment into leveraging newer tech like Apple’s AR platform to improve their user experience, and so the appetite to take things to the next level is definitely there.

Ride Vision raises $7M for its AI-based motorcycle safety system

By Frederic Lardinois

Ride Vision, an Israeli startup that is building an AI-driven safety system to prevent motorcycle collisions, today announced that it has raised a $7 million Series A round led by crowdfunding platform OurCrowd. YL Ventures, which typically specializes in cybersecurity startups but also led the company’s $2.5 million seed round in 2018, Mobilion VC and motorcycle mirror manufacturer Metagal also participated in this round. The company has now raised a total of $10 million.

In addition to this new funding round, Ride Vision also today announced a new partnership with automotive parts manufacturer Continental.

“As motorcycle enthusiasts, we at Ride Vision are excited at the prospect of our international launch and our partnership with Continental,” Uri Lavi, CEO and co-founder of Ride Vision, said in today’s announcement. “This moment is a major milestone, as we stride toward our dream of empowering bikers to feel truly safe while they enjoy the ride.”

The general idea here is pretty straightforward and comparable with the blind-spot monitoring system in your car. Using computer vision, Ride Vision’s system, the Ride Vision 1, analyzes the traffic around a rider in real time. It provides forward collision alerts and monitors your blind spot, but it can also tell you when you’re following another rider or car too closely. It can also simply record your ride and, coming soon, it’ll be able to make emergency calls on your behalf when things go awry.
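
The core of any forward-collision or tailgating alert is time-to-collision: range divided by closing speed, compared against a reaction threshold. A hedged sketch of that logic, with an invented threshold rather than Ride Vision's:

```python
def ttc_seconds(range_m, closing_speed_mps):
    """Time to collision; None when the gap is holding or opening."""
    if closing_speed_mps <= 0:
        return None
    return range_m / closing_speed_mps

def forward_collision_alert(range_m, closing_speed_mps, threshold_s=2.0):
    # Warn when projected impact is sooner than the rider can react.
    ttc = ttc_seconds(range_m, closing_speed_mps)
    return ttc is not None and ttc < threshold_s

print(forward_collision_alert(18.0, 10.0))  # 1.8 s to impact -> True
```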

As the company argues, the number of motorcycles (and other motorized two-wheeled vehicles) has only increased during the pandemic, as people started avoiding public transport and looked for relatively affordable alternatives. In Europe, sales of two-wheeled vehicles increased by 30% during the pandemic.

The hardware on the motorcycle itself is pretty straightforward. It includes two wide-angle cameras at the front and rear, alert indicators on the mirrors, and the main computing unit. Ride Vision has patents on its human-machine warning interface and vision algorithms.

It’s worth noting that there are some blind-spot monitoring solutions for motorcycles on the market already, including those from Innovv and Senzar, and Honda also has patents on similar technologies. These, however, do not provide the kind of 360-degree view that Ride Vision is aiming for.

Ride Vision says its products will be available in Italy, Germany, Austria, Spain, France, Greece, Israel and the U.K. in early 2021, with the U.S., Brazil, Canada, Australia, Japan, India, China and others following later.

Computer vision startup Chooch.ai scores $20M Series A

By Ron Miller

Chooch.ai, a startup that hopes to bring computer vision more broadly to companies to help them identify and tag elements at high speed, announced a $20 million Series A today.

Vickers Venture Partners led the round with participation from 212, Streamlined Ventures, Alumni Ventures Group, Waterman Ventures and several other unnamed investors. Today’s investment brings the total raised to $25.8 million, according to the company.

“Basically we set out to copy human visual intelligence in machines. That’s really what this whole journey is about,” CEO and co-founder Emrah Gultekin explained. As the company describes it, “Chooch AI can rapidly ingest and process visual data from any spectrum, generating AI models in hours that can detect objects, actions, processes, coordinates, states, and more.”

Chooch is trying to differentiate itself from other AI startups by taking a broader approach that could work in any setting, rather than concentrating on specific vertical applications. Using the pandemic as an example, Gultekin says you could use his company’s software to identify everyone who is not wearing a mask in the building or everyone who is not wearing a hard hat at a construction site.

With 22 employees spread across the U.S., India and Turkey, Chooch is building a diverse company just by virtue of its geography, but as it doubles the workforce in the coming year, it wants to continue to build on that.

“We’re immigrants. We’ve been through a lot of different things, and we recognize some of the issues and are very sensitive to them. One of our senior members is a person of color and we are very cognizant of the fact that we need to develop that part of our company,” he said. At a recent company meeting, he said that they were discussing how to build diversity into the policies and values of the company as they move forward.

The company currently has 18 enterprise clients and hopes to use the money to add engineers and data scientists, and to begin building out a worldwide sales team as it continues to develop the product and expand its go-to-market effort.

Gultekin says that the company’s unusual name comes from a mix of the words choose and search. He says that it is also an old Italian insult. “It means dummy or idiot, which is what artificial intelligence is today. It’s a poor reflection of humanity or human intelligence in humans,” he said. His startup aims to change that.

Sales readiness platform MindTickle raises $100 million led by SoftBank Vision Fund 2

By Manish Singh

MindTickle, a startup that is helping hundreds of small and large firms improve their sales through its eponymous sales readiness platform, said on Monday it has raised $100 million in a new financing round.

The Pune and San Francisco-headquartered startup’s new financing round was led by SoftBank Vision Fund 2. The round is a combination of debt and equity, the startup said. Existing investors Norwest Venture Partners, Accel Partners, Canaan, NEA, NewView Capital, and Qualcomm Ventures also participated in the round, which according to a person familiar with the matter, valued the eight-year-old startup at roughly $500 million, up from about $250 million last year.

The vast majority of this $100 million round is equity investment, said Krishna Depura, co-founder and chief executive of MindTickle, in an interview with TechCrunch. He declined to disclose the specific amount, however, or to comment on the valuation.

We used to live in a seller’s world, where buyers had a small selection of choices from which they could pick their products. “You wanted to buy a car, there would be only one new car model every four years. Things have changed,” said Depura, noting that customers today have no shortage of companies trying to sell them similar lines of products.

While that’s great for customers, it means that companies have to put more effort to make a sale. A decade ago, as Depura watched Facebook and gaming firms like Zynga develop addictive products and services, he wondered if some of these learnings could be baked directly into modern age sales efforts.

That was the inception of MindTickle, which now helps companies guide their customer-facing teams. Regardless of what these firms are attempting to sell, they are competing with dozens of firms, if not more, and customers have ever-declining patience to hear them out.

MindTickle, whose name is inspired by the idea of gamifying mindsets, allows companies to train and upskill their salespeople at scale, using role-playing methods to help them practice their pitch and handle customers’ queries.

Depura said the platform helps salespeople measure their improvement in revenue metrics and offers feedback on the calls they made. The platform utilizes machine learning engines to serve personalized remediations and reinforcements to salespeople, he said.

More than 200 enterprises, including more than 40 of the Fortune 500 and Forbes Global 2000 firms, are among MindTickle’s clients today — though, citing confidential agreements, the firm said it can’t disclose several names. Some of the names it did share include MongoDB, Nutanix, Qualtrics, Procore, Square, Janssen, Cloudera, Dexcom, Merck & Co., and Benetton Group.

As of this writing, MindTickle was ranked the fifth best product for sales on G2, a popular marketplace for software and services.

“MindTickle’s track record of growth, quality of product and marquee customer base highlights their strengths,” said Sumer Juneja, Partner at SoftBank Investment Advisers, in a statement. “By delivering engaging and personalized training to users, MindTickle is uniquely placed to support businesses to increase revenue generation and extend critical capabilities within their existing workforce.” The Japanese investment group, which began conversations with MindTickle about three months ago, is exploring more investments in SaaS categories.

The new funding capital will allow MindTickle, which employs about 400 people in the U.S., Europe, and India, to further establish this new category, said Depura. The startup is developing new product features and will deploy the new funds to further grow in Europe, and the U.S., which is already one of its key markets.


Deep Science: Alzheimer’s screening, forest-mapping drones, machine learning in space, more

By Devin Coldewey

Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.

This week: a startup using UAV drones to map forests, a look at how machine learning can map social media networks and predict Alzheimer’s, improved computer vision for space-based sensors, and other news regarding recent technological advances.

Predicting Alzheimer’s through speech patterns

Machine learning tools are being used to aid diagnosis in many ways, since they’re sensitive to patterns that humans find difficult to detect. IBM researchers have potentially found such patterns in speech that are predictive of the speaker developing Alzheimer’s disease.

The system only needs a couple of minutes of ordinary speech in a clinical setting. The team used a large set of data (the Framingham Heart Study) going back to 1948, allowing patterns of speech to be identified in people who would later develop Alzheimer’s. The accuracy rate is about 71%, or 0.74 area under the curve for those of you more statistically informed. That’s far from a sure thing, but current basic tests are barely better than a coin flip in predicting the disease this far ahead of time.
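
For readers who want the metric concrete: AUC measures how often a true future case outranks a non-case when each subject is given a risk score. A minimal sketch with scikit-learn on toy labels and scores (not the study's data):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]   # 1 = later developed Alzheimer's (toy labels)
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9]  # model risk scores (toy)

print(roc_auc_score(y_true, y_score))  # 0.875 on this toy data; the paper reports 0.74
```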

This is very important because the earlier Alzheimer’s can be detected, the better it can be managed. There’s no cure, but there are promising treatments and practices that can delay or mitigate the worst symptoms. A non-invasive, quick test of well people like this one could be a powerful new screening tool and is also, of course, an excellent demonstration of the usefulness of this field of tech.

(Don’t read the paper expecting to find exact symptoms or anything like that — the array of speech features isn’t really the kind of thing you can look out for in everyday life.)

So-cell networks

Making sure your deep learning network generalizes to data outside its training environment is a key part of any serious ML research. But few attempt to set a model loose on data that’s completely foreign to it. Perhaps they should!

Researchers from Uppsala University in Sweden took a model used to identify groups and connections in social media, and applied it (not unmodified, of course) to tissue scans. The tissue had been treated so that the resultant images produced thousands of tiny dots representing mRNA.

Normally the different groups of cells, representing types and areas of tissue, would need to be manually identified and labeled. But the graph neural network, created to identify social groups based on similarities like common interests in a virtual space, proved it could perform a similar task on cells. (See the image at top.)
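
Before any graph network can run, the dots have to become a graph. Here is a hedged sketch of that preparation step, with nodes as mRNA dots and edges connecting nearby dots, using SciPy's k-d tree on random stand-in coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(500, 2))  # stand-in dot positions in a scan

tree = cKDTree(points)
pairs = tree.query_pairs(r=5.0)  # connect dots within 5 units of each other

edges = np.array(sorted(pairs))  # edge list a graph neural network can consume
print(f"{len(points)} nodes, {len(edges)} edges")
```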

“We’re using the latest AI methods — specifically, graph neural networks, developed to analyze social networks — and adapting them to understand biological patterns and successive variation in tissue samples. The cells are comparable to social groupings that can be defined according to the activities they share in their social networks,” said Uppsala’s Carolina Wählby.

It’s an interesting illustration not just of the flexibility of neural networks, but of how structures and architectures repeat at all scales and in all contexts. As without, so within, if you will.

Drones in nature

The vast forests of our national parks and timber farms have countless trees, but you can’t put “countless” on the paperwork. Someone has to make an actual estimate of how well various regions are growing, the density and types of trees, the range of disease or wildfire, and so on. This process is only partly automated, as aerial photography and scans only reveal so much, while on-the-ground observation is detailed but extremely slow and limited.

Treeswift aims to take a middle path by equipping drones with the sensors they need to both navigate and accurately measure the forest. By flying through much faster than a walking person, they can count trees, watch for problems and generally collect a ton of useful data. The company is still very early-stage, having spun out of the University of Pennsylvania and acquired an SBIR grant from the NSF.

“Companies are looking more and more to forest resources to combat climate change, but you don’t have a supply of people who are growing to meet that need,” Steven Chen, co-founder and CEO of Treeswift and a doctoral student in Computer and Information Science (CIS) at Penn Engineering, said in a Penn news story. “I want to help make each forester do what they do with greater efficiency. These robots will not replace human jobs. Instead, they’re providing new tools to the people who have the insight and the passion to manage our forests.”

Another area where drones are making lots of interesting moves is underwater. Oceangoing autonomous submersibles are helping map the sea floor, track ice shelves and follow whales. But they all have a bit of an Achilles’ heel in that they need to periodically be picked up, charged and their data retrieved.

Purdue engineering professor Nina Mahmoudian has created a docking system by which submersibles can easily and automatically connect for power and data exchange.

A yellow marine robot (left, underwater) finds its way to a mobile docking station to recharge and upload data before continuing a task. (Purdue University photo/Jared Pike)

The craft needs a special nosecone, which can find and plug into a station that establishes a safe connection. The station can be an autonomous watercraft itself, or a permanent feature somewhere — what matters is that the smaller craft can make a pit stop to recharge and debrief before moving on. If it’s lost (a real danger at sea), its data won’t be lost with it.

You can see the setup in action below:

https://youtu.be/kS0-qc_r0

Sound in theory

Drones may soon become fixtures of city life as well, though we’re probably some ways from the automated private helicopters some seem to think are just around the corner. But living under a drone highway means constant noise — so people are always looking for ways to reduce turbulence and resultant sound from wings and propellers.

Computer model of a plane with simulated turbulence around it. It looks like it’s on fire, but that’s turbulence.

Researchers at the King Abdullah University of Science and Technology found a new, more efficient way to simulate the airflow in these situations; fluid dynamics is essentially as complex as you make it, so the trick is to apply your computing power to the right parts of the problem. They were able to render only the flow near the surface of the theoretical aircraft in high resolution, finding that past a certain distance there was little point in knowing exactly what was happening. Improvements to models of reality don’t always need to be better in every way — after all, the results are what matter.
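
The idea of spending resolution only where it matters can be caricatured in a few lines. A hedged sketch of distance-based cell sizing (real CFD meshing is far more involved than this):

```python
# Hedged sketch: fine grid cells near the body, coarse cells beyond a cutoff.
def cell_size(distance_to_surface, fine=0.01, coarse=1.0, cutoff=5.0):
    return fine if distance_to_surface < cutoff else coarse

for d in [0.1, 2.0, 4.9, 5.1, 50.0]:
    print(d, "->", cell_size(d))  # 0.01 near the surface, 1.0 far away
```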

Machine learning in space

Computer vision algorithms have come a long way, and as their efficiency improves they are beginning to be deployed at the edge rather than at data centers. In fact it’s become fairly common for camera-bearing objects like phones and IoT devices to do some local ML work on the image. But in space it’s another story.

Image Credits: Cosine

Performing ML work in space was until fairly recently simply too expensive power-wise to even consider. That’s power that could be used to capture another image, transmit the data to the surface, etc. HyperScout 2 is exploring the possibility of ML work in space, and its satellite has begun applying computer vision techniques immediately to the images it collects before sending them down. (“Here’s a cloud — here’s Portugal — here’s a volcano…”)

For now there’s little practical benefit, but object detection can be combined with other functions easily to create new use cases, from saving power when no objects of interest are present, to passing metadata to other tools that may work better if informed.
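
One concrete use case from that list, sketched under toy assumptions (this is not HyperScout 2's model): classify each frame on board and skip the downlink when nothing of interest is present.

```python
import numpy as np

def cloud_fraction(image):
    """Toy proxy classifier: fraction of very bright pixels, treated as cloud."""
    return (image > 0.8).mean()

def should_downlink(image, max_cloud=0.5):
    # Skip transmission, saving power and bandwidth, for mostly cloudy frames.
    return cloud_fraction(image) <= max_cloud

frame = np.random.default_rng(1).uniform(0, 1, size=(512, 512))
print(should_downlink(frame))  # ~20% "cloud" here -> True
```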

In with the old, out with the new

Machine learning models are great at making educated guesses, and in disciplines where there’s a large backlog of unsorted or poorly documented data, it can be very useful to let an AI make a first pass so that graduate students can use their time more productively. The Library of Congress is doing it with old newspapers, and now Carnegie Mellon University’s libraries are getting into the spirit.

CMU’s million-item photo archive is in the process of being digitized, but to make it useful to historians and curious browsers it needs to be organized and tagged — so computer vision algorithms are being put to work grouping similar images, identifying objects and locations, and doing other valuable basic cataloguing tasks.
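
Grouping similar images can be done crudely without any training at all. Here is a hedged sketch that buckets near-duplicates by a difference hash (dHash); real archive pipelines would use learned embeddings, but the cataloguing pattern is the same, and the filenames are hypothetical.

```python
from collections import defaultdict
from PIL import Image

def dhash(path, size=8):
    """64-bit difference hash: compare each pixel to its right neighbor."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = [px[r * (size + 1) + c] > px[r * (size + 1) + c + 1]
            for r in range(size) for c in range(size)]
    return sum(1 << i for i, b in enumerate(bits) if b)

groups = defaultdict(list)
for path in ["photo_001.jpg", "photo_002.jpg"]:  # hypothetical archive files
    groups[dhash(path)].append(path)  # identical hashes land in one bucket
```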

“Even a partly successful project would greatly improve the collection metadata, and could provide a possible solution for metadata generation if the archives were ever funded to digitize the entire collection,” said CMU’s Matt Lincoln.

A very different project, yet one that seems somehow connected, is this work by a student at the Escola Politécnica da Universidade de Pernambuco in Brazil, who had the bright idea to try sprucing up some old maps with machine learning.

The tool they used takes old line-drawing maps and attempts to create a sort of satellite image based on them using a Generative Adversarial Network; GANs essentially attempt to trick themselves into creating content they can’t tell apart from the real thing.

Image Credits: Escola Politécnica da Universidade de Pernambuco

Well, the results aren’t what you might call completely convincing, but it’s still promising. Such maps are rarely accurate but that doesn’t mean they’re completely abstract — recreating them in the context of modern mapping techniques is a fun idea that might help these locations seem less distant.

mmhmm, Phil Libin’s new startup, acquires Memix to add enhanced filters to its video presentation toolkit

By Ingrid Lunden

Virtual meetings are a fundamental part of how we interact with each other these days, but even when (if!?) we find better ways to mitigate the effects of Covid-19, many think that they will be here to stay. That means there is an opportunity out there to improve how they work — because let’s face it, Zoom Fatigue is real and I for one am not super excited anymore to be a part of your Team.

mmhmm — the video presentation startup from former Evernote CEO Phil Libin, with ambitions to change the conversation (literally and figuratively) about what we can do with the medium; its first efforts have included things like the ability to manipulate presentation material around your video in real time to mimic newscasts — is today announcing an acquisition as it continues to home in on a wider launch of its product, currently in a closed beta.

It has acquired Memix, an outfit out of San Francisco that has built a series of filters you can apply to videos — either pre-recorded or streaming — to change the lighting, details in the background, or across the whole of the screen, and an app that works across various video platforms to apply those filters.

Like mmhmm, Memix is today focused on building tools that you use on existing video platforms — not building a video player itself. Memix today comes in the form of a virtual camera, accessible via Windows apps for Zoom, WebEx and Microsoft Teams; or web apps like Facebook Messenger, Houseparty and others that run on Chrome, Edge and Firefox.
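
At its simplest, a filter of this kind is per-frame pixel math applied before the frame reaches the meeting app. A hedged sketch of a warm-lighting filter with OpenCV and NumPy (not Memix's code):

```python
import cv2
import numpy as np

def warm_light(frame, gain=1.15, warmth=12):
    """Brighten the frame and shift its tone warmer (frame is BGR uint8)."""
    out = frame.astype(np.float32) * gain
    out[..., 2] += warmth  # push red up...
    out[..., 0] -= warmth  # ...and blue down
    return np.clip(out, 0, 255).astype(np.uint8)

cap = cv2.VideoCapture(0)  # the real product registers a virtual camera instead
ok, frame = cap.read()
if ok:
    cv2.imwrite("filtered.png", warm_light(frame))
cap.release()
```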

Libin said in an interview that the plan will be to keep that virtual camera operating as is while it works on integrating the filters and Memix’s technology into mmhmm, while also laying the groundwork for building more on top of the platform.

Libin’s view is that while there are already a lot of video products and users in the market today, we are just at the start of it all, with technology and our expectations changing rapidly. We are shifting, he said, from wanting to reproduce existing experiences (like meetings) to creating completely new ones that might actually be better.

“There is a profound change in the world that we are just at the beginning of,” he said in an interview. “The main thing is that everything is hybrid. If you imagine all the experiences we can have, from in person to online, or recorded to live, up to now almost everything in life fit neatly into one of those quadrants. The boundaries were fixed. Now that all these boundaries have melted away, we can rebuild every experience to be natively hybrid. This is a monumental change.”

That is a concept that the Memix founders have not just been thinking about, but also building the software to make it a reality.

“There is a lot to do,” said Pol Jeremias-Vila, one of the co-founders. “One of our ideas was to try to provide people who do streaming professionally an alternative to the really complicated set-ups you currently use,” which can involve expensive cameras, lights, microphones, stands and more. “Can we bring that to a user just with a couple of clicks? What can be done to put the same kind of tech you get with all that hardware into the hands of a massive audience?”

Memix’s team of two — co-founders Inigo Quilez and Jeremias-Vila, Spaniards who met not in Spain but the Bay Area — are not coming on board full-time, but they will be helping with the transition and integration of the tech.

Libin said that he first became aware of Quilez from a YouTube video he’d posted on “The principles of painting with maths”, but that doesn’t give a lot away about the two co-founders. They are in reality graphic engineering whizzes, with Jeremias-Vila currently the lead graphics software engineer at Pixar, and Quilez until last year a product manager and lead engineer at Facebook, where he created, among other things, the Quill VR animation and production tool for Oculus.

Because working the kind of hours that people put in at tech companies wasn’t quite enough time to spend on graphics applications, the pair started another effort called Beauty Pi (not to be confused with Beauty Pie), which has become a home for various collaborations between the two that had nothing to do with their day jobs. Memix had been bootstrapped by the pair as a project built out of that. Other efforts have included Shadertoy, a community and platform for creating shaders (programs used to shade 3D scenes).

That background of Memix points to an interesting opportunity in the world of video right now. In part because of all the focus (sorry, not sorry!) on video as a medium under our current pandemic circumstances, but also because of advances in broadband, devices, apps and video technology, we’re seeing a huge proliferation of startups building interesting variations and improvements on the basic concept of video streaming.

Just in the area of videoconferencing alone, some of the hopefuls have included Headroom, which launched the other week with a really interesting AI-based approach to helping its users get more meaningful notes from meetings, and using computer vision to help presenters “read the room” better by detecting if people are getting bored, annoyed and more.

Vowel is also bringing a new set of tools not just to annotate meetings and their corresponding transcriptions in a better way, but to then be able to search across all your sessions to follow up items and dig into what people said over multiple events.

And Descript, which originally built a tool to edit audio tracks, earlier this week launched a video component, letting users edit visuals and what you say in those moving pictures, by cutting, pasting and rewriting a word-based document transcribing the sound from that video. All of these have obvious B2B angles, like mmhmm, and they are just the tip of the iceberg.

Indeed, the huge amount of IP out there is interesting in itself. Yet the jury is still out on where all of it would best live and thrive as the space continues to evolve, with more defined business models (and leading companies) only now emerging.

That presents an interesting opportunity not just for the biggies like Zoom, Google and Microsoft, but also for players building entirely new platforms from the ground up.

mmhmm is a notable company in that context. Not only does it have the reputation and inspiration of Libin behind it — a force powerful enough that even his foray into the ill-fated world of chatbots got headlines — but it’s also backed by the likes of Sequoia, which led a $21 million round earlier this month.

Libin said he doesn’t like to think of his startup as a consolidator, or the industry in a consolidation play, as that implies a degree of maturity in an area that he still feels is just getting started.

“We’re looking at this not so much consolidation, which to me means marketshare,” he said. “Our main criteria is that we wanted to work with teams that we are in love with.”

Acapela, from the founder of Dubsmash, hopes ‘asynchronous meetings’ can end Zoom fatigue

By Steve O'Hear

Acapela, a new startup co-founded by Dubsmash founder Roland Grenke, is breaking cover today in a bid to re-imagine online meetings for remote teams.

Hoping to put an end to video meeting fatigue, the product is described as an “asynchronous meeting platform,” which Grenke and Acapela’s other co-founder, ex-Googler Heiki Riesenkampf (who has a deep learning computer science background), believe could be the key to unlock better and more efficient collaboration. In some ways the product can be thought of as the antithesis to Zoom and Slack’s real-time and attention-hogging downsides.

To launch, the Berlin-based and “remote friendly” company has raised €2.5 million in funding. The round is led by Visionaries Club with participation from various angel investors, including Christian Reber (founder of Pitch and Wunderlist) and Taavet Hinrikus (founder of TransferWise). I also understand Entrepreneur First is a backer and has assigned EF venture partner Benedict Evans to work on the problem. If you’ve seen the ex-Andreessen Horowitz analyst writing about a post-Zoom world lately, now you know why.

Specifically, Acapela says it will use the injection of cash to expand the core team, focusing on product, design and engineering as it continues to build out its offering.

“Our mission is to make remote teams work together more effectively by having fewer but better meetings,” Grenke tells me. “With Acapela, we aim to define a new category of team collaboration that provides more structure and personality than written messages (Slack or email) and more flexibility than video conferencing (Zoom or Google Meet)”.

Grenke believes some form of asynchronous meetings is the answer, where participants don’t have to interact in real-time but the meeting still has an agenda, goals, a deadline and — if successfully run — actionable outcomes.

“Instead of sitting through hours of video calls on a daily basis, users can connect their calendars and select meetings they would like to discuss asynchronously,” he says. “So, as an alternative to everyone being in the same call at the same time, team members contribute to conversations more flexibly over time. Like communication apps in the consumer space, Acapela allows rich media formats to be used to express your opinion with voice or video messages while integrating deeply with existing productivity tools (like GSuite, Atlassian, Asana, Trello, Notion, etc.)”.
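
Reading between the lines, a minimal data model for such a meeting might look like the following sketch; the fields and the deadline rule are our own assumptions, not Acapela’s actual schema.

```python
# Illustrative model of an "asynchronous meeting": an agenda and a deadline,
# with rich-media contributions arriving over time and actionable outcomes
# recorded at the end. Not Acapela's actual schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Contribution:
    author: str
    kind: str      # "text" | "voice" | "video"
    content: str   # text body or a media URL
    posted_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class AsyncMeeting:
    agenda: list[str]
    deadline: datetime
    contributions: list[Contribution] = field(default_factory=list)
    outcomes: list[str] = field(default_factory=list)

    def add(self, contribution: Contribution) -> None:
        # Participants contribute flexibly over time, but only until the deadline.
        if datetime.utcnow() > self.deadline:
            raise ValueError("meeting closed: raise this in a follow-up meeting")
        self.contributions.append(contribution)
```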

In addition, Acapela will utilise what Grenke says are the latest machine learning techniques to help automate repetitive meeting tasks, as well as to summarise the contents of a meeting and any decisions taken. If made to work, that in itself could be significant.

“Initially, we are targeting high-growth tech companies which have a high willingness to try out new tools while having an increasing need for better processes as their teams grow,” adds the Acapela founder. “In addition to that, they tend to have a technical global workforce across multiple time zones which makes synchronous communication much more costly. In the long run we see a great potential tapping into the space of SMEs and larger enterprises, since COVID has been a significant driver of the decentralization of work also in the more traditional industrial sectors. Those companies make up more than 90% of our European market and many of them have not switched to new communication tools yet”.

SoftBank’s $100 million diversity and inclusion fund makes its first bet … in health

By Jonathan Shieber

SoftBank’s Opportunity Growth Fund has made health insurance startup Vitable Health the recipient of the first commitment from its $100 million fund dedicated to investing in startups founded by entrepreneurs of color.

The Philadelphia-based company, which recently launched from Y Combinator, is focused on bringing basic health insurance to underserved and low-income communities.

Founded by Joseph Kitonga, a 23-year-old entrepreneur whose parents immigrated to the U.S. a decade ago, Vitable provides affordable acute healthcare coverage to underinsured or uninsured populations. It was born out of Kitonga’s experience watching employees of his parents’ home healthcare agency struggle to receive basic coverage.

The $1.5 million commitment was led by the SoftBank Group Corp Opportunity Fund, and included Y Combinator, DNA Capital, Commerce Ventures, MSA Capital, Coughdrop Capital, and angels like Immad Akhund, the chief executive of Mercury Bank; and Allison Pickens, the former chief operating officer of Gainsight, the company said in a blog post.

“Good healthcare is a basic right that every American deserves, whoever they are,” said Paul Judge, the Atlanta-based Early Stage Investing Lead for the fund and the founder of Atlanta’s TechSquare Labs investment fund. “We’ve been inspired by Joseph and his approach to addressing this challenge. Vitable Health is bridging critical gaps in patient care and has emerged as a necessary, essential service for all whether they’re uninsured, underinsured, or simply need a better plan for their lifestyle.”

SoftBank created the Opportunity Fund while cities around the U.S. were witnessing a wave of public protests against systemic racism and police brutality in the wake of the murder of George Floyd, a Black Minneapolis man killed at the hands of white police officers. Floyd’s murder reignited simmering tensions between citizens and police in cities around the country over issues including police brutality, the militarization of civil authorities, and racial profiling.

SoftBank has had its own problems with racism in its portfolio this year. A few months before the firm launched its fund, the CEO and founder of one of its portfolio companies, Banjo, resigned after it was revealed that he once had ties to the KKK.

With the Opportunity Fund, SoftBank is trying to address some of its issues, and notably, will not take a traditional management fee for transactions out of the fund “but instead will seek to put as much capital as possible into the hands of founders and entrepreneurs of color.”

The Opportunity Fund is the third investment vehicle announced by SoftBank in the last several years. The biggest of them all is the $100 billion Vision Fund; then last year it announced the $2 billion Innovation Fund focused on Latin America.

Venn, a network hoping to be gaming’s answer to MTV, raises $26 million

By Jonathan Shieber

VENN, the streaming network hoping to be gaming culture’s answer to MTV, has raised $26 million to bring its mix of video game-themed entertainment and streaming celebrity features to the masses.

The financing came from previous investor Bitkraft, one of the largest funds focused on the intersection of gaming and synthetic reality, and new investor Nexstar Media Group, a publicly traded operator of regional television broadcast stations and cable networks around the U.S.

The investment from Nexstar gives Venn a toehold in local broadcast that could see the network’s shows appear on regular broadcast televisions in most major American cities, and adds to a roster of Nexstar properties including CourtTV, Bounce, and Ion Television. The company has over 197 television stations and a network of websites that average over 100 million monthly active users and 1 billion page views, according to a statement from Ben Kusin, Venn’s co-founder and chief executive.

“VENN is a new kind of TV network built for the streaming and digital generation, and it’s developing leading-edge content for the millennial and Gen Z cultures who are obsessed with gaming,” Nexstar Media Group President, Chief Operating Officer and Chief Financial Officer, Thomas E. Carter said in a statement. “Gaming and esports are two fast growing sectors and through our investment we plan to distribute VENN content across our broadcast platform to address a younger audience; utilize VENN to gain early access to gaming-adjacent content; and present local and national brands with broadcast and digital marketing and advertising opportunities to reach younger audiences.”

It’s unclear how much traction with younger audiences Venn has. The company’s YouTube channel has 14,000 subscribers and its Twitch channel boasts a slightly more impressive 57,700 subscribers. Still, it’s early days for the streaming network, which only began airing its first programming in September.

Since its launch a little over a year ago, Venn has managed to poach some former senior leadership from Viacom’s MTV and MTV Music Entertainment Group, the model the gaming-focused streaming network has set for itself. Among them is Jeff Jacobs, the former senior vice president for production planning, strategies and operations at MTV’s parent company, Viacom, and most recently an independent producer for Viacom, the NBA, Global Citizen and ACE Universe.

Venn is currently available on its own website and various streaming services as well as through partnerships with the Roku Channel, Plex, Xumo, Samsung TV Plus and Vizio.

The company has also managed to pick up some early brand partnerships with companies including Subway, DraftKings, Alienware, Adidas and American Eagle.


Intel is providing the smarts for the first satellite with local AI processing on board

By Darrell Etherington

Intel today detailed its contribution to PhiSat-1, a new small satellite that was launched into sun-synchronous orbit on September 2. PhiSat-1 has a new kind of hyperspectral-thermal camera on board, and also includes a Movidius Myriad 2 Vision Processing Unit (VPU). That VPU is found in a number of consumer devices on Earth, but this is its first trip to space – and the first time it’ll be processing large amounts of data locally, saving researchers back on Earth precious time and satellite downlink bandwidth.

Specifically, the AI on board PhiSat-1 will handle automatic identification of cloud cover – images where the Earth the scientists actually want to see is obscured by clouds. Discarding these images before they’re even transmitted means the satellite can realize bandwidth savings of up to 30%, so more useful data reaches Earth during the windows when the satellite is in range of ground stations.
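
To make the mechanism concrete, here is a rough sketch of what that onboard filtering loop could look like; the brightness heuristic stands in for the trained network Ubotica actually runs on the Myriad 2, and the threshold is our own assumption.

```python
# Sketch of onboard cloud filtering: score each captured tile and only queue
# sufficiently clear images for downlink, saving bandwidth on cloudy shots.
import numpy as np

CLOUD_THRESHOLD = 0.7  # assumed cutoff; the real mission's threshold isn't public

def cloud_fraction(tile: np.ndarray) -> float:
    """Placeholder for the onboard model: estimate cloud cover in a tile.
    A real system would run a trained network on the VPU; here we use the
    naive proxy that clouds are bright."""
    return float(tile.mean() / 255.0)

def filter_for_downlink(tiles):
    """Keep only tiles clear enough to be worth transmitting to ground."""
    return [t for t in tiles if cloud_fraction(t) < CLOUD_THRESHOLD]

tiles = [
    np.full((64, 64), 240, dtype=np.uint8),  # mostly cloud: discarded
    np.full((64, 64), 60, dtype=np.uint8),   # mostly clear: kept
]
print(len(filter_for_downlink(tiles)))  # 1
```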

The AI software that runs on the Intel Myriad 2 on PhiSat-1 was created by startup Ubotica, which worked with the hardware maker behind the hyperspectral camera. It also had to be tuned to compensate for excess exposure to radiation, though, somewhat surprisingly, testing at CERN found that the hardware itself didn’t have to be modified in order to perform within the standards required for its mission.

Computing at the edge takes on a whole new meaning when applied to satellites in orbit, but it’s definitely a place where local AI makes a ton of sense. All the same reasons that companies seek to handle data processing and analytics at the site of sensors here on Earth also apply in space – magnified exponentially by things like network inaccessibility and connection quality – so expect to see a lot more of this.

PhiSat-1 was launched in September as part of Arianespace’s first rideshare demonstration mission, which the company aims to use to show off its ability to offer launch services to smaller startups with smaller payloads at lower costs.

Tiliter bags $7.5M for its ‘plug and play’ cashierless checkout tech

By Natasha Lomas

Tiliter, an Australian startup that’s using computer vision to power cashierless checkout tech that replaces the need for barcodes on products, has closed a $7.5 million Series A round of funding led by Investec Emerging Companies.

The 2017-founded company is using AI for retail product recognition — claiming advantages such as removing the need for retail staff to manually identify loose items that don’t have a barcode (e.g. fresh fruit or baked goods), as well as reductions in packaging waste.

It also argues the AI-based product recognition system reduces incorrect product selections (either intentional or accidental).

“Some objects simply don’t have barcodes which causes a slow and poor experience of manual identification,” says co-founder and CEO Martin Karafilis. “This is items like bulk items, fresh produce, bakery pieces, mix and match etc. Sometimes barcodes are not visible or can be damaged.

“Most importantly there is an enormous amount of plastic created in the world for barcodes and identification packaging. With this technology we are able to dramatically decrease and, in some cases, eliminate single use plastic for retailers.”

Currently the team is focused on the supermarket vertical — and claims over 99% accuracy in under one second for its product identification system.

It’s developed hardware that can be added to existing checkouts to run the computer vision system — with the aim of offering retailers a “plug and play” cashierless solution.

Marketing text on its website adds of its AI software: “We use our own data and don’t collect any in-store. It works with bags, and can tell even the hardest sub-categories apart such as Truss, Roma, and Gourmet tomatoes or Red Delicious, Royal Gala and Pink Lady apples. It can also differentiate between organic and non-organic produce [by detecting certain identification indicators that retailers may use for organic items].”

“We use our pre-trained software,” says Karafilis when asked whether there’s a need for a training period to adapt the system to a retailer’s inventory. “We have focused on creating a versatile and scalable software solution that works for all retailers out of the box. In the instance an item isn’t in the software it can be collected by the supermarket in approx 20min and has self-learning capabilities.”
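
As a rough illustration of what that flow might look like at the checkout, consider the sketch below: a pretrained classifier scores the camera frame, the top prediction auto-fills the point of sale when it clears a confidence bar, and anything else falls back to manual selection. The labels, threshold and classifier stub are hypothetical, not Tiliter’s pipeline.

```python
# Hypothetical checkout flow for barcode-free product recognition: auto-select
# the product only when the classifier is confident, else ask the shopper.
import random

LABELS = ["roma tomato", "truss tomato", "royal gala apple", "pink lady apple"]
CONFIDENCE_BAR = 0.95  # assumed threshold, tuned per retailer in practice

def classify(frame: bytes):
    """Stand-in for the vision model: return (label, confidence)."""
    return random.choice(LABELS), random.uniform(0.90, 1.00)

def identify_product(frame: bytes) -> str:
    label, confidence = classify(frame)
    if confidence >= CONFIDENCE_BAR:
        return label                       # auto-fill the point of sale
    return "manual_selection_required"     # fall back to shopper or staff

print(identify_product(b"fake camera frame"))
```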

As well as a claim of easy installation, given the hardware can bolt onto existing retail IT, Tiliter touts lower cost than “currently offered autonomous store solutions”. (Amazon is one notable competitor on that front.)

It sells the hardware outright, charging a yearly subscription fee for the software (this includes a pledge of 24/7 global service and support).

“We provide proprietary hardware (camera and processor) that can be retrofitted to any existing checkout, scale or point of sale system at a low cost integrating our vision software with the point of sale,” says Karafilis, adding that the pandemic is driving demand for easy to implement cashierless tech.

The startup cites, as an example, a 300% increase in ‘scan and go’ adoption in the US over the past year due to COVID-19, adding that further global growth is expected.

It’s not breaking out customer numbers at this stage — but early adopters for its AI-powered product recognition system include Woolworths in Australia, with over 20 live stores; Countdown in New Zealand; and several retail chains in the US, such as New York City’s Westside Market.

The Series A funding will go on accelerating expansion across Europe and the US — with “many” supermarkets set to adopt its tech over the coming months.

Pimloc gets $1.8M for its AI-based visual search and redaction tool

By Natasha Lomas

U.K.-based Pimloc has closed a £1.4 million (~$1.8 million) seed funding round led by Amadeus Capital Partners. Existing investor Speedinvest and other unnamed shareholders also participated in the round.

The 2016-founded computer vision startup launched an AI-powered photo classifier service called Pholio in 2017 — pitching the service as a way for smartphone users to reclaim agency over their digital memories without having to hand over their data to cloud giants like Google.

It has since pivoted to position Pholio as a “specialist search and discovery platform” for large image and video collections and live streams (such as those owned by art galleries or broadcasters) — and also launched a second tool powered by its deep learning platform. This product, Secure Redact, offers privacy-focused content moderation tools — enabling its users to find and redact personal data in visual content.

An example use case it gives is for law enforcement to anonymize bodycam footage so it can be repurposed for training videos or prepared for submitting as evidence.

“Pimloc has been working with diverse image and video content for several years, supporting businesses with a host of classification, moderation and data protection challenges (image libraries, art galleries, broadcasters and CCTV providers),” CEO Simon Randall tells TechCrunch.

“Through our work on the visual privacy side we identified a critical gap in the market for services that allow businesses and governments to manage visual data protection at scale on security footage. Pimloc has worked in this area for a couple of years building capability and product; as a result, Pimloc has now focused the business solely around this mission.”

Secure Redact has two components: A first (automated) step that detects personal data (e.g. faces, heads, bodies) within video content. On top of that is what Randall calls a layer of “intelligent tools” — letting users quickly review and edit results.

“All detections and tracks are auditable and editable by users prior to accepting and redacting,” he explains, adding: “Personal data extends wider than just faces into other objects and scene content, including ID cards, tattoos, phone screens (body-worn cameras have a habit of picking up messages on the wearer’s phone screen as they are typing, or sensitive notes on their laptop or notebook).”
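
For a sense of the mechanics, here is a bare-bones sketch of that detect-review-redact flow, using OpenCV’s stock Haar face detector as a stand-in for Pimloc’s models. Secure Redact detects far more categories of personal data and tracks them across video; this toy example only blurs faces in a single, hypothetical frame.

```python
# Detect-then-redact in miniature: propose regions containing personal data,
# let a human review them, then blur the accepted regions.
import cv2

def detect_faces(frame):
    """Automated step: propose regions with personal data (faces only here)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def redact(frame, regions):
    """Redaction step: blur each accepted region in place."""
    for (x, y, w, h) in regions:
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

frame = cv2.imread("bodycam_frame.jpg")  # hypothetical input frame
detections = detect_faces(frame)
# In Secure Redact, a user would review and edit the detections at this point.
cv2.imwrite("bodycam_frame_redacted.jpg", redact(frame, detections))
```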

One specific user of redaction with the tool he mentions is the University of Bristol. There, a research group, led by Dr Dima Damen, an associate professor in computer vision, is participating in an international consortium of 12 universities which is aiming to amass the largest data set on egocentric vision — and needs to be able to anonymise the video data set before making it available for academic/open source use.

On the legal side, Randall says Pimloc offers a range of data processing models — thereby catering to differences in how/where data can be processed. “Some customers are happy for Pimloc to act as data processor and use the Secure Redact SaaS solution — they manage their account, they upload footage and can review/edit/update detections prior to redaction and usage. Some customers run the Secure Redact system on their servers where they are both data controller and processor,” he notes.

“We have over 100 users signed up for the SaaS service covering mobility, entertainment, insurance, health and security. We are also in the process of setting up a host of on-premise implementations,” he adds.

Asked which sectors Pimloc sees driving the most growth for its platform in the coming years, he lists the following: smart cities/mobility platforms (with safety/analytics demand coming from the likes of councils, retailers, AVs); the insurance industry, which he notes is “capturing and using an increasing amount of visual data for claims and risk monitoring” and thus “looking at responsible systems for data management and processing”; video/telehealth, with traditional consultations moving into video and driving demand for visual diagnosis; and law enforcement, where security goals need to be supported by “visual privacy designed in by default” (at least where forces are subject to European data protection law).

On the competitive front, he notes that startups are increasingly focusing on specialist application areas for AI — arguing they have an opportunity to build compelling end-to-end propositions which are harder for larger tech companies to focus on.

For Pimloc specifically, he argues it has an edge in its particular security-focused niche — given “deep expertise” and specific domain experience.

“There are low barriers to entry to create a low-quality product but very high technical barriers to create a service that is good enough to use at scale with real ‘in the wild’ footage,” he argues, adding: “The generalist services of the larger tech players do not match up with the domain-specific provisions of Pimloc/Secure Redact. Video security footage is a difficult domain for AI; systems trained on lifestyle/celebrity or other general data sets perform poorly on real security footage.”

Commenting on the seed funding in a statement, Alex van Someren, MD of Amadeus Capital Partners, said: “There is a critical need for privacy by design and large-scale solutions, as video grows as a data source for mobility, insurance, commerce and smart cities, while our reliance on video for remote working increases. We are very excited about the potential of Pimloc’s products to meet this challenge.”

“Consumers around the world are rightfully concerned with how enterprises are handling the growing volume of visual data being captured 24/7. We believe Pimloc has developed an industry leading approach to visual security and privacy that will allow businesses and governments to manage the usage of visual data whilst protecting consumers. We are excited to support their vision as they expand into the wider Enterprise and SaaS markets,” added Rick Hao, principal at Speedinvest, in another supporting statement.

This Sony OLED Is the Best Prime Day TV Deal (2020)

By Parker Hall

If you're a discerning cinephile, this is a great price on the Sony A8H OLED TV, one of the prettiest screens of the year.

Plenty has raised over $500 million to grow fruits and veggies indoors

By Jonathan Shieber

Plenty Unlimited has raised $140 million in new funding to build more vertical farms around the U.S.

The new funding, which brings the company’s total cash haul to an abundant $500 million, was led by existing investor SoftBank Vision Fund and included the berry farming giant Driscoll’s. It’s a move that will give Driscoll’s exposure to Plenty’s technology for growing and harvesting fruits and vegetables indoors.

The funding comes as Plenty has inked agreements with both its new berry-interested investor and the Albertsons grocery chain. The company also announced plans to build a new farm in Compton, California.

The financing provides plenty of cash for a company facing a cornucopia of competition in the tech-enabled crop cultivation market, where rivals are raising a plethora of private and public capital.

In the past month, AppHarvest has agreed to be taken public by a special purpose acquisition company in a deal that would value that greenhouse tomato-grower at a little under $1 billion. And another leafy green grower, Revol Greens, has raised $68 million for its own greenhouse-based bid to be part of the new green revolution.

Meanwhile, Plenty’s more direct competitor, Bowery Farming, is expanding its retail footprint to 650 stores, even as Plenty touts its deal with Albertsons to provide greens to 431 stores in California.

Driscoll’s seemed convinced by Plenty’s technology, although the terms of the agreement between the two companies weren’t disclosed.

“We looked at other vertical farms, and Plenty’s technology was one of the most compelling systems we’d seen for growing berries,” said J. Miles Reiter, Driscoll’s chairman and CEO, in a statement. “We got to know Plenty while working on a joint development agreement to grow strawberries. We were so impressed with their technology, we decided to invest.”

Getaround raises a $140 million Series E amid rebound in short-distance travel

By Natasha Mascarenhas

Amid a rebound in short-distance travel, Getaround, a Silicon Valley car rental startup, has raised some new money to meet demand. The startup, which allows customers to instantly rent cars near them in over 100 cities, announced today that it has raised $140 million in a Series E deal, bringing its total known venture funding to $600 million.

The Series E deal was led by PeopleFund, with new investors including Reid Hoffman’s and Mark Pincus’ Reinvent Capital, AmRest founder Henry McGovern, Pennant Investors, and VectoIQ partners Steve Girsky, Mary Chan and Julia Steyn also deploying capital. Participating prior investors include SoftBank Vision Fund, Menlo Ventures, and more.

The money comes after the car-sharing service faced its own set of hurdles before and during the coronavirus pandemic. In January, the startup reportedly laid off 150 employees, reducing field operations and the size of numerous global teams. In March, bookings dropped 75%, according to CEO Sam Zaid, and Getaround laid off another 100 employees. Zaid pointed to struggles within SoftBank, which led a $300 million Series D round in the company in mid-2018, as part of the reason.

Now, Zaid says that “Softbank has been an extremely supportive partner to Getaround at every critical stage of our journey this year including in January and through COVID,” in a statement to TechCrunch. The investor, noted above, participated in the latest financing.

The pandemic seems to have gone from a pain point to an opportunity for growth at Getaround. After the March layoffs, Getaround saw demand for its service come back in May: people didn’t want to fly because of the risk of catching COVID-19, but they didn’t mind driving. Getaround focused on contactless access to passenger cars and improving the platform. As short-distance trips to local destinations became a more attainable option for those itching to get away, Getaround found green shoots. By July 1, Getaround had rehired all of its furloughed employees, according to Zaid.

Zaid estimates that Getaround has seen worldwide revenue more than double from its pre-COVID baseline and says gross margins have continued to improve. The financing, which was raised in the summer, will be used to help the business invest in car technology, bring on new partners, and reach global profitability.

Getaround, per Zaid, currently has over 6 million users globally.

Getaround isn’t alone in benefitting from consumers’ new travel tastebuds. Airbnb, which cut 1,900 jobs or 25% of its entire global workforce, is finding hope in focusing on local rentals. In June, according to the WSJ, Airbnb entirely redesigned its website and algorithm to show travelers where they could rent in their neighborhoods. The travel company is rumored to be going public in November.

Along with the financing, Getaround announced four new executives: head of North American business Dan Kim, who formerly worked as the head of Airbnb Plus and head of global sales and delivery at Tesla; CFO Laura Onopchenko, the former CFO of NerdWallet; vice president of people and culture Tia Gordon, formerly the director of people operations at Google; and vice president of customer experience Ruth Yankoupe, former vice president of customer experience at OYO.
