FreshRSS

Today — December 1st, 2020

Ivanti has acquired security firms MobileIron and Pulse Secure

By Zack Whittaker

IT security software company Ivanti has acquired two security companies: enterprise mobile security firm MobileIron, and corporate virtual network provider Pulse Secure.

In a statement on Tuesday, Ivanti said it bought MobileIron for $872 million in stock, with 91% of the shareholders voting in favor of the deal; and acquired Pulse Secure from its parent company Siris Capital Group, but did not disclose the buying price.

The deals have now closed.

Ivanti was founded in 2017 after Clearlake Capital, which owned Heat Software, bought Landesk from private equity firm Thoma Bravo, and merged the two companies to form Ivanti. The combined company, headquartered in Salt Lake City, focuses largely on enterprise IT security, including endpoint, asset, and supply chain management. Since its founding, Ivanti went on to acquire several other companies, including U.K.-based Concorde Solutions and RES Software.

If MobileIron and Pulse Secure sound familiar, it’s because both companies have faced their fair share of headlines this year after hackers began exploiting vulnerabilities found in their technologies.

Just last month, the U.K. government’s National Cyber Security Centre published an alert warning of a remotely executable bug in MobileIron, patched in June, that allowed hackers to break into enterprise networks. U.S. Homeland Security’s cybersecurity advisory unit CISA said that the bug was being actively used by advanced persistent threat (APT) groups, typically associated with state-backed hackers.

Meanwhile, CISA also warned that Pulse Secure was one of several corporate VPN providers with vulnerabilities that have since become a favorite among hackers, particularly ransomware actors, who abuse the bugs to gain access to a network and deploy the file-encrypting ransomware.

Bottom-up SaaS: A framework for mapping pricing to customer value

By Walter Thompson
Caryn Marooney Contributor
Caryn Marooney is general partner at Coatue Management and sits on the boards of Zendesk and Elastic. In prior roles she oversaw communications for Facebook, Instagram, WhatsApp and Oculus and co-founded The OutCast Agency, which served clients like Salesforce.com and Amazon.
David Cahn Contributor
David Cahn is an investor at Coatue, where he focuses on software investments. David is passionate about open-source and infrastructure software and previously worked in the Technology Investment Banking Group at Morgan Stanley.

A few years ago, building a bottom-up SaaS company – defined as a firm where the average purchasing decision is made without ever speaking to a salesperson – was a novel concept. Today, by our count, at least 30% of the Cloud 100 are now bottom-up.

For the first time, individual employees are influencing the tooling decisions of their companies versus having these decisions mandated by senior executives. Self-serve businesses thrive on this momentum, leveraging individuals as their evangelists, to grow from a single use-case to small teams, and ultimately into whole company deployments.

In a truly self-service model, individual users can sign up and try the product on their own. There is no need to get compliance approval for sensitive data or to get IT support for integrations — everything can be managed by the line-level users themselves. Then that person becomes an internal champion, driving adoption across the organization.

Today, some of the most well-known software companies such as Datadog, MongoDB, Slack and Zoom, to name a few, are built with a primarily bottom-up product-led sales approach.

In this piece, we will take a closer look at this trend — and specifically how it has fundamentally altered pricing — and at a framework for mapping pricing to customer value.

Aligning value with pricing

In a bottom-up SaaS world, pricing has to be transparent and standardized (at least for the most part, see below). It’s the only way your product can sell itself. In practice, this means you can no longer experiment as you go, with salespeople using their gut instinct to price each deal. You need a concrete strategy that aligns customer value with pricing.

To do this well, you need to deeply understand your customers and how they use your product. Once you do, you can “MAP” them to help align pricing with value.

The MAP customer value framework

The MAP customer value framework requires deeply understanding your customers in order to clearly identify and articulate their needs across Metrics, Activities and People.

Not all elements of MAP should determine your pricing, but chances are that one of them will be the right anchor for your pricing model:

Metrics: Metrics can include things like minutes, messages, meetings, data and storage. What are the key metrics your customers care about? Is there a threshold of value associated with these metrics? By tracking key metrics early on, you’ll be able to understand if growing a certain metric increases value for the customer. For example:

  • Zoom — Minutes: Free with a 40-minute time limit on group meetings.
  • Slack — Messages: Free until 10,000 total messages.
  • Airtable — Records: Free until 1,200 records.
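The metric-anchored model above can be sketched in code. This is a minimal illustration using thresholds that mirror the Slack and Airtable examples listed; it is not any company's actual billing logic, and the numbers are only as accurate as the examples above:

```python
from dataclasses import dataclass

@dataclass
class MetricTier:
    """A free-tier threshold anchored on a single value metric."""
    metric: str
    free_limit: int

# Thresholds mirroring the examples above (illustrative, not official pricing).
TIERS = {
    "messages": MetricTier("messages", 10_000),  # Slack-style message cap
    "records": MetricTier("records", 1_200),     # Airtable-style record cap
}

def requires_upgrade(metric: str, usage: int) -> bool:
    """Return True once usage crosses the free-tier limit for that metric."""
    return usage > TIERS[metric].free_limit
```

The point of anchoring on a metric like this is that the upgrade trigger fires exactly when the customer has demonstrated value, with no salesperson in the loop.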

Activity: How do your customers really use your product and how do they describe themselves? Are they creators? Are they editors? Do different customers use your product differently? Instead of metrics, a key anchor for pricing may be the different roles users have within an organization and what they want and need in your product. If you choose to anchor on activity, you will need to align feature sets and capabilities with usage patterns (e.g., creators get access to deeper tooling than viewers, or admins get high privileges versus line-level users). For example:

AWS announces Panorama, a device that adds machine learning technology to any camera

By Jonathan Shieber

AWS has launched a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, will transform existing on-premises cameras into computer vision enabled super-powered surveillance devices.

AWS is pitching the hardware as a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are being followed, or analyze traffic in retail stores. The new automation service is part of the theme of this AWS re:Invent event — automate everything.

Along with computer vision models that companies can develop using Amazon SageMaker, the new Panorama Appliance can run those models on video feeds from networked or network-enabled cameras.

Soon, AWS expects to release the Panorama SDK, which device manufacturers can use to build Panorama-enabled devices.

Amazon has already pitched surveillance technologies to developers and the enterprise before. Back in 2017, the company unveiled DeepLens, which it began selling one year later. It was a way for developers to build prototype machine learning models and for Amazon to get comfortable with different ways of commercializing computer vision capabilities.

As we wrote in 2018:

DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models… Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up … DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model and a model that can distinguish between cats and dogs and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.


Amazon has had a lot of experience (and controversy) when it comes to the development of machine learning technologies for video. The company’s Rekognition software sparked protests and pushback which led to a moratorium on the use of the technology.

And the company has tried to incorporate more machine learning capabilities into its consumer facing Ring cameras as well.

Still, enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety, and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted to not only adapt to the current epidemic, but plan ahead for spaces and protocols that can help mitigate the severity of the next one.


Amazon announces a bunch of products aimed at the industrial sector

By Ron Miller

One of the areas that is often left behind when it comes to cloud computing is the industrial sector. That’s because these facilities often have older equipment or proprietary systems that aren’t well suited to the cloud. Amazon wants to change that, and today the company announced a slew of new services at AWS re:Invent aimed at helping the industrial sector understand their equipment and environments better.

For starters, the company announced Amazon Monitron, which is designed to monitor equipment and alert the engineering team when that equipment may be breaking down. If industrial companies can know when their equipment is failing, they can repair it on their own terms, rather than waiting until it breaks down at what could be an inopportune time.

As AWS CEO Andy Jassy says, an experienced engineer will know when equipment is breaking down by a certain change in sound or a vibration, but if the machine could tell you even before it got that far, it would be a huge boost to these teams.

“…a lot of companies either don’t have sensors, they’re not modern powerful sensors, or they are not consistent and they don’t know how to take that data from the sensors and send it to the cloud, and they don’t know how to build machine learning models, and our manufacturing companies we work with are asking [us] just solve this [and] build an end-to-end solution. So I’m excited to announce today the launch of Amazon Monitron, which is an end-to-end solution for equipment monitoring,” Jassy said.

The company builds a machine learning model that understands what a normal state looks like, then uses that information to find anomalies and send back information to the team in a mobile app about equipment that needs maintenance now based on the data the model is seeing.
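Amazon has not published Monitron's internals, but the general pattern described here (learn a baseline of normal sensor readings, then flag large deviations) can be illustrated with a simple standard-deviation check. All numbers below are hypothetical:

```python
import statistics

def fit_baseline(readings):
    """Learn what 'normal' looks like from healthy-equipment sensor data."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomaly(value, mean, stdev, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * stdev

# Vibration amplitudes recorded from a healthy machine (made-up values).
baseline = fit_baseline([1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98])
```

A production system would use far richer models over multi-axis vibration and temperature data, but the contract is the same: readings that deviate from the learned normal state trigger a maintenance alert in the mobile app.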

For those companies that may have a more modern system and don’t need the complete package that Monitron offers, Amazon has something as well. If you have modern sensors but not a sophisticated machine learning model, Amazon can ingest your data and apply its machine learning algorithms to find anomalies, just as it can with Monitron.

“So we have something for this group of customers as well to announce today, which is the launch of Amazon Lookout for Equipment, which does anomaly detection for industrial machinery,” he said.

In addition, the company announced the Panorama Appliance for companies using cameras at the edge who want to use more sophisticated computer vision, but might not have the most modern equipment to do that. “I’m excited to announce today the launch of the AWS Panorama Appliance which is a new hardware appliance [that allows] organizations to add computer vision to existing on premises smart cameras,” Jassy told AWS re:Invent today.

In addition, it also announced a Panorama SDK to help hardware vendors build smarter cameras based on Panorama.

All of these services are designed to give industrial companies access to sophisticated cloud and machine learning technology at whatever level they may require depending on where they are on the technology journey.

AWS updates its edge computing solutions with new hardware and Local Zones

By Frederic Lardinois

AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outposts service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.

In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.

The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.


As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail in building their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. None of the existing solutions from other vendors got any traction because of this, Jassy argues (though AWS’s competitors would surely deny it).


The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.

With Outposts, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.

Gift Guide: Camping and backpacking gear that the outdoors lover in your life really wants

By Lucas Matney

Welcome to TechCrunch’s 2020 Holiday Gift Guide! Need help with gift ideas? We’re here to help! We’ll be rolling out gift guides from now through the end of December. You can find our other guides right here.

Like plenty of others, I dug much deeper into the great outdoors and camping this summer amid social distancing restrictions. It’s pretty easy to stay COVID-safe when you’re several days’ wander into the wilderness. Whether it’s a fun day hike, a car camping excursion or a multi-day backcountry backpacking trip, there’s plenty of great camping and hiking gear that can make life easier for the outdoors person (without going overboard).

I bought a ton of camping gear online this year. I had the fortune of timing a 40-mile backpacking excursion through the Los Padres National Forest with one of REI’s annual sales, a time when the majority of online camping retailers also tend to offer steep discounts on their stuff. Most of my gear was optimized for backpacking and I ended up replacing most of my decade-old gear with some lighter, better-quality stuff. Backpacking leaves room for fewer luxuries, but add a few car camping trips and you’ll see the fun in bringing the nice-to-haves into your outdoors gear repertoire.

With camping gear, you can almost always find a good sale on any individual item during the year, so stay patient and keep an eye out. Plenty of sites offer one-off discounts for first-time buyers or have pretty reliably timed, wide-ranging seasonal sales, so if you’re smart about your purchase, you can get it at a discount.

One note to hammer home: when buying gear, one of the main things to consider is whether you anticipate getting bit by the backpacking bug. It’s not always easy to tell ahead of time, but if you do think you’ll end up using your gear on backpacking trips, you’ll want to account pretty heavily for the weight of any new gear. You can certainly upgrade later too, but it’s always good to future-proof when you can. If you’re just planning to hop into the car and hit up a nice drive-in campsite, you have a lot less to worry about in terms of size and weight restrictions which makes things much simpler.

These are all things I bought with my own money or am planning to buy at some point, so no sponsored suggestions here. That said, this article contains links to affiliate partners where available. When you buy through these links, TechCrunch may earn an affiliate commission.

Mpowerd Luci Solar String Lights

Image: MPOWERD

When it comes to camping, light can really expand your options for what you can do at night. I’ve been one to rely on campfire light during the evenings but with campfire bans hitting plenty of campsites in California this season, I upped my lighting game this year.

These string lights come in a nicely designed package and are perfect for adding some ambiance and solidly bright light to your campsite. They’re a bit of a luxury but they provide a good bit of light on multiple brightness settings. The company now also makes a version with colored lights if you want to get festive.

The lights do suck up a decent amount of power, so they may only last you a night or two on a single charge depending on your usage, but the handy built-in solar charger can help there. Truthfully, I’ve always had mixed success with relying on solar charging, so I might save this one for the car-camping trips where you have easy access to somewhere to charge the light with its integrated USB cable.

Price: $28 from Amazon

Sea to Summit dry sacks

Image: Sea to Summit

Though mobile gear is increasingly gaining waterproof IP ratings, especially when it comes to higher-end camera gear, not everything is friendly with moisture. One purchase I made this year that felt like a no-brainer was a set of small dry-bags. They are certainly a more expensive option than the humble zip-locks which I’ve been using for years, but while a wet roll of toilet paper or map can be a bummer, a wet mirrorless camera is a disaster.

Dry bags keep the wetness out. They’re also just a nice and functional organizational tool to keep all of your tech gear together and protected from the elements. Earlier this summer I bought this small 3-pack which is sized perfectly for the tech gear I tend to bring along. Later I bought a much larger 35L sack to house gear like my sleeping bag and clothes that I really need to keep dry while hiking through river beds or while it’s raining.

I opted for a set of Sea to Summit bags, which seem to be the gold standard, but if you search for dry bags on the web, you’ll come across plenty of sets with good ratings. Just be sure to peruse the reviews to get a sense of their durability, which is the thing that matters most.

Price: $43 from REI

Garmin inReach Explorer+

Image: Garmin

I have two big items on my next wish list for backpacking gear upgrades to make before next season. One is a bear can to stuff my food and toiletries into when backpacking through Tahoe’s Desolation Wilderness as I soon hope to. The other is the inReach Explorer+. I’ve relied on friends with handheld GPS units in the past but Garmin’s option, which seems to be quite popular, bundles a GPS unit with a phone that operates on a satellite network.

You need a plan for the device to use the satellite network, which you can activate on a monthly basis whenever you need it. That network is good for a couple things: sending off text messages with GPS points to friends and relatives so they can see your progress and know you’re safe, while also being able to reach the outside world if you find yourself in an emergency and might need to be rescued. While these evacuations are assuredly going to be a pricey affair, it’s never worth gambling with your life or opting for a backup plan that you might not make it back from.

Garmin also sells a mini version of the inReach that eschews GPS navigation and a decent screen size for a much smaller footprint, more of a “don’t use it unless you absolutely need to” version. I will also quickly note that satellite phones are actually illegal to have on you in some countries so be sure to check out whether that’s the case before you pack one in your bag.

Price: $450 from Garmin

Helinox Chair Zero

Image: Helinox

These chairs are probably some of the best things I’ve ever purchased. Oddly, I actually haven’t used them that much while backpacking, which seems to be the intent of the product given how light they are at just over 1 pound, but they’ve been amazing for tossing in a tote bag for a day at the park.

I’ve gotten so much use out of them partially because I live in a city and don’t own a car. If I had a car, I might just opt for a larger and cheaper folding chair that I could keep in the trunk. That said, what’s great about these is that they are light enough to bring backpacking — though they are definitely still a luxury item to bring along. My one complaint is that these chairs don’t play so nicely with sand or mud, so you want to find a fairly hard surface to set them up on if you want to feel fully secure placing your full weight on these tiny chairs.

I got these for about half-off when I bought them, but there are definitely cheaper options than those from Helinox if you can’t find a deal and don’t mind an extra pound of weight or so. I have friends who are particularly big fans of the REI versions.

Side note: this year I also found a deal on a lightweight Helinox hard-top table which has been great for playing board games on or setting up a cook station.

Price: $150 from Amazon

A giant duffel bag

Image: REI

One of the big issues with amassing a collection of camping gear is storing it all during those non-camping months. The best solution for this is a big ol’ duffel bag. They’re great to store your gear in, and it’s so easy to just toss a duffel in your car when you’re ready to go camping and not have to deal with a dozen little trips to the car and back.

I ended up buying a 90-liter REI bag during a sale, but I’ve heard great things about the bags from North Face and Patagonia as well. This size fits a ton and has the added advantage of being just about the maximum size for a standard checked bag on a flight; anything larger will require an oversized baggage fee. These bags go on sale pretty often, so I wouldn’t rush into buying, especially if you don’t need one ASAP.

Price: Varies; the one above is $140 right now from REI

Travel chess set

Image: Kidami

Cards are great, but sometimes you want to spice up your options for games. For those of you who have just binged through Queen’s Gambit, I’ll recommend searching for a good travel chess board.

I ended up going for this very random travel chess set on Amazon because the magnetic board made me feel confident I wouldn’t lose all of the pieces immediately. It’s not the most high-quality-feeling set, but the price was right and it strikes a good balance. There are definitely plenty of options that are more robust or more lightweight.

Price: $18 from Amazon

Nalgene Mini Bottles

Image: REI

One thing every camper should have in their gear collection is a bunch of different sized mini Nalgene bottles. These things are great and can hold your soap, shampoo, oil, sauces, booze and other liquids securely and (as long as you’re religious about tightening the screw-top bottles) can ensure that you won’t have any accidental spills.

I use these aggressively for meal planning and measure out the various quantities of a liquid or sauce I’ll need for a given meal and toss them inside a bigger plastic bag with all of the ingredients. As such, I have a few sizes ranging from an ounce to 4 ounces. That’s not a use case everyone needs when you’re car-camping and don’t have the luxury of measuring everything ahead of time, but they’re also awesome for toiletry kits and I use the 2 ounce bottle for shampoo and soap when I’m flying and want to bring my own stuff.

One complaint is that these will hold onto the smell of some more pungent liquids even after you wash them so keep that in mind and maybe be careful to separate the ones you’re storing your toiletries in from the ones holding sauces.

Price: $2 from REI

Who’s building the grocery store of the future?

By Walter Thompson
Christopher Wan Contributor
Chris is a venture fellow at Bessemer Venture Partners and a JD/MBA candidate at Stanford University who writes a weekly newsletter about tech, policy and business strategy.

The future of grocery stores will be a win-win for both stores and customers.

On one hand, stores want to decrease their operational expenditures that come from hiring cashiers and conducting inventory management. On the other hand, consumers want to decrease the friction of buying groceries. This friction includes both finding high-quality groceries at consumers’ personal price points and waiting in long lines for checkout. The future of grocery stores promises to alleviate, and even eliminate, these points of friction.

Amazon’s foray into grocery store technology provides a succinct introduction to the state of the industry. Amazon’s first act was its Amazon Go store, which opened in Seattle in early 2018. When customers enter an Amazon Go store, they swipe the Amazon app at the entrance, enabling Amazon to link purchases to their accounts. As they shop, a collection of ceiling cameras and shelf sensors identifies the items and places them in a virtual shopping cart. When they’re done shopping, Amazon automatically charges them for the items they grabbed.

Earlier this year, Amazon opened a 10,400-square-foot Go store, about five times bigger than the largest prior location. At larger store sizes, however, tracking people and products gets more computationally complex and larger SKU counts become more difficult to manage. This is especially true if the computer vision AI-based system also must be retrofitted into buildings that come with nooks and crannies that can obstruct camera angles and affect lighting.

Perhaps Amazon’s confidence in its ability to scale its Go stores comes from vertical integration that enables it to optimize customer experiences through control over store format, product selection and placement.

While Amazon Go is vertically integrated, in Amazon’s second act, it revealed a separate, more horizontal strategy: Earlier this year, Amazon announced that it would license its cashierless Just Walk Out technology.

In Just Walk Out-enabled stores, shoppers enter the store using a credit card. They don’t need to download an app or create an Amazon account. Using cameras and sensors, the Just Walk Out technology detects which products shoppers take from or return to the shelves and keeps track of them. When done shopping, as in an Amazon Go store, customers can “just walk out” and their credit card will be charged for the items in their virtual cart.
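The bookkeeping behind a virtual cart can be sketched as a running tally of take and return events, with the total charged when the shopper walks out. This is an illustration of the concept only, not Amazon's implementation; the catalog and prices are made up:

```python
from collections import Counter

PRICES = {"apple": 0.50, "milk": 3.25}  # hypothetical product catalog

class VirtualCart:
    """Tracks shelf events for one shopper during a Just Walk Out-style trip."""

    def __init__(self):
        self.items = Counter()

    def take(self, sku):
        """Camera/sensor event: shopper removed an item from a shelf."""
        self.items[sku] += 1

    def put_back(self, sku):
        """Camera/sensor event: shopper returned an item to a shelf."""
        if self.items[sku] > 0:
            self.items[sku] -= 1

    def checkout_total(self):
        """Amount charged to the shopper's card on exit."""
        return sum(PRICES[sku] * n for sku, n in self.items.items())
```

The hard part in practice is not this tally but attributing each shelf event to the right shopper from noisy camera and sensor data.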

Just Walk Out may enable Amazon to penetrate the market much more quickly, as Amazon promises that existing stores can be retrofitted in “as little as a few weeks.” Amazon can also get massive amounts of data to improve its computer vision systems and machine learning algorithms, accelerating the speed with which it can leverage those capabilities elsewhere.

In Amazon’s third and latest act, Amazon in July announced its Dash Cart, a departure from its two prior strategies. Rather than equipping stores with ceiling cameras and shelf sensors, Amazon is building smart carts that use a combination of computer vision and sensor fusion to identify items placed in the cart. Customers take barcoded items off shelves, place them in the cart, wait for a beep, and then one of two things happens: Either the shopper gets an alert telling them to try again, or they receive a green signal to confirm the item was added to the cart correctly.

For items that don’t have a barcode, the shopper can add them to the cart by manually adding them on the cart screen and confirming the measured weight of the product. When a customer exits through the store’s Amazon Dash Cart lane, sensors automatically identify the cart, and payment is processed using the credit card on the customer’s Amazon account. The Dash Cart is specifically designed for small- to medium-sized grocery trips that fit two grocery bags and is currently only available in an Amazon Fresh store in California.

The pessimistic interpretation of Amazon’s foray into grocery technology is that its three strategies are mutually incompatible, reflecting a lack of conviction on the correct strategy to commit to. Indeed, the vertically integrated smart store strategy suggests Amazon is willing to incur massive fixed costs to optimize the customer experience. The modular smart store strategy suggests Amazon is willing to make the tradeoff in customer experience for faster market penetration.

The smart cart strategy suggests that smart stores are too complex to capture all customer behaviors correctly, thus requiring Amazon to restrict the freedom of user behavior. The more charitable interpretation, however, is that, well, Amazon is one of the most customer-centric companies in the world, and it has the capital to experiment with different approaches to figure out what works best.

While Amazon serves as a helpful case study of the current state of the industry, many other players exist in the space, all using different approaches to build an aspect of the grocery store of the future.

Cashierless checkout

According to some estimates, people spend more than 60 hours per year standing in checkout lines. Cashierless checkout changes everything, as shoppers are immediately identified upon entry and can grab products from the shelf and leave the store without having to interact with a cashier. Different companies have taken different approaches to cashierless checkout:

Smart shelves: Like Amazon Go, some companies utilize computer vision mounted on ceilings and advanced sensors on shelves to detect when shoppers take an item from the shelf. Companies associate the correct item with the correct shopper, and the shopper is charged for all the items they grabbed when they are finished with their shopping journey. Standard Cognition, Zippin and Trigo are some of the leaders in computer vision and smart shelf technology.

Smart carts and baskets: Like Amazon’s Dash Cart, some companies are moving the AI and the sensors from the ceilings and shelves to the cart. When a shopper places an item in their cart, the cart can detect exactly which item was placed and the quantity of that item. Caper Labs, for instance, is pursuing a smart cart approach. Its cart has a credit card reader for the customer to checkout without a cashier.

Touchless checkout kiosks: Touchless checkout kiosk stations use overhead cameras that verify and charge a customer for their purchase. For instance, Mashgin built a kiosk that uses computer vision to quickly verify a customer’s items when they’re done shopping. Customers can then pay using a credit card without ever having to scan a barcode.

Self-scanning: Some companies still require customers to scan items themselves, but once items are scanned, checkout becomes quick and painless. Supersmart, for instance, built a mobile app for customers to quickly scan products as they add them to their carts. When customers are finished shopping, they scan a QR code at a Supersmart kiosk, which verifies that the items in the cart match the items scanned using the mobile app. Amazon’s Dash Cart, described above, also requires a level of human involvement in manually adding certain items to the cart.
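None of these vendors publish their matching logic, but the verification step in a self-scanning flow can be pictured as a simple tally comparison. The sketch below is a toy with invented item names, not any company's actual system:

```python
from collections import Counter

def verify_cart(scanned_items, detected_items):
    """Compare the items a shopper scanned in the app against the items
    a kiosk's sensors detected in the cart. Returns (ok, discrepancies),
    where discrepancies maps each mismatched item to its
    (scanned_count, detected_count) pair."""
    scanned = Counter(scanned_items)
    detected = Counter(detected_items)
    discrepancies = {
        item: (scanned[item], detected[item])
        for item in scanned.keys() | detected.keys()
        if scanned[item] != detected[item]
    }
    return (not discrepancies, discrepancies)
```

In practice the detected side would come from weight sensors or computer vision, and a mismatch would likely flag the cart for a human check rather than block the shopper.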

Notably, even with the approaches detailed above, cashiers may not be going anywhere just yet because they still play important roles in the customer shopping experience. Cashiers, for instance, help to bag a customer’s items quickly and efficiently. They can also conduct random checks of customers’ bags as they leave the store and check IDs for alcohol purchases. Finally, cashiers can untangle tricky corner cases where automated systems fail to detect or validate certain shoppers’ carts. Grabango and FutureProof are therefore building hybrid cashierless checkout systems that keep a human in the loop.

Advanced software analytics

Google launches Android Enterprise Essentials, a mobile device management service for small businesses

By Sarah Perez

Google today introduced a new mobile management and security solution, Android Enterprise Essentials, which, despite its name, is actually aimed at small to medium-sized businesses. The company explains this solution leverages Google’s experience in building Android Enterprise device management and security tools for larger organizations in order to come up with a simpler solution for those businesses with smaller budgets.

The new service includes the basics in mobile device management, with features that allow smaller businesses to require their employees to use a lock screen and encryption to protect company data. It also prevents users from installing apps outside the Google Play Store via the Google Play Protect service, and allows businesses to remotely wipe all the company data from phones that are lost or stolen.

As Google explains, smaller companies often handle customer data on mobile devices, but many of today’s remote device management solutions are too complex for small business owners and often difficult to get up and running.

Android Enterprise Essentials attempts to make the overall setup process easier by eliminating the need to manually activate each device. And because the security policies are applied remotely, there’s nothing the employees themselves have to configure on their own phones. Instead, businesses that want to use the new solution will just buy Android devices from a reseller to hand out or ship to employees with policies already in place.

Though primarily aimed at smaller companies, Google notes the solution may work for select larger organizations that want to extend some basic protections to devices that don’t require more advanced management solutions. The new service can also help companies get started with securing their mobile device inventory, before they move up to more sophisticated solutions over time, including those from third-party vendors.

The company has been working to better position Android devices for use in the workplace over the past several years, with programs like Android for Work, Android Enterprise Recommended, partnerships focused on ridding the Play Store of malware, advanced device protections for high-risk users, endpoint management solutions, and more.

Google says it will roll out Android Enterprise Essentials initially with distributors Synnex in the U.S. and Tech Data in the U.K. In the future, it will make the service available through additional resellers as it takes the solution global in early 2021. Google will also host an online launch event and demo in January for interested customers.

YC-backed BuildBuddy raises $3.15M to help developers build software more quickly

By Alex Wilhelm

BuildBuddy, whose software helps developers compile and test code quickly using a blend of open-source technology and proprietary tools, announced a funding round today worth $3.15 million. 

The company was part of the Winter 2020 Y Combinator batch, which saw its traditional demo day in March turned into an all-virtual affair. The startups from the cohort then had to raise capital as the public markets crashed around them and fear overtook the startup investing world.

BuildBuddy’s funding round makes it clear that choppy market conditions and a move away from in-person demos did not fully dampen investor interest in YC’s March batch of startups, though it’s far too soon to tell if the group will perform as well as others, given how long it takes for startup winners to mature into exits.

Let’s talk code

BuildBuddy has foundations in how Google builds software. To get under the skin of what it does, I got ahold of co-founder Siggi Simonarson, who worked at the Mountain View-based search giant for a little over half a decade.

During that time he became accustomed to building software in the Google style, namely using its internal tool called Blaze to compile his code. It’s core to how developers at Google work, Simonarson told TechCrunch. “You write some code,” he added, “you run Blaze build; you write some code, you run Blaze test.”

What sets Blaze apart from other developer tools is that, “[as] opposed to your traditional language-specific build tools,” Simonarson said, it’s code agnostic, so you can use it to “build across [any] programming language.”

Google open-sourced the core of Blaze, which was named Bazel, an anagram of the original name.

So what does BuildBuddy do? In product terms, it’s building the pieces of Blaze that Google engineers have access to inside the company, for other developers using Bazel in their own work. In business terms, BuildBuddy wants to offer its service to individual developers for free, and charge companies that use its product.

Simonarson and his co-founder Tyler Williams started small, building a “results UI” tool that they shared with a Bazel user group. The members of that group picked up the tool, rapidly bringing it inside a number of sizable companies.

This origin story underlines something that BuildBuddy has that early-stage startups often lack, namely demonstrable enterprise market appetite. Lots of big companies use Bazel to help create software, and BuildBuddy found its way into a few of them early in its life.

Simply building a useful tool for a popular open-source project is no guarantee of success, however. Happily for BuildBuddy, early users helped it set direction for its product development, meaning that over the summer the startup added the features that its current users most wanted. 

Simonarson explained that after BuildBuddy was initially used by external developers, they demanded additional tools, like authentication. In the words of the co-founder, the response from the startup was “great!” The same went for a request for dashboarding, and other features.

Even better for the YC graduate, some of the features requested were the sort that it intends to charge for. That brings us back to money and the round itself.

Money

BuildBuddy closed its round in May. But like with most venture capital tales, it’s not a simple story.

According to Simonarson, his startup started raising the round during one of those awful early-COVID days when the stock market dropped by double-digit percentage points in a single trading session. 

BuildBuddy’s goal was to raise $1.5 million. Simonarson was worried at the time, telling TechCrunch that it was his first time fundraising, and that he wasn’t sure if his startup was going to “raise anything at all” in that climate. 

But the nascent company secured its first $100,000 check. And then a $300,000 check, over time managing to fill out its round.

So what happened that got the company from $1.5 million to just over $3 million? The investor that put in $300,000 wanted to put in another $2 million. The company talked them down to $1.5 million at a higher cap (BuildBuddy raised its round using a SAFE), and the deal was done at those terms.

The startup initially didn’t want to raise the extra cash, but Simonarson told TechCrunch that at the time it was not clear where the fundraising environment was heading; BuildBuddy raised back when startup layoffs were a leading story, and a return to high-cadence VC rounds was months away. 

So BuildBuddy wound up securing $3.15 million to support a current headcount of four. It intends to hire, naturally, lowering its comically long runway, and to keep building out its Bazel-focused service.

Picking a few names from the investor spreadsheet that BuildBuddy sent over — points for completeness to the startup — Y Combinator, Addition, Scribble and Village Global, among others, put capital into the round.

Dev tools are hot at the moment. Given that, as soon as BuildBuddy’s ARR starts to get moving, I expect we’ll hear from them again.

AWS adds natural language search service for business intelligence from its data sets

By Jonathan Shieber

When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016, the company wanted to provide product information and customer information for business users — not just developers.

At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.

Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.

“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”

That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.

“The way Q works: type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X?’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”
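Amazon has not detailed how Q parses questions, so the following is only a caricature of the problem’s shape: a hypothetical pattern table mapping one question form onto SQL. The real service relies on trained language models rather than regexes, and the table and column names here are invented:

```python
import re

# Hypothetical question pattern -> SQL template (illustrative only).
PATTERNS = [
    (re.compile(r"trailing twelve month sales of (\w+)", re.I),
     "SELECT SUM(sales) FROM orders WHERE product = '{0}' "
     "AND order_date >= CURRENT_DATE - INTERVAL '12 months'"),
]

def question_to_sql(question):
    """Return a SQL string for a recognized question, else None."""
    for pattern, template in PATTERNS:
        match = pattern.search(question)
        if match:
            return template.format(*match.groups())
    return None
```

The hard part Q solves is exactly what this sketch dodges: handling questions it has never seen, without hand-written patterns.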

It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”

AWS announces DevOps Guru to find operational issues automatically

By Ron Miller

At AWS re:Invent today, Andy Jassy announced DevOps Guru, a new tool for DevOps teams to help the operations side find issues that could be having an impact on application performance. Consider it the sibling of CodeGuru, the service the company announced last year to find issues in your code before you deploy.

It works in a similar fashion using machine learning to find issues on the operations side of the equation. “I’m excited to launch a new service today called Amazon DevOps Guru, which is a new service that uses machine learning to identify operational issues long before they impact customers,” Jassy said today.

The way it works is that it collects and analyzes data from application metrics, logs, and events “to identify behavior that deviates from normal operational patterns,” the company explained in the blog post announcing the new service.
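AWS hasn’t published the models behind DevOps Guru, but “behavior that deviates from normal operational patterns” can be illustrated with a plain z-score check on a single metric — a deliberately simple stand-in for whatever the service actually learns:

```python
import statistics

def deviates_from_baseline(history, latest, threshold=3.0):
    """Flag a metric sample whose z-score against a recent window
    exceeds the threshold. A toy anomaly detector, not DevOps Guru's."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold
```

A real system would also model seasonality and correlate anomalies across many metrics, which is where the machine learning earns its keep.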

This service essentially gives AWS a product that would be competing with companies like Sumo Logic, Datadog or Splunk by providing deep operational insight on problems that could be having an impact on your application, such as misconfigurations or resources that are over capacity.

When it finds a problem, the service can send an SMS, Slack message or other communication to the team and provides recommendations on how to fix the problem as quickly as possible.

What’s more, you pay for the data analyzed by the service, rather than a monthly fee. The company says this means that there is no upfront cost or commitment involved.

Voi, the European ‘micro mobility’ rental company, raises $160M additional equity and debt funding

By Steve O'Hear

Voi, the Stockholm-headquartered micro mobility company known for its e-scooter rentals, has raised $160 million in new funding. The round, about two thirds equity and one third debt, is led by The Raine Group.

Others participating include VNV Global, Balderton, Creandum, Project A, Inbox, and “sustainability-focused investor” Stena Sessan, along with individual backers with links to tech companies such as Delivery Hero, Klarna, iZettle, Zillow, Kry/Livi and Amazon.

Voi co-founder and CEO Fredrik Hjelm says the company — which competes with the likes of Bird, Tier, Bolt and Lime — has secured an “asset-backed” debt facility tied to the scooters and e-bikes it will have on its books in 2021.

The idea is that, having proven its model can be sustained, capital funnelled into the expense of purchasing the vehicles needed to expand the service can be secured against those assets, even if they will depreciate relatively quickly over time.

“I think, going forward, we will increase the debt ratio to equity,” he tells me. “What you wanna avoid, of course, as a startup, is dilution. We want as much debt as possible because we want cash to grow because we think we can have good ROI in capital. But the debt market is usually closed for startups, until they get to a very proven business model”.

Hjelm says the improved unit economics — which Voi has demonstrated by becoming operationally profitable on a group level for a few months this year — coupled with enough historical data, put the company in a position to understand “the payback” time on vehicles. This means a financing model similar to rental car companies, or other companies with assets that have a proven value, becomes more of a possibility.

Once it’s proven to work, he says in 6-9 months from now Voi hopes to be able to increase the debt facility. “Probably you will never write about Voi raising equity again,” Hjelm teases, likely in reference to my scooping one of the company’s earlier funding rounds.

By thinking about and funding the vehicles and the operations as two separate parts of the business, it also points to where the Voi founder believes the industry, and his company in particular, is heading. “I think the direction we’re going is, we’re becoming more and more of a tech-enabled infrastructure company,” he says, comparing it to a telco or other infrastructure plays.

This makes more sense when you consider that many cities around the world are holding tendering processes and only licensing two or three, and sometimes only a single, provider. And it’s here where Voi has also gained good traction over the last year — accelerated by the coronavirus pandemic, which has forced cities to open up micro mobility services faster in order to offer an alternative to packed trains and buses.

“With major new markets, including the U.K. opening up to e-scooter mobility solutions, Voi has become Europe’s preferred operator, winning over 2/3 of city license tenders across Europe, including recent wins in Birmingham, Liverpool, Bern and Cambridge,” says Voi.

A decision on which operators are awarded London’s tender is expected on December 14th. Up to three operators will be selected to operate trials, which are due to start in Spring 2021.

Voi says the new funding will be used to invest in technology platform development, fuel growth in current Voi markets and bring Voi’s latest e-scooter model — Voiager 4 — to more cities. In addition, Voi will use funds to further enhance the safety infrastructure of its platform, “the company’s number one priority,” says the company.

AWS launches SageMaker Data Wrangler, a new data preparation service for machine learning

By Frederic Lardinois

AWS launched a new service today, Amazon SageMaker Data Wrangler, that makes it easier for data scientists to prepare their data for machine learning training. In addition, the company is also launching SageMaker Feature Store, available in the SageMaker Studio, a new service that makes it easier to name, organize, find and share machine learning features.

AWS is also launching SageMaker Pipelines, a new service that’s integrated with the rest of the platform and that provides a CI/CD service for machine learning to create and automate workflows, as well as an audit trail for model components like training data and configurations.

As AWS CEO Andy Jassy pointed out in his keynote at the company’s re:Invent conference, data preparation remains a major challenge in the machine learning space. Users have to write their queries and the code to get the data from their data stores first, then write the queries to transform that data and combine features as necessary. All of that is work that doesn’t actually focus on building the models but on the infrastructure of building models.

Data Wrangler comes with over 300 pre-configured data transformations built in, which help users convert column types or impute missing data with mean or median values. There are also built-in visualization tools to help identify potential errors, as well as tools for checking for inconsistencies in the data and diagnosing them before the models are deployed.
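As a from-scratch illustration of one such transform (not Data Wrangler’s actual code), median imputation of a numeric column amounts to:

```python
import statistics

def impute_median(column):
    """Replace None entries with the median of the values present."""
    present = [v for v in column if v is not None]
    median = statistics.median(present)
    return [median if v is None else v for v in column]
```

The value of a managed service is doing this, and the other 300-odd transforms, at scale and inside one pipeline rather than in ad hoc scripts.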

All of these workflows can then be saved in a notebook or as a script so that teams can replicate them — and used in SageMaker Pipelines to automate the rest of the workflow, too.

 

It’s worth noting that there are quite a few startups working on the same problem. Wrangling machine learning data, after all, is one of the most common problems in the space. For the most part, though, companies still build their own tools, and as usual, that makes this area ripe for a managed service.

WaveOne aims to make video AI-native and turn streaming upside down

By Devin Coldewey

Video has worked the same way for a long, long time. And because of its unique qualities, video has been largely immune to the machine learning explosion upending industry after industry. WaveOne hopes to change that by taking the decades-old paradigm of video codecs and making them AI-powered — while somehow avoiding the pitfalls that would-be codec revolutionizers and “AI-powered” startups often fall into.

The startup has until recently limited itself to showing its results in papers and presentations, but with a recently raised $6.5M seed round, they are ready to move towards testing and deploying their actual product. It’s no niche: video compression may seem a bit in the weeds to some, but there’s no doubt it’s become one of the most important processes of the modern internet.

Here’s how it’s worked pretty much since the old days when digital video first became possible. Developers create a standard algorithm for compressing and decompressing video, a codec, which can easily be distributed and run on common computing platforms. This is stuff like MPEG-2, H.264, and that sort of thing. The hard work of compressing a video can be done by content providers and servers, while the comparatively lighter work of decompressing is done on the end user’s machines.

This approach is quite effective, and improvements to codecs (which allow more efficient compression) have led to the possibility of sites like YouTube. If videos were 10 times bigger, YouTube would never have been able to launch when it did. The other major change was beginning to rely on hardware acceleration of said codecs — your computer or GPU might have an actual chip in it with the codec baked in, ready to perform decompression tasks with far greater speed than an ordinary general-purpose CPU in a phone. Just one problem: when you get a new codec, you need new hardware.

But consider this: many new phones ship with a chip designed for running machine learning models, which like codecs can be accelerated, but unlike them the hardware is not bespoke for the model. So why aren’t we using this ML-optimized chip for video? Well, that’s exactly what WaveOne intends to do.

I should say that I initially spoke with WaveOne’s cofounders, CEO Lubomir Bourdev and CTO Oren Rippel, from a position of significant skepticism despite their impressive backgrounds. We’ve seen codec companies come and go, but the tech industry has coalesced around a handful of formats and standards that are revised in a painfully slow fashion. H.265, for instance, was introduced in 2013, but years afterwards its predecessor, H.264, was only beginning to achieve ubiquity. It’s more like the 3G, 4G, 5G system than version 7, version 7.1, etc. So smaller options, even superior ones that are free and open source, tend to get ground beneath the wheels of the industry-spanning standards.

This track record for codecs, plus the fact that startups like to describe practically everything as “AI-powered,” had me expecting something at best misguided, at worst scammy. But I was more than pleasantly surprised: In fact WaveOne is the kind of thing that seems obvious in retrospect and appears to have a first-mover advantage.

The first thing Rippel and Bourdev made clear was that AI actually has a role to play here. While codecs like H.265 aren’t dumb — they’re very advanced in many ways — they aren’t exactly smart, either. They can tell where to put more bits into encoding color or detail in a general sense, but they can’t, for instance, tell where there’s a face in the shot that should be getting extra love, or a sign or trees that can be done in a special way to save time.

But face and scene detection are practically solved problems in computer vision. Why shouldn’t a video codec understand that there is a face, then dedicate a proportionate amount of resources to it? It’s a perfectly good question. The answer is that the codecs aren’t flexible enough. They don’t take that kind of input. Maybe they will in H.266, whenever that comes out, and a couple years later it’ll be supported on high-end devices.

So how would you do it now? Well, by writing a video compression and decompression algorithm that runs on AI accelerators many phones and computers have or will have very soon, and integrating scene and object detection in it from the get-go. Like Krisp.ai understanding what a voice is and isolating it without hyper-complex spectrum analysis, AI can make determinations like that with visual data incredibly fast and pass that on to the actual video compression part.
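To make the idea concrete, here is what proportional bit allocation across detected regions might look like. The region names and importance scores are invented, and WaveOne’s learned allocation is certainly far more involved than this toy:

```python
def allocate_bits(regions, total_bits):
    """Split a frame's bit budget across regions in proportion to an
    importance score (e.g. from a face or object detector)."""
    total_importance = sum(score for _, score in regions)
    return {
        name: int(total_bits * score / total_importance)
        for name, score in regions
    }
```

A face scored 6.0 would receive six times the bits of a featureless sky scored 1.0 from the same budget, which is the whole point of content-aware compression.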

Image Credits: WaveOne

Variable and intelligent allocation of data means the compression process can be very efficient without sacrificing image quality. WaveOne claims to reduce the size of files by as much as half, with better gains in more complex scenes. When you’re serving videos hundreds of millions of times (or to a million people at once), even fractions of a percent add up, let alone gains of this size. Bandwidth doesn’t cost as much as it used to, but it still isn’t free.

Understanding the image (or being told) also lets the codec see what kind of content it is; a video call should prioritize faces if possible, of course, but a game streamer may want to prioritize small details, while animation requires yet another approach to minimize artifacts in its large single-color regions. This can all be done on the fly with an AI-powered compression scheme.

There are implications beyond consumer tech as well: A self-driving car, sending video between components or to a central server, could save time and improve video quality by focusing on what the autonomous system designates important — vehicles, pedestrians, animals — and not wasting time and bits on a featureless sky, trees in the distance, and so on.

Content-aware encoding and decoding is probably the most versatile and easy to grasp advantage WaveOne claims to offer, but Bourdev also noted that the method is much more resistant to disruption from bandwidth issues. It’s one of the other failings of traditional video codecs that missing a few bits can throw off the whole operation — that’s why you get frozen frames and glitches. But ML-based decoding can easily make a “best guess” based on whatever bits it has, so when your bandwidth is suddenly restricted you don’t freeze, just get a bit less detailed for the duration.

Example of different codecs compressing the same frame.

These benefits sound great, but as before the question is not “can we improve on the status quo?” (obviously we can) but “can we scale those improvements?”

“The road is littered with failed attempts to create cool new codecs,” admitted Bourdev. “Part of the reason for that is hardware acceleration; even if you came up with the best codec in the world, good luck if you don’t have a hardware accelerator that runs it. You don’t just need better algorithms, you need to be able to run them in a scalable way across a large variety of devices, on the edge and in the cloud.”

That’s why the special AI cores on the latest generation of devices are so important. This is hardware acceleration that can be adapted in milliseconds to a new purpose. And WaveOne happens to have been working for years on video-focused machine learning that will run on those cores, doing the work that H.26X accelerators have been doing for years, but faster and with far more flexibility.

Of course, there’s still the question of “standards.” Is it very likely that anyone is going to sign on to a single company’s proprietary video compression methods? Well, someone’s got to do it! After all, standards don’t come etched on stone tablets. And as Bourdev and Rippel explained, they actually are using standards — just not the way we’ve come to think of them.

Before, a “standard” in video meant adhering to a rigidly defined software method so that your app or device could work with standards-compatible video efficiently and correctly. But that’s not the only kind of standard. Instead of being a soup-to-nuts method, WaveOne is an implementation that adheres to standards on the ML and deployment side.

They’re building the platform to be compatible with all the major ML development and distribution frameworks like TensorFlow, ONNX, Apple’s CoreML, and others. Meanwhile the models actually developed for encoding and decoding video will run just like any other accelerated software on edge or cloud devices: deploy it on AWS or Azure, run it locally with ARM or Intel compute modules, and so on.

It feels like WaveOne may be onto something that ticks all the boxes of a major B2B play: it invisibly improves things for customers, runs on existing or upcoming hardware without modification, saves costs immediately (potentially, anyhow) but can be invested in to add value.

Perhaps that’s why they managed to attract such a large seed round: $6.5 million, led by Khosla Ventures, with $1M each from Vela Partners and Incubate Fund, plus $650K from Omega Venture Partners and $350K from Blue Ivy.

Right now WaveOne is sort of in a pre-alpha stage, having demonstrated the technology satisfactorily but not built a full-scale product. The seed round, Rippel said, was to de-risk the technology, and while there’s still lots of R&D yet to be done, they’ve proven that the core offering works — building the infrastructure and API layers comes next and amounts to a totally different phase for the company. Even so, he said, they hope to get testing done and line up a few customers before they raise more money.

The future of the video industry may not look a lot like the last couple decades, and that could be a very good thing. No doubt we’ll be hearing more from WaveOne as it migrates from lab to product.

AWS announces high resource Lambda functions, container image support & millisecond billing

By Ron Miller

AWS announced some big updates to its Lambda serverless function service today. For starters, functions can now be configured with up to 10 GB of memory and 6 vCPUs (virtual CPUs). This will allow developers building more compute-intensive functions to get the resources they need.

“Starting today, you can allocate up to 10 GB of memory to a Lambda function. This is more than a 3x increase compared to previous limits. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. That means you can now have access to up to 6 vCPUs in each execution environment,” the company wrote in a blog post announcing the new capabilities.
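AWS only states the linear rule and its endpoints, so treat this helper as a sketch that restates the published limits (10 GB ceiling, 6 vCPUs) rather than Lambda’s exact internal mapping:

```python
def lambda_vcpus(memory_gb, max_memory_gb=10.0, max_vcpus=6.0):
    """CPU scales linearly with configured memory, reaching
    6 vCPUs at the 10 GB memory ceiling."""
    return max_vcpus * min(memory_gb, max_memory_gb) / max_memory_gb
```

So a function configured with 5 GB would land around 3 vCPUs, and memory becomes the single knob for both resources.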

Serverless computing doesn’t mean there are no servers. It means that developers no longer have to worry about the compute, storage and memory requirements because the cloud provider — in this case, AWS — takes care of it for them, freeing them up to just code the application instead of deploying resources.

Today’s announcement, combined with support for the AVX2 instruction set, means that developers can use this approach with more sophisticated technologies like machine learning, gaming and even high-performance computing.

One of the beauties of this approach is that in theory you can save money because you aren’t paying for resources you aren’t using. You are only paying each time the application requires a set of resources and no more. To make this an even bigger advantage, the company also announced a new pricing approach in a blog post: “Starting today, we are rounding up duration to the nearest millisecond with no minimum execution time.”
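The billing change is simple arithmetic; Lambda previously rounded duration up to the nearest 100 ms, so the saving shows up on short invocations:

```python
import math

def billed_ms(duration_ms, increment_ms=1):
    """Round execution time up to the nearest billing increment.
    The new rule uses a 1 ms increment with no minimum execution time;
    the old rule rounded up to the nearest 100 ms."""
    return math.ceil(duration_ms / increment_ms) * increment_ms

billed_ms(42.3)       # 43 ms under the new rule
billed_ms(42.3, 100)  # 100 ms under the old rule
```

For a function that typically finishes in tens of milliseconds, that difference compounds across millions of invocations.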

Finally, the company also announced container image support for Lambda functions. “To help you with that, you can now package and deploy Lambda functions as container images of up to 10 GB in size. In this way, you can also easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data intensive workloads,” the company wrote in a blog post announcing the new capability.

All of these announcements in combination mean that you can now use Lambda functions for more intensive operations than you could previously, and the new billing approach should lower your overall spending as you make that transition to the new capabilities.

AWS launches Glue Elastic Views to make it easier to move data from one purpose-built data store to another

By Jonathan Shieber

AWS has launched a new tool, Glue Elastic Views, to let developers move data from one store to another.

At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.

The new service can take data from disparate silos and bring it together. The ETL service allows programmers to write a small amount of SQL code to create a materialized view that can move data from one source data store to another.

For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch, setting up a materialized view to copy that data — all the while managing dependencies. That means if data changes in the source data lake, then it will automatically be updated in the other data stores where the data has been relocated, Jassy said.
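The materialized-view pattern Jassy describes — a change at the source automatically propagated to a transformed copy — can be sketched with in-memory toy stores. These classes are illustrative, not the actual AWS APIs:

```python
class MaterializedView:
    """Keeps a transformed copy of a source store in sync."""
    def __init__(self, source, transform):
        self.transform = transform
        self.data = {}
        source.subscribe(self)

    def on_change(self, key, value):
        self.data[key] = self.transform(value)

class SourceStore:
    """A toy key-value store that notifies subscribed views on writes."""
    def __init__(self):
        self.subscribers = []
        self.data = {}

    def subscribe(self, view):
        self.subscribers.append(view)

    def put(self, key, value):
        self.data[key] = value
        for view in self.subscribers:
            view.on_change(key, value)
```

Glue Elastic Views plays the role of the subscription and propagation machinery, with SQL defining the transform and real stores like DynamoDB and Elasticsearch on either end.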

“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.

AWS goes after Microsoft’s SQL Server with Babelfish for Aurora PostgreSQL

By Frederic Lardinois

AWS today announced a new database product that is clearly meant to go after Microsoft’s SQL Server and make it easier — and cheaper — for SQL Server users to migrate to the AWS cloud. The new service is Babelfish for Aurora PostgreSQL. The tagline AWS CEO Andy Jassy used for this service in his re:Invent keynote today is probably telling: “Stop paying for SQL Server licenses you don’t need.” And to show how serious it is about this, the company is even open-sourcing the tool.

What Babelfish does is provide a translation layer for SQL Server’s proprietary SQL dialect (T-SQL) and communications protocol so that businesses can switch to AWS’ Aurora relational database at will (though they’ll still have to migrate their existing data). Beyond the dialect itself, it translates SQL commands, cursors, catalog views, data types, triggers, stored procedures and functions.

The promise here is that companies won’t have to replace their database drivers or rewrite and verify their database requests to make this transition.

“We believe Babelfish stands out because it’s not another migration service, as useful as those can be. Babelfish enables PostgreSQL to understand database requests—both the command and the protocol—from applications written for Microsoft SQL Server without changing libraries, database schema, or SQL statements,” AWS’s Matt Asay writes in today’s announcement. “This means much faster ‘migrations’ with minimal developer effort. It’s also centered on ‘correctness,’ meaning applications designed to use SQL Server functionality will behave the same on PostgreSQL as they would on SQL Server.”

PostgreSQL, AWS rightly points out, is one of the most popular open-source databases in the market today. A lot of companies want to migrate their relational databases to it — or at least use it in conjunction with their existing databases. This new service is going to make that significantly easier.
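To illustrate the flavor of what a dialect translation layer does — this is a toy Python sketch, not Babelfish’s actual implementation or rule set — consider rewriting a few common T-SQL idioms into their PostgreSQL equivalents:

```python
import re

# A handful of illustrative T-SQL -> PostgreSQL rewrites. Real translation
# requires a proper SQL parser; naive regexes are shown here only to convey
# the idea of dialect mapping.
REWRITES = [
    (re.compile(r"\bGETDATE\(\)", re.IGNORECASE), "now()"),
    (re.compile(r"\bISNULL\(", re.IGNORECASE), "coalesce("),
    # SELECT TOP n ...  ->  SELECT ... LIMIT n
    (re.compile(r"\bSELECT\s+TOP\s+(\d+)\s+(.*)$", re.IGNORECASE | re.DOTALL),
     lambda m: f"SELECT {m.group(2).rstrip()} LIMIT {m.group(1)}"),
]

def tsql_to_postgres(query: str) -> str:
    for pattern, replacement in REWRITES:
        query = pattern.sub(replacement, query)
    return query

print(tsql_to_postgres("SELECT TOP 5 name, GETDATE() FROM users"))
# → SELECT name, now() FROM users LIMIT 5
```

Babelfish goes far beyond string rewriting, of course: per AWS, it also understands SQL Server’s wire protocol, so existing drivers and libraries keep working unchanged.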

The open-source Babelfish project will launch in 2021 and will be available on GitHub under the Apache 2.0 license.

“It’s still true that the overwhelming majority of relational databases are on-premise,” AWS CEO Andy Jassy said. “Customers are fed up with and sick of incumbents.” As is tradition at re:Invent, Jassy also got a few swipes at Oracle into his keynote, but the real target of the products the company is launching in the database area today is clearly Microsoft.

Floww raises $6.7M for its data-driven marketplace matching founders with investors, based on merit

By Mike Butcher

Floww – a data-driven marketplace designed to allow founders to pitch investors, with the whole investment relationship managed online – says it has raised $6.7M (£5M) to date in seed funding from angels and family offices. Investors include Ramon Mendes De Leon, Duncan Simpson Craib, Angus Davidson, Stephane Delacote and Pip Baker (Google’s Head of Fintech UK), plus multiple family offices. The cash will be used to build out the platform, which is designed to give startups access to more than 500 VCs, accelerators and angel networks.

The team consists of Martijn De Wever, founder and CEO of London-based VC Force Over Mass; Lee Fasciani, co-founder of Territory Projects (the firm behind film graphics and design for Guardians of the Galaxy and Blade Runner 2049); and CTO Alex Pilsworth, previously of various fintech startups.

Having made over 160 investments himself, De Wever says he recognized the need for a platform connecting investors and startups based on merit, clean data, and transparency, rather than a system built on “warm introductions” which can have inherent cultural and even racial biases.

Floww’s idea is that it showcases startups based on merit only, allowing founders to raise capital by providing investors with data and transparency. Startups are given a suite of tools and materials to get started, from cap table templates to ‘How To’ guides. Founders can then ‘drag and drop’ their investor documents in any format. Floww’s team of accountants then cross-checks the data for errors and processes key performance metrics. A startup’s digital profile includes dynamic charts and tables, allowing prospective investors to see the company’s business potential.

Floww charges a monthly fee to VCs, accelerators, family offices and PE firms. Startups get free access to the platform, with a premium tier that lets them contact and send their deal to multiple VCs.

Floww’s pitch is that VCs can, in turn, manage deal-sourcing and CRM, as well as reporting to their investors and LPs. Quite a claim, given that VCs to date have handled this kind of thing in-house. However, Floww claims to have processed 3,000 startups and says it is rolling out to over 500 VCs.

In a statement, De Wever said: “In an age of virtual meetings and connections, the need for coffee meetings on Sand Hill Road or Mayfair is gone. What we need now are global connections, allowing VCs to engage in merit-based investing using data and metrics.” He says the era of the Coronavirus pandemic means many deals will have to be sourced remotely now, so “the time for a platform like this is now.”

AngelList is perhaps its closest competitor from the startup perspective. And the VC application incorporates the kind of functionality seen in Affinity, Airtable, eFront and DocSend. But AngelList doesn’t provide data or metrics.

AWS brings ECS, EKS services to the data center, open sources EKS

By Ron Miller

Today at AWS re:Invent, Andy Jassy talked a lot about how companies are making a big push to the cloud, but today’s container-focused announcements gave a big nod to the data center as the company announced ECS Anywhere and EKS Anywhere, both designed to let you run these services on-premises as well as in the cloud.

These two services, ECS for generalized container orchestration and EKS for Kubernetes, will let customers use these popular AWS services on premises. Jassy said that some customers still want the same tools they use in the cloud on prem, and this is designed to give it to them.

Speaking of ECS, he said: “I still have a lot of my containers that I need to run on premises as I’m making this transition to the cloud, and [these] people really want it to have the same management and deployment mechanisms that they have in AWS also on premises and customers have asked us to work on this. And so I’m excited to announce two new things to you. The first is the launch, or the announcement of Amazon ECS Anywhere, which lets you run ECS in your own data center,” he told the re:Invent audience.

Image Credits: AWS

He said it gives you the same AWS APIs and cluster configuration management pieces. This will work the same for EKS, allowing a single management methodology regardless of where you are using the service.

While it was at it, the company also announced it was open-sourcing EKS, its managed Kubernetes service. The idea behind these moves is to give customers as much flexibility as possible, while recognizing what Microsoft, IBM and Google have been saying: that we live in a multi-cloud, hybrid world, and people aren’t moving everything to the cloud right away.

In fact, in his opening Jassy stated that right now in 2020, just 4% of worldwide IT spend is on the cloud. That means there’s money to be made selling services on premises, and that’s what these services will do.

Find out how we’re working toward living and working in space at TC Sessions: Space 2020

By Darrell Etherington

The idea of people going to live and work in space, outside of the extremely unique case of the International Space Station, has long been the strict domain of science fiction. That’s changing fast, however, with public space agencies, private companies and the scientific community all looking at ways of making it safe for people to live and work in space for longer periods – and broadening accessibility of space to people who don’t necessarily have the training and discipline of dedicated astronauts.

At TC Sessions: Space on December 16 & 17, we’ll be talking to some of the people who want to make living and working in space a reality, and who are paving the way for the future of both commercial and scientific human space activity. Those efforts range from designing the systems people will need for staying safe and comfortable on long spaceflights, to ideating and developing the technologies needed for long-term stays on the surface of worlds that are far less hospitable to life than Earth, like the Moon and Mars.

We’re thrilled to have Janet Kavandi from Sierra Nevada Corporation, Melodie Yashar from SEArch+, Nujoud Merancy from NASA and Axiom’s Amir Blachman joining us at TC Sessions: Space on December 16 & 17 to chat about the future of human space exploration and commercial activity.

Janet Kavandi is Executive Vice President of Space Systems at the Sierra Nevada Corporation. She was selected as a NASA astronaut in 1994 as a member of the fifteenth class of U.S. astronauts. She completed three space flights in which she supported space station payload integration, capsule communications and robotics. She went on to serve as director of flight crew operations at NASA’s Johnson Space Center and then as director of NASA’s Glenn Research Center, where she directed cutting-edge research on aerospace and aeronautical propulsion, power and communication technologies. She retired from NASA in 2019 after 25 years of service.

Melodie Yashar is a design architect, technologist, and researcher. She is co-founder of Space Exploration Architecture (SEArch+), a group developing human-supporting concepts for space exploration. SEArch+ won top prize in both of NASA’s design solicitations for a Mars habitat within the 3D-Printed Habitat Challenge. The success of the team’s work in NASA’s Centennial Challenge led to consultancy roles and collaborations with UTAS/Collins Aerospace, NASA Langley, ICON, NASA Marshall, and others.

Nujoud Merancy is a systems engineer with an extensive background in human spaceflight and spacecraft at NASA’s Johnson Space Center. She is currently Chief of the Exploration Mission Planning Office, responsible for the team of engineers and analysts designing, developing, and integrating NASA’s human spaceflight portfolio beyond low Earth orbit. These missions include planning for the Orion Multi-Purpose Crew Vehicle, Space Launch System, Exploration Ground Systems, Gateway, and Human Landing System.

Amir Blachman is Chief Business Officer at Axiom, a pioneering company in the realm of commercializing space and building the first generation of private commercial space stations. He spent most of his career investing in and leading early stage companies. Before joining Axiom as the company’s first employee, he managed a syndicate of 120 space investors in 11 countries. Through this syndicate, he funded lunar landers, communication networks, Earth imaging satellites, antennae and exploration technologies.

To hear from these experts, you’ll need to pick up your ticket to TC Sessions: Space, which includes video on demand for all sessions, so you won’t miss a minute of expert insight, tips and trend spotting from the top founders, investors, technologists, government officials and military minds across the public, private and defense sectors. There are even discounts for groups, students and military/government officials.

You’ll find panel discussions, interviews, fireside chats and interactive Q&As on a range of topics: mineral exploration, global mapping of the Earth from space, deep tech software, defense capabilities, 3D-printed rockets and the future of agriculture and food technology. Don’t miss the breakout sessions dedicated to accessing grant money. Explore the event agenda now and get a jump on organizing your schedule.
