
Analytics as a service: Why more enterprises should consider outsourcing

By Ram Iyer
Joey Lei Contributor
Joey Lei is director of service management at Synoptek. With more than 14 years of experience in engineering and product management, Lei is responsible for the development and growth of the Synoptek service portfolio and solution development with strategic technology alliance partners.
Debbie Zelten Contributor
Debbie Zelten (SAFe(R) 4 Agilist, SAFe Scrum Master, CSM, LSSGB, PMI-ACP) is the director of application development and business intelligence at Synoptek. She has over 20 years of experience in implementing software and data analytics solutions for companies of all sizes.

With an increasing number of enterprise systems, growing teams, a proliferating web presence and multiple digital initiatives, companies of all sizes are creating enormous volumes of data every day. This data contains excellent business insights and immense opportunities, but its sheer volume makes it impossible for companies to consistently derive actionable insights from it.

According to Verified Market Research, the analytics-as-a-service (AaaS) market is expected to grow to $101.29 billion by 2026. Organizations that have not started on their analytics journey or are spending scarce data engineer resources to resolve issues with analytics implementations are not identifying actionable data insights. Through AaaS, managed services providers (MSPs) can help organizations get started on their analytics journey immediately without extravagant capital investment.

MSPs can take ownership of the company’s immediate data analytics needs, resolve ongoing challenges and integrate new data sources to manage dashboard visualizations, reporting and predictive modeling — enabling companies to make data-driven decisions every day.

AaaS could come bundled with multiple business-intelligence-related services. Primarily, the service includes (1) services for data warehouses; (2) services for visualizations and reports; and (3) services for predictive analytics, artificial intelligence (AI) and machine learning (ML). When a company partners with an MSP for analytics as a service, it can tap into business intelligence easily, instantly and at a lower cost of ownership than doing it in-house. This empowers the enterprise to focus on delivering better customer experiences, make decisions unencumbered and build data-driven strategies.

Organizations that have not started on their analytics journey or are spending scarce data engineer resources to resolve issues with analytics implementations are not identifying actionable data insights.

In today’s world, where customers value experiences over transactions, AaaS helps businesses dig deeper into their customers’ psyche and tap insights to build long-term winning strategies. It also enables enterprises to forecast and predict business trends from their data and allows employees at every level to make informed decisions.

Heirlume raises $1.38M to remove the barriers of trademark registration for small businesses

By Darrell Etherington

Platforms like Shopify, Stripe and WordPress have done a lot to make essential business-building tools — running storefronts, accepting payments, building websites — accessible to businesses with even the most modest budgets. But some key aspects of setting up a company remain expensive, time-consuming affairs that can be cost-prohibitive for small businesses — and that, if ignored, can result in the failure of a business before it even really gets started.

Trademark registration is one such concern, and Toronto-based startup Heirlume just raised $1.7 million CAD (~$1.38 million) to address the problem with a machine-powered trademark registration platform that turns the process into a self-serve affair that won’t break the budget. Its AI-based trademark search will flag terms that might run afoul of existing trademarks in the U.S. and Canada, even when official government trademark search tools — and even top-tier legal firms — might not.

Heirlume’s core focus is on leveling the playing field for small business owners, who have typically been significantly outmatched when it comes to any trademark conflicts.

“I’m a senior level IP lawyer focused in trademarks, and had practiced in a traditional model, boutique firm of my own for over a decade serving big clients, and small clients,” explained Heirlume co-founder Julie MacDonnell in an interview. “So providing big multinationals with a lot of brand strategy, and in-house legal, and then mainly serving small business clients when they were dealing with a cease-and-desist, or an infringement issue. It’s really those clients that have my heart: It’s incredibly difficult to have a small business owner literally crying tears on the phone with you, because they just lost their brand or their business overnight. And there was nothing I could do to help because the law just simply wasn’t on their side, because they had neglected to register their trademarks to own them.”

In part, there’s a lack of awareness around what it takes to actually register and own a trademark, MacDonnell says. Many entrepreneurs just starting out seek out a domain name as a first step, for instance, and some will fork over significant sums to register these domains. What they don’t realize, however, is that this is essentially a rental, and if you don’t have the trademark to protect that domain, the actual trademark owner can potentially take it away down the road. But even if business owners do realize that a trademark should be their first stop, the barriers to actually securing one are steep.

“There was an enormous, insurmountable barrier when it came to brand protection for those business owners,” she said. “And it just isn’t fair. Every other business service, generally, a small business owner can access. Incorporating a company or even insurance, for example — owning and buying insurance for your business is somewhat affordable and accessible. But brand ownership is not.”

Heirlume brings the cost of trademark registration down from many thousands of dollars to just under $600 for the first trademark, and only $200 for each additional one after that. The startup is also offering a small-business-friendly ‘buy now, pay later’ option supported by Clearbanc, which means that even businesses starting on a shoestring can take the step of protecting their brand at the outset.

In its early days, Heirlume is also offering its core trademark search feature for free. That provides a trademark search engine that works across both U.S. and Canadian government databases and can not only tell you whether your desired trademark is available or already held, but also whether it’s likely to be successfully obtained, given other conflicts that native trademark database search portals ignore entirely.

Heirlume search tool comparison

Image Credits: Heirlume

Heirlume uses machine learning to identify these potential conflicts, which not only helps users searching for their trademarks, but also greatly decreases the workload behind the scenes, helping the company lower costs and pass the benefits of those improved margins on to its clients. That’s how it can achieve better results than even hand-tailored applications from traditional firms, while doing so at scale and at reduced cost.
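To make the idea concrete, here is a toy sketch of similarity-based conflict flagging. It uses simple string similarity via Python’s difflib; Heirlume’s actual system applies machine learning over government trademark databases, so everything below (the marks, the threshold, the scoring) is purely illustrative.

```python
from difflib import SequenceMatcher

# Hypothetical registry entries; a real search would query the U.S. and
# Canadian government trademark databases.
existing_marks = ["HEIRLOOM FOODS", "AIRLUME CANDLES", "HALLMARK"]

def conflict_candidates(proposed, threshold=0.5):
    """Flag existing marks whose spelling is close to the proposed mark."""
    scored = [(mark, SequenceMatcher(None, proposed.upper(), mark).ratio())
              for mark in existing_marks]
    return [(mark, round(score, 2)) for mark, score in scored if score >= threshold]

print(conflict_candidates("HEIRLUME"))
# [('HEIRLOOM FOODS', 0.55), ('AIRLUME CANDLES', 0.52)] -- near-misses that
# an exact-match search would never surface.
```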

Another advantage of using machine-powered data processing and filing is that on the government trademark office side, the systems are looking for highly organized, curated data sets that are difficult for even trained people to get consistently right. Human error in just data entry can cause massive backlogs, MacDonnell notes, even resulting in entire applications having to be tossed and started over from scratch.

“There are all sorts of datasets for those [trademark requirement] parameters,” she said. “Essentially, we synthesize all of that, and the goal through machine learning is to make sure that applications are utterly compliant with government rules. We actually have a senior-level trademark examiner who came to work for us, very excited that we were solving the problems causing backlogs within the government. She said that if Heirlume can get to a point where the applications submitted are perfect, there will be no backlog with the government.”

Improving efficiency within the trademark registration bodies means one less point of friction for small business owners when they set out to establish their company, which means more economic activity and upside overall. MacDonnell ultimately hopes that Heirlume can help reduce friction to the point where trademark ownership is at the forefront of the business process, even before domain registration. Heirlume has a partnership with Google Domains to that end, which will eventually see indication of whether a domain name is likely to be trademarkable included in Google Domain search results.

This initial seed funding includes participation from Backbone Angels, as well as the Future Capital collective, Angels of Many and MaRS IAF, along with angel investors including Daniel Debow, Sid Lee’s Bertrand Cesvet and more. MacDonnell notes that just as their goal was to bring more access and equity to small business owners when it comes to trademark protection, the startup was also very intentional in building its team and its cap table. MacDonnell, along with co-founders CTO Sarah Guest and Dave McDonnell, aim to build the largest tech company with a majority female-identifying technology team. Its investor make-up includes 65% female-identifying or underrepresented investors, and MacDonnell says that was a very intentional choice that extended the time of the raise, and even led to turning down interest from some leading Silicon Valley firms.

“We want underrepresented founders to be funded, and the best way to ensure that change is to empower underrepresented investors,” she said. “I think that we all have a responsibility to actually do something. We’re all using hashtags right now, and hashtags are not enough […] Our CTO is female, and she’s often been the only female person in the room. We’ve committed to ensuring that women in tech are no longer the only person in the room.”

Computer vision inches toward ‘common sense’ with Facebook’s latest research

By Devin Coldewey

Machine learning is capable of doing all sorts of things as long as you have the data to teach it how. That’s not always easy, and researchers are always looking for a way to add a bit of “common sense” to AI so you don’t have to show it 500 pictures of a cat before it gets it. Facebook’s newest research takes a big step toward reducing the data bottleneck.

The company’s formidable AI research division has been working for years now on how to advance and scale things like advanced computer vision algorithms, and has made steady progress, generally shared with the rest of the research community. One interesting development Facebook has pursued in particular is what’s called “semi-supervised learning.”

Generally when you think of training an AI, you think of something like the aforementioned 500 pictures of cats — images that have been selected and labeled (which can mean outlining the cat, putting a box around the cat or just saying there’s a cat in there somewhere) so that the machine learning system can put together an algorithm to automate the process of cat recognition. Naturally if you want to do dogs or horses, you need 500 dog pictures, 500 horse pictures, etc. — it scales linearly, which is a word you never want to see in tech.

Semi-supervised learning, related to “unsupervised” learning, involves figuring out important parts of a data set without any labeled data at all. It doesn’t just go wild; there’s still structure. For instance, imagine you give the system a thousand sentences to study, then show it 10 more that have several of the words missing. The system could probably do a decent job filling in the blanks just based on what it’s seen in the previous thousand. But that’s not so easy to do with images and video — they aren’t as straightforward or predictable.
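To make the fill-in-the-blank intuition concrete, here is a toy sketch in Python. It simply counts which words follow which in an unlabeled corpus and uses those counts to guess a missing word. It is a bigram counter, not a real language model, and every sentence in it is invented for illustration.

```python
from collections import Counter, defaultdict

# Unlabeled "training" sentences: no annotations, just raw text.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased the dog",
]

# Learn structure from the unlabeled data: which word tends to follow which.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def fill_blank(prev_word):
    """Guess a masked word from the word before it, using corpus statistics."""
    options = follows.get(prev_word)
    return options.most_common(1)[0][0] if options else "<unk>"

print(fill_blank("the"))  # "dog": the word most often seen after "the"
```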

But Facebook researchers have shown that while it may not be easy, it’s possible and in fact very effective. The DINO system (which stands rather unconvincingly for “DIstillation of knowledge with NO labels”) is capable of learning to find objects of interest in videos of people, animals and objects quite well without any labeled data whatsoever.

Animation showing four videos and the AI interpretation of the objects in them.

Image Credits: Facebook

It does this by considering the video not as a sequence of images to be analyzed one by one in order, but as a complex, interrelated set, like the difference between “a series of words” and “a sentence.” By attending to the middle and the end of the video as well as the beginning, the agent can get a sense of things like “an object with this general shape goes from left to right.” That information feeds into other knowledge, like when an object on the right overlaps with the first one, the system knows they’re not the same thing, just touching in those frames. And that knowledge in turn can be applied to other situations. In other words, it develops a basic sense of visual meaning, and does so with remarkably little training on new objects.
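The “distillation with no labels” in DINO’s name refers to a student network learning to match a teacher network that is itself just a moving average of the student, with both seeing different augmented views of the same unlabeled input. Below is a minimal PyTorch-style sketch of that loop. The tiny linear “backbone,” the noise-based augmentations and all hyperparameters are stand-ins for illustration, not Facebook’s implementation (which, among other things, also centers the teacher outputs to avoid collapse).

```python
import torch
import torch.nn.functional as F

# Self-distillation with no labels, in miniature: a student learns to match
# a teacher that is just an exponential moving average (EMA) of the student.
student = torch.nn.Linear(128, 64)   # stand-in for a real vision backbone
teacher = torch.nn.Linear(128, 64)
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad = False          # the teacher is never trained by gradients

opt = torch.optim.SGD(student.parameters(), lr=0.1)
momentum = 0.996                     # EMA coefficient for the teacher update

for _ in range(100):
    x = torch.randn(32, 128)                         # unlabeled batch
    v1 = x + 0.1 * torch.randn_like(x)               # "augmented" view 1
    v2 = x + 0.1 * torch.randn_like(x)               # "augmented" view 2
    targets = F.softmax(teacher(v1) / 0.04, dim=-1).detach()  # sharp teacher output
    log_preds = F.log_softmax(student(v2) / 0.1, dim=-1)
    loss = -(targets * log_preds).sum(dim=-1).mean() # cross-entropy vs. teacher
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                            # teacher trails the student
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(momentum).add_(sp, alpha=1 - momentum)
```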

This results in a computer vision system that’s not only effective — it performs well compared with traditionally trained systems — but more relatable and explainable. For instance, while an AI that has been trained with 500 dog pictures and 500 cat pictures will recognize both, it won’t really have any idea that they’re similar in any way. But DINO — although it couldn’t be specific — gets that they’re similar visually to one another, more so anyway than they are to cars, and that metadata and context is visible in its memory. Dogs and cats are “closer” in its sort of digital cognitive space than dogs and mountains. You can see those concepts as little blobs here — see how those of a type stick together:

Animated diagram showing how concepts in the machine learning model stay close together.

Image Credits: Facebook
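That notion of conceptual “closeness” is literal: each image is mapped to a vector, and similar things end up with similar vectors. A toy illustration with made-up numbers (real embeddings come from the trained model and have hundreds of dimensions):

```python
import numpy as np

# Invented 3-D "embeddings" for illustration; real ones are model-derived.
emb = {
    "dog":      np.array([0.90, 0.80, 0.10]),
    "cat":      np.array([0.85, 0.75, 0.20]),
    "mountain": np.array([0.10, 0.20, 0.95]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["dog"], emb["cat"]))       # ~0.99: close in concept space
print(cosine(emb["dog"], emb["mountain"]))  # ~0.29: far apart
```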

This has its own benefits, of a technical sort we won’t get into here. If you’re curious, there’s more detail in the papers linked in Facebook’s blog post.

There’s also an adjacent research project, a training method called PAWS, which further reduces the need for labeled data. PAWS combines some of the ideas of semi-supervised learning with the more traditional supervised method, essentially giving the training a boost by letting it learn from both the labeled and unlabeled data.
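A hedged sketch of what “learning from both” can look like in code: a standard supervised loss on the small labeled batch, plus a consistency term pushing two augmented views of unlabeled data toward the same prediction. This mirrors the spirit of PAWS rather than its exact method (PAWS assigns pseudo-labels to unlabeled views by comparing them against labeled “support” samples); the model and numbers below are placeholders.

```python
import torch
import torch.nn.functional as F

# Supervised loss on a small labeled batch plus a consistency loss that pulls
# two augmented views of unlabeled data toward the same prediction.
model = torch.nn.Linear(128, 10)     # placeholder classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(100):
    xl = torch.randn(16, 128)                        # labeled inputs
    yl = torch.randint(0, 10, (16,))                 # their labels
    xu = torch.randn(64, 128)                        # unlabeled inputs
    v1 = xu + 0.1 * torch.randn_like(xu)             # view 1
    v2 = xu + 0.1 * torch.randn_like(xu)             # view 2

    supervised = F.cross_entropy(model(xl), yl)
    log_p1 = F.log_softmax(model(v1), dim=-1)
    p2 = F.softmax(model(v2), dim=-1).detach()
    consistency = F.kl_div(log_p1, p2, reduction="batchmean")

    loss = supervised + consistency                  # learn from both signals
    opt.zero_grad()
    loss.backward()
    opt.step()
```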

Facebook of course needs good and fast image analysis for its many user-facing (and secret) image-related products, but these general advances to the computer vision world will no doubt be welcomed by the developer community for other purposes.


The health data transparency movement is birthing a new generation of startups

By Annie Siebert
Ariel Katz Contributor
Ariel Katz is the founder and CEO of H1, a global healthcare platform that helps life sciences companies, hospitals, academic medical centers and health systems connect with providers, find clinical research, locate industry experts and benchmark their organization.

In the early 2000s, Jeff Bezos gave a seminal TED Talk titled “The Electricity Metaphor for the Web’s Future.” In it, he argued that the internet will enable innovation on the same scale that electricity did.

We are at a similar inflection point in healthcare, with the recent movement toward data transparency birthing a new generation of innovation and startups.

Those who follow the space closely may have noticed that there are twin struggles taking place: a push for more transparency on provider and payer data, including anonymous patient data, and another for strict privacy protection for personal patient data. What’s the main difference?

This sector is still somewhat nascent — we are in the first wave of innovation, with much more to come.

Anonymized data is much more freely available, while personal data is being locked even tighter (as it should be) due to regulations like GDPR, CCPA and their equivalents around the world.

The former trend is enabling a host of new vendors and services that will ultimately make healthcare better and more transparent for all of us.

These new companies could not have existed five years ago. The Affordable Care Act was the first step toward making anonymized data more available. It required healthcare institutions (such as hospitals and healthcare systems) to publish data on costs and outcomes. This included the release of detailed data on providers.

Later legislation required biotech and pharma companies to disclose monies paid to research partners. And every physician in the U.S. is now required to have a National Provider Identifier (NPI), which places them in a comprehensive public database of providers.

All of this allowed the creation of new types of companies that give both patients and providers more control over their data. Here are some key examples of how.

Allowing patients to access all their own health data in one place

This is a key capability stemming from patients’ newly found access to health data. Think of how often, as a patient, you find your providers aren’t aware of a treatment or test you’ve had elsewhere — and how often you end up repeating a test because the provider has no record of the one conducted elsewhere.

As concerns rise over forest carbon offsets, Pachama’s verified offset marketplace gets $15 million

By Jonathan Shieber

Restoring and preserving the world’s forests has long been considered one of the easiest and lowest-cost ways to reduce the amount of greenhouse gases in the atmosphere.

It’s by far the most popular method for corporations looking to take an easy first step on the long road to decarbonizing or offsetting their industrial operations. But in recent months the efficacy, validity, and reliability of a number of forest offsets have been called into question thanks to some blockbuster reporting from Bloomberg.

It’s against this uncertain backdrop that investors are coming in to shore up financing for Pachama, a company building a marketplace for forest carbon credits that it says is more transparent and verifiable thanks to its use of satellite imagery and machine learning technologies.

That pitch has brought in $15 million in new financing for the company, which co-founder and chief executive Diego Saez Gil said would be used for product development and the continued expansion of the company’s marketplace.

Launched only one year ago, Pachama has managed to land some impressive customers and backers. No less an authority on things environmental than Jeff Bezos (given how much of a negative impact Amazon operations have on the planet) gave the company a shoutout in his last letter to shareholders as Amazon’s outgoing chief executive. And the largest e-commerce company in Latin America, Mercado Libre, tapped the company to manage an $8 million offset project that’s part of a broader commitment to sustainability by the retailing giant.

Amazon’s Climate Pledge Fund is an investor in the latest round, which was led by Bill Gates’ investment firm Breakthrough Energy Ventures. Other investors included Lowercarbon Capital (the climate-focused fund from über-successful angel investor Chris Sacca), former Uber executive Ryan Graves’ Saltwater, the MCJ Collective, and new backers like Tim O’Reilly’s OATV, Ram Fhiram, Joe Gebbia, Marcos Galperin, NBA All-Star Manu Ginobili, James Beshara, Fabrice Grinda, Sahil Lavingia and Tomi Pierucci.

That’s not even the full list of the company’s backers. What’s made Pachama so successful, and given it the ability to attract top talent from companies like Google, Facebook, SpaceX, Tesla, OpenAI, Microsoft, Impossible Foods and Orbital Insight, is the combination of its climate mission applied to the well-understood forest offset market, said Saez Gil.

“Restoring nature is one of the most important solutions to climate change. Forests, oceans and other ecosystems not only sequester enormous amounts of CO2 from the atmosphere, but they also provide critical habitat for biodiversity and are sources of livelihood for communities worldwide. We are building the technology stack required to be able to drive funding to the restoration and conservation of these ecosystems with integrity, transparency and efficiency,” said Diego Saez Gil, co-founder and CEO at Pachama. “We feel honored and excited to have the support of such an incredible group of investors who believe in our mission and are demonstrating their willingness to support our growth for the long term.”

Customers outside of Latin America are also clamoring for access to Pachama’s offset marketplace. Microsoft, Shopify, and Softbank are also among the company’s paying buyers.

It’s another reason that investors like Y Combinator, Social Capital, Tobi Lutke, Serena Williams, Aglaé Ventures (LVMH’s tech investment arm), Paul Graham, AirAngels, Global Founders, ThirdKind Ventures, Sweet Capital, Xplorer Capital, Scott Belsky, Tim Schumacher, Gustaf Alstromer, Facundo Garreton and Terrence Rohan were able to commit to backing the company’s nearly $24 million haul since its 2020 launch.

“Pachama is working on unlocking the full potential of nature to remove CO2 from the atmosphere,” said Carmichael Roberts from BEV, in a statement. “Their technology-based approach will have an enormous multiplier effect by using machine learning models for forest analysis to validate, monitor and measure impactful carbon neutrality initiatives. We are impressed by the progress that the team has made in a short period of time and look forward to working with them to scale their unique solution globally.” 


Eclipse Ventures has $500 million more to digitize old-line industries and bring them up to speed

By Connie Loizos

Two years ago, we talked with Lior Susan, the founder of now six-year-old Eclipse Ventures in Palo Alto, California. At the time, the outfit believed that the next big thing wasn’t another social network but instead the remaking of old-line industries through full tech stacks — including hardware, software and data — capable of bringing them into the 21st century.

Fast forward, and nothing has changed, not inside of Eclipse anyway. While the world has gone through a dramatic transformation owing to the coronavirus pandemic — never has the U.S.’s crumbling infrastructure been so apparent to so many — Eclipse is backing exactly the same kinds of companies that it always has, and with the same size fund. Indeed, after closing its second and third funds with $500 million each, the firm quietly closed its fourth vehicle earlier this month with $500 million in capital commitments, predominantly from endowments.

This morning, we talked with Susan about Eclipse’s focus on revitalizing old industries that remain largely untouched by tech, and why the pitch of Lior and the rest of Eclipse’s team has never been more powerful. Excerpts from that conversation follow, edited lightly for length and clarity.

TC: Because of where Eclipse focuses, you were long aware of the coming supply chain crises that the pandemic brought to the fore. Have your priorities changed at all as an investor? Did you have a to-do list going into 2020 and has that changed?

LS: Not really. We’ve been saying from inception that the infrastructure we are living in is 50 to 60 years old across the board. We’ve spent all of this time on social software and fintech, new ideas and consumer trends. But we don’t live in the internet, we actually live in the physical world. And the physical world is not [receiving investment] at all. But much of that innovation can be applied to the world in which we are living, and what we want to do is bring that $65 trillion backstage economy into the digital age.

TC: In this go-go market, not a lot of funds are raising the same amounts as they have previously. Why did you choose to do so?

LS: We have a very specific strategy. We only lead early-stage investments in around 22 companies per fund, we [want] 20% to 25% with our initial check, and we double down on companies that we think are breaking out and try to lead two or three rounds in a row. And we know how to run the spreadsheets and we know how to make an assumption [about] what is the enterprise value we need to create in order to deliver alpha returns, and [that math leads us to] $500 million.
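As a back-of-the-envelope check on that math, here is roughly how 22 companies and heavy follow-on investing land near a $500 million fund. Every number below is an assumption for illustration; Susan only gives the company count, the ownership target and the check sizes (the $3 million to $4 million initial checks and the up-to-$25 million follow-ons he cites later in the interview).

```python
# Hypothetical fund-sizing arithmetic, not Eclipse's actual model.
companies = 22
initial_check = 3.5e6      # midpoint of the $3M-$4M initial checks
follow_on = 18e6           # assumed average across two or three follow-on rounds
deployed = companies * (initial_check + follow_on)
print(f"~${deployed / 1e6:.0f}M deployed")  # ~$473M: in the ballpark of $500M
```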

TC: The last time we talked, Eclipse had also helped create and fund a company, Bright Machines, which primarily develops software for robotic systems inside of manufacturing companies. Have you launched any other companies in the last couple of years? I remember you don’t like the word ‘incubate.’

LS: We call it venture equity internally, but basically, we are very thesis oriented, so a lot of our investments start with us [circling around] an investment thesis and an area that we believe is getting really interesting. I’m right now working on a thesis around insurance in the manufacturing space [that will cover] workers’ comp, facilities, assets . . . It [always] will start with a one-page thesis and we’ll talk inside the firm about it, and we’ll go hunt. But we don’t find what we like in a lot of cases. This is where we’re like, ‘Okay, we come from operating backgrounds. Why not roll up our sleeves and figure out how we can go and build these companies?’

You’re right that we did Bright Machines. We’ve also done Bright Insight (an IoT platform for biopharma and medtech that just raised $101 million in Series C funding led by General Catalyst), Chord (a commerce-as-a-service software for direct-to-consumer brands that just raised $18 million in Series A funding), and Metrolink (a new company that helps organizations design and manage their data flows). We’ve done [this model] a [few] times where we didn’t just invest in the company but we’re part of the founding team or we’re carving out assets. We’re trying to keep it very flexible.

TC: Interesting that you couldn’t find an insurance company focused on the manufacturing industry that you like.

LS: We have a lot of theses like that. We see a lot of horizontal business models and tech that [could work well] in the verticals where we’re playing and that we know need solutions. So, can you do a Slack for construction, or can you find the right people to build a Lemonade for manufacturing, or can you find the Shopify for industrial assets or spare parts?

TC: What size checks are you writing?

LS: I’d say $3 million to $4 million initial checks and up to $20 million or $25 million in a Series B, but you will find a lot of our companies where we invested $150 million plus over the lifetime of the company.

TC: Which company has attracted the most from Eclipse?

LS: I’d guess Cerebras [Systems, which reportedly makes the world’s largest computer chip].

TC: What do you make of what we’re hearing from the new administration in the U.S. on the infrastructure front? Do you think it’s talking about pouring money into the right verticals?

LS: I was on a call with the manufacturing task force on Monday, and I will tell you — without getting into politics at all, because that’s above my pay grade — that the current administration is going to pour hundreds of billions of dollars, if not trillions of dollars, into upgrading the infrastructure of this country. And it’s going to be semiconductors, batteries, manufacturing, industrial infrastructure as a whole . . .

[I think last year’s ventilator shortage made clear] that we’d lost 100% of the manufacturing capabilities of this country and Western countries as a whole. And I think everyone now understands that you’re going to see a massive swing of investment in infrastructure and the only way to do it is through technology, because we actually don’t have a million people here that want to [work on an assembly line].  We actually need automation lines and software and computer vision and machine learning and everything that Silicon Valley is really good at.

TC: You have insight into what’s happening on the semiconductor front through Cerebras and other bets. There’s obviously a huge chip shortage that’s impacting everyone, including the auto industry. How long will it take for supply to catch up to demand?

LS: I think we’re going to see some big changes, but it’s going to take many, many, many years. This is not software; we cannot bring everything up [to speed overnight], as you actually need fabs and clean rooms and assets. It’s pretty complicated.

It’s going to get worse in the next couple of quarters. It’s good for some of our companies that are working on the problem, but overall, as an economy, it’s pretty bad news.

Kry closes $312M Series D after use of its telehealth tools grows 100% yoy

By Natasha Lomas

Swedish digital health startup Kry, which offers a telehealth service (and software tools) to connect clinicians with patients for remote consultations, last raised just before the pandemic hit in Western Europe, netting a €140M Series C in January 2020.

Today it’s announcing an oversubscribed sequel: The Series D raise clocks in at $312M (€262M) and will be used to keep stepping on the growth gas in the region.

Investors in this latest round for the 2015-founded startup are a mix of old and new backers: The Series D is led by CPP Investments (aka, the Canadian Pension Plan Investment Board) and Fidelity Management & Research LLC, with participation from existing investors including The Ontario Teachers’ Pension Plan, as well as European-based VC firms Index Ventures, Accel, Creandum and Project A.

The need for people to socially distance during the coronavirus pandemic has given obvious uplift to the telehealth category, accelerating the rate of adoption of digital health tools that enable remote consultations by both patients and clinicians. Kry quickly stepped in to offer a free service for doctors to conduct web-based consultations last year, saying at the time that it felt a huge responsibility to help.

That agility in a time of public health crisis has clearly paid off. Kry’s year-over-year growth in 2020 was 100% — meaning the ~1.6M digital doctor appointments it had served as of a year ago have now grown to more than 3M. Some 6,000 clinicians are also now using its telehealth platform and software tools. (It doesn’t break out registered patient numbers.)

Yet co-founder and CEO, Johannes Schildt, says that, in some ways, it’s been a rather quiet 12 months for healthcare demand.

Sure the pandemic has driven specific demand, related to COVID-19 — including around testing for the disease (a service Kry offers in some of its markets) — but he says national lockdowns and coronavirus concerns have also dampened some of the usual demand for healthcare. So he’s confident that the 100% growth rate Kry has seen amid the COVID-19 public health crisis is just a taster of what’s to come — as healthcare provision shifts toward more digital delivery.

“Obviously we have been on the right side of a global pandemic. And if you look back the mega trend was obviously there long before the pandemic but the pandemic has accelerated the trend and it has served us and the industry well in terms of anchoring what we do. It’s now very well anchored across the globe — that telemedicine and digital healthcare is a crucial part of the healthcare systems moving forward,” Schildt tells TechCrunch.

“Demand has been increasing during the year, most obviously, but if you look at the broader picture of healthcare delivery — in most European markets — you actually have healthcare usage at an all time low. Because a lot of people are not as sick anymore given that you have tight restrictions. So it’s this rather strange dynamic. If you look at healthcare usage in general it’s actually at an all time low. But telemedicine is on an upward trend and we are operating on higher volumes… than we did before. And that is great, and we have been hiring a lot of great clinicians and been shipping a lot of great tools for clinicians to make the shift to digital.”

The free version of Kry’s tools for clinicians generated “big uplift” for the business, per Schildt, but he’s more excited about the wider service delivery shifts that are happening as the pandemic has accelerated uptake of digital health tools.

“For me the biggest thing has been that [telemedicine is] now very well established, it’s well anchored… There is still a different level of maturity between different European markets. Even [at the time of Kry’s Series C round last year] telemedicine was maybe not something that was a given — for us it’s always been of course; for me it’s always been crystal clear that this is the way of the future; it’s a necessity, you need to shift a lot of the healthcare delivery to digital. We just need to get there.”

The shift to digital is a necessary one, Schildt argues, in order to widen access to (inevitably) limited healthcare resources vs ever growing demand (current pandemic lockdown dampeners excepted). This is why Kry’s focus has always been on solving inefficiencies in healthcare delivery.

It seeks to do that in a variety of ways — including by offering support tools for clinicians working in public healthcare systems (for example, more than 60% of all the GPs in the UK market, where most healthcare is delivered via the taxpayer-funded NHS, are using Kry’s tools, per Schildt); as well as (in a few markets) running a full healthcare service itself, combining telemedicine with a network of physical clinics where users can go when they need to be examined in person by a clinician. It also has partnerships with private healthcare providers in Europe.

In short, Kry is agnostic about how it helps deliver healthcare. That philosophy extends to the tech side — meaning video consultations are just one component of its telemedicine business which offers remote consultations for a range of medical issues, including infections, skin conditions, stomach problems and psychological disorders. (Obviously not every issue can be treated remotely but at the primary care level there are plenty of doctor-patient visits that don’t need to take place in person.)

Kry’s product roadmap — which is getting an investment boost with this new funding — involves expanding its patient-facing app to offer more digitally delivered treatments, such as internet-based cognitive behavioral therapy (ICBT) and mental health self-assessment tools. It also plans to invest in digital healthcare tools to support chronic healthcare conditions — whether by developing more digital treatments itself (either by digitizing existing, proven treatments or coming up with novel approaches) or by expanding its capabilities via acquisitions and strategic partnerships, according to Schildt.

Over the past five+ years, a growing number of startups have been digitizing proven treatment programs, such as for disorders like insomnia and anxiety, or musculoskeletal and chronic conditions that might otherwise require accessing a physiotherapist in person. Options for partners for Kry to work with on expanding its platform are certainly plentiful — although it’s developed the ICBT programs in house so isn’t afraid to tackle the digital treatment side itself.

“Given that we are in the fourth round of this massive change and transition in healthcare it makes a lot of sense for us to continue to invest in great tools for clinicians to deliver high quality care at great efficiency and deepening the experience from the patient side so we can continue to help even more people,” says Schildt.

“A lot of what we do is through video and text but that’s just one part of it. Now we’re investing a lot in our mental health plans and doing ICBT treatment plans. We’re going deeper into chronic treatments. We have great tools for clinicians to deliver high quality care at scale. Both digitally and physically, because our platform supports both. And we have put a lot of effort during this year to link together our digital healthcare delivery with our physical healthcare delivery that we sometimes run ourselves and we sometimes do in partnerships. So the video itself is just one piece of the puzzle. And for us it’s always been about making sure we saw this from the end consumer’s perspective, from the patient’s perspective.”

“I’m a patient myself and still a lot of what we do is driven by my own frustration on how inefficient the system is structured in some areas,” he adds. “You do have a lot of great clinicians out there but there’s truly a lack of patient focus and in a lot of European markets there’s a clear access problem. And that has always been our starting point — how can we make sure that we solve this in a better way for the patients? And then obviously that involves us both building strong tools and front ends for patients so they can easily access care and manage their health, be pro-active about their health. It also involves us building great tools for clinicians that they can operate and work within — and there we’re putting way more effort as well.

“A lot of clinicians are using our tools to deliver digital care — not only clinicians that we run ourselves but ones we’re partnering with. So we do a lot of it in partnerships. And then also, given that we are a European provider, it involves us partnering with both public and private payers to make sure that the end consumer can actually access care.”

Another batch of startups in the digital healthcare delivery space talk a big game about ‘democratizing’ access to healthcare with the help of AI-fuelled triage or even diagnosis chatbots — with the idea that these tools can replace at least some of the work done by human doctors. The loudest on that front is probably Babylon Health.

Kry, by contrast, has avoided flashy AI hype, even though its tools do frequently incorporate machine learning technology, per Schildt. It also doesn’t offer a diagnosis chatbot. The reason for its different emphasis comes back to the choice of problem to focus on: Inefficiencies in healthcare delivery — with Schildt arguing that decision-making by doctors isn’t anywhere near the top of the list of service pain-points in the sector.

“We’re obviously using what would be considered AI or machine learning tools in all products that we’re building. I think sometimes personally I’m a bit annoyed at companies screaming and shouting about the technology itself and less about what problem you are solving with it,” he tells us. “On the decision-support [front], we don’t have the same sort of chatbot system that some other companies do, no. It’s obviously something that we could build really effortlessly. But I think — for me — it’s always about asking yourself what is the problem that you’re solving for? For the patient. And to be honest I don’t find it very useful.

“In many cases, especially in primary care, you have two categories. You have patients that already know why they need help, because you have a urinary tract infection; you had it before. You have an eye infection. You have a rash —  you know that it’s a rash, you need to see someone, you need to get help. Or you’re worried about your symptoms and you’re not really sure what it is — and you need comfort. And I think we’re not there yet where a chatbot would give you that sort of comfort, if this is something severe or not. You still want to talk to a human being. So I think it’s of limited use.

“Then on the decision side of it — sort of making sure that clinicians are making better decisions — we are obviously doing decision support for our clinicians. But if it’s one thing clinicians are really good at it’s actually making decisions. And if you look into the inefficiencies in healthcare the decision-making process is not the inefficiency. The matching side is an inefficiency side.”

He gives the example of how much the Swedish healthcare system spends on translators (circa €200M) as a “huge inefficiency” that could be reduced simply — by smarter matching of multilingual clinicians to patients.

“Most of our doctors are bilingual but they’re not there at the same time as the patient. So on the matching side you have a lot of inefficiency — and that’s where we have spent time on, for example. How can we sort that, how can we make sure that a patient that is seeking help with us ends up with the right level of care? If that is someone that speaks your native language so you can actually understand each other. Is this something that could be fully treated by a nurse? Or should it be directly to a psychologist?”
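As a concrete (and deliberately simplified) illustration of the matching problem Schildt describes, here is a toy routing function. The clinician roster, language codes and care levels are all invented; a real system would also weigh availability, acuity, specialty and more.

```python
# Hypothetical roster; "sv" = Swedish, "ar" = Arabic, "en" = English.
clinicians = [
    {"name": "Clinician A", "languages": {"sv", "ar"}, "role": "nurse"},
    {"name": "Clinician B", "languages": {"sv", "en"}, "role": "psychologist"},
    {"name": "Clinician C", "languages": {"sv"},       "role": "gp"},
]

def route(patient_language, needed_role):
    """Match a patient to a clinician by language and required level of care."""
    for c in clinicians:
        if patient_language in c["languages"] and c["role"] == needed_role:
            return c["name"]
    return None  # in practice: fall back to a translator or broaden the search

print(route("ar", "nurse"))  # "Clinician A": speaks Arabic, no translator needed
```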

“With all technology it’s always about how do we use technology to solve a real problem, it’s less about the technology itself,” he adds.

Another ‘inefficiency’ that can affect healthcare provision in Europe relates to a problematic incentive to shrink costs (and, in private healthcare, maximize an insurer’s profits) by making it harder for patients to access primary medical care — whether through complicated claims processes, a bare minimum of information and support for accessing services, or limited appointment availability — leaving patients to do the legwork of tracking down a relevant professional for their particular complaint and obtaining a coveted slot to see them.

It’s a maddening dynamic in a sector that should be focused on keeping as many people as healthy as possible so that they avoid as much disease as possible — obviously because that outcome is better for the patients themselves, but also given the medical and societal costs involved in treating really sick people. A wide range of chronic conditions, from type 2 diabetes to lower back pain, can be particularly costly to treat and yet may be entirely preventable with the right interventions.

Schildt sees a key role for digital healthcare tools to drive a much needed shift toward the kind of preventative healthcare that would be better all round, for both patients and for healthcare costs.

“That annoys me a lot,” he says. “That’s sometimes how healthcare systems are structured because it’s just costly for them to deliver healthcare so they try to make it as hard as possible for people to access healthcare — which is an absurdity and also one of the reasons why you now have increasing costs in healthcare systems in general, it’s exactly that. Because you have a lack of access in the first point of contact, with primary care. And what happens is you do have a spillover effect to secondary care.

“We see that in the data in all European markets. You have people ending up in emergency rooms that should have been treated in primary care but they can’t access primary care because there’s no access — you don’t know how to get in there, it’s long waiting times, it’s just triaged to different levels without getting any help and you have people with urinary tract infections ending up in emergency rooms. It’s super costly… when you have healthcare systems trying to fend people off. That’s not the right way doing it. You have to — and I think we will be able to play a crucial role in that in the coming ten years — push the whole system into being more preventative and proactive and access is a key part of that.

“We want to make it very, very simple for the patients — that they should be able to reach out to us and we will direct you to the right level of care.”

With so much still to do tackling the challenges of healthcare delivery in Europe, Kry isn’t in a hurry to expand its services geographically. Its main markets are Sweden, Norway, France, Germany and the UK, where it operates a healthcare service itself (not necessarily nationwide), though it notes that it offers a video consultation service to 30 regional markets.

“Right now we are very European focused,” says Schildt, when asked whether it has any plans for a U.S. launch. “I would never say that we would never go outside of Europe but for here and now we are extremely focused on Europe, we know those markets very, very well. We know how to manoeuvre in the European systems.

“It’s a very different payer infrastructure in Europe vs the US and then it’s also so that focus is always king and Europe is the mega market. Healthcare is 10% of the GDP in all European markets, we don’t have to go outside of Europe to build a very big business. But for the time being I think it makes a lot of sense for us to stay focused.”


Interview: Apple executives on the 2021 iPad Pro, stunting with the M1 and creating headroom

By Matthew Panzarino

When the third minute of Apple’s first product event of 2021 ticked over and the company had already made three announcements, we knew it was going to be a packed one. In a tight single hour this week, Apple launched a ton of new products, including AirTags, new Apple Card family sharing, a new Apple TV, a new set of colorful iMacs and a purple iPhone 12 shade.

Of the new devices announced, though, Apple’s new 12.9” iPad Pro is the most interesting from a market positioning perspective. 

This week I got a chance to speak to Apple Senior Vice President of Worldwide Marketing Greg Joswiak and Senior Vice President of Hardware Engineering John Ternus about this latest version of the iPad Pro and its place in the working universe of computing professionals. 

In many ways, this new iPad Pro is the equivalent of a sprinter who is three lengths ahead going into the last lap just turning on the afterburners to put an undebatable distance between themselves and the rest of the pack. Last year’s model is still one of the best computers you can buy, with a densely packed offering of powerful computing tools, battery performance and portability. And this year’s gets upgrades in the M1 processor, RAM, storage speed, Thunderbolt connection, 5G radio, new ultra-wide front camera and its Liquid Retina XDR display.

This is a major bump even while the 2020 iPad Pro still dominates the field. And at the center of that is the display.

Apple has essentially ported its enormously good $5,000 Pro Display XDR down to a 12.9” touch version, with some slight improvements. But the specs are flat-out incredible: 1,000 nits of brightness, peaking at 1,600 nits in HDR, with 2,500 full-array local dimming zones — compared to the Pro Display XDR’s 576 zones at a much larger scale.

Given that this year’s first product launch from Apple was virtual, the media again got no immediate hands-on time with the new devices introduced, including the iPad Pro. This means that I have not yet seen the XDR display in action. Unfortunately, these specs are so good that estimating them without having seen the screen is akin to trying to visualize “a trillion” in your head. It’s intellectually possible but not really practical.

It’s brighter than any Mac or iOS device on the market and could be a game-changing device for professionals working in HDR video and photography. But even still, this is a major investment: shipping a mini-LED display in the millions or tens of millions of units with more density and brightness than any other display on the market.

I ask both of them why there’s a need to do this doubling down on what is already one of the best portable displays ever made — if not one of the best displays period. 

“We’ve always tried to have the best display,” says Ternus. “We’re going from the best display on any device like this and making it even better, because that’s what we do, and that’s why we love coming to work every day — to take that next big step.

“[With the] Pro Display XDR, if you remember, one thing we talked about was being able to have this display and this capability in more places in the work stream. Because traditionally there was just this one super expensive reference monitor at the end of the line. This is like the next extreme of that: now you don’t even have to be in the studio anymore, you can take it with you on the go, and you can have that capability. So from a creative pro standpoint we think this is going to be huge.”

In my use of the Pro Display and my conversations with professionals about it, one of the common themes I’ve heard is the reduction in overall workload, because color and image can now be managed accurately to spec at multiple points in the flow. The general system in place puts a reference monitor very late in the production stage, which can often lead to expensive and time-consuming re-rendering or new color passes. Adding the Liquid Retina XDR display into the mix at an extremely low price point means that a lot more plot points on the production line suddenly get a lot closer to the right curve.

One of the stronger answers on the ‘why the aggressive spec bump’ question comes later in our discussion but is worth mentioning in this context. The point, Joswiak says, is to offer headroom. Headroom for users and headroom for developers. 

“One of the things that iPad Pro has done, as John [Ternus] has talked about, is push the envelope. And by pushing the envelope, that has created this space for developers to come in and fill it. When we created the very first iPad Pro, there was no Photoshop,” Joswiak notes. “There were no creative apps that could immediately use it. But now there’s so many you can’t count. Because we created that capability, we created that performance — and, by the way, sold a fairly massive number of them — which is a pretty good combination for developers to then come in and say, I can take advantage of that. There’s enough customers here and there’s enough performance. I know how to use that. And that’s the same thing we do with each generation. We create more headroom to performance that developers will figure out how to use.

“The customer is in a great spot because they know they’re buying something that’s got some headroom and developers love it.”

The iPad Pro is now powered by the M1 chip — a move away from the A-series naming. And that processor part is identical (given similar memory configurations) to the one found in the iMac announced this week and MacBooks launched earlier this year.

“It’s the same part, it’s M1,” says Ternus. “iPad Pro has always had the best Apple silicon we make.”

“How crazy is it that you can take a chip that’s in a desktop, and drop it into an iPad,” says Joswiak. “I mean it’s just incredible to have that kind of performance at such amazing power efficiency. And then have all the technologies that come with it. To have the neural engine and ISP and Thunderbolt and all these amazing things that come with it, it’s just miles beyond what anybody else is doing.”

As the M1 was rolling out and I began running my testing, the performance-per-watt aspects really became the story. That really is the big differentiator for M1. For decades, laptop users have been accustomed to saving any heavy or intense workloads for the times when their machines were plugged in, due to power consumption. M1 is in the process of resetting those expectations for desktop-class processors. In fact, Apple is offering not only the most powerful CPUs but also the most power-efficient CPUs on the market. And it’s doing it in a $700 Mac Mini, a $1,700 iMac and a $1,100 iPad Pro at the same time. It’s a pretty ridiculous display of stunting, but it’s also the product of more than a decade of work building its own architecture and silicon.

“Your battery life is defined by the capacity of your battery and the efficiency of your system, right? So we’re always pushing really, really hard on the system efficiency, and obviously with M1 the team’s done a tremendous job with that. But the display as well. We designed a new mini-LED for this display, focusing on efficiency and on package size, obviously, to really be able to make sure that it could fit into the iPad experience with the iPad experience’s good battery life. We weren’t going to compromise on that,” says Ternus.

One of the marquee features of the new iPad Pro is its 12MP ultra-wide camera with Center Stage, an auto-centering and cropping video feature designed to make FaceTime calling more human-centric, literally. It finds humans in the frame and centers their faces, keeping them in frame even if they move, stand and stretch or lean to the side. It also automatically includes additional people in the frame if they enter the range of the new ultra-wide 12MP front-facing camera. And yes, it also works with other apps like Zoom and Webex, and there will be an API for it.

I’ve gotten to see it in action a bit more and I can say with surety that this will become an industry standard implementation of this kind of subject focusing. The crop mechanic is handled with taste, taking on the characteristics of a smooth zoom pulled by a steady hand rather than an abrupt cut to a smaller, closer framing. It really is like watching a TV show directed by an invisible machine learning engine. 

“This is one of the examples of some of our favorite stuff to do because of the way it marries the hardware and software, right,” Ternus says. “So, sure, it’s the camera, but it’s also the SOC and the algorithms associated with detecting the person and panning and zooming. There’s kind of the taste aspect, right, which is how do we make something that feels good, it doesn’t move too fast and doesn’t move too slow. That’s a lot of talented, creative people coming together and trying to find the thing that makes it Apple-like.”
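That “doesn’t move too fast, doesn’t move too slow” behavior is essentially an easing problem. Here is a toy sketch of the idea: nudge the crop rectangle a fraction of the way toward the detected subject each frame instead of jump-cutting. The detector, coordinates and gain value are all invented for illustration; Apple hasn’t published how Center Stage actually weights its motion.

```python
# Ease a crop rectangle toward a target box a little each frame.
def ease_crop(current, target, gain=0.08):
    # current/target are (x, y, width, height) tuples.
    return tuple(c + gain * (t - c) for c, t in zip(current, target))

crop = (0, 0, 1920, 1080)        # start on the full ultra-wide frame
subject = (600, 200, 640, 480)   # hypothetical box around a detected person

for frame in range(120):         # ~4 seconds at 30 fps
    crop = ease_crop(crop, subject)
# The crop has glided essentially all the way to the subject: a smooth zoom,
# not an abrupt cut to a smaller, closer framing.
```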

It also goes a long way toward mitigating the awkward horizontal camera placement when using the iPad Pro with the Magic Keyboard. This has been a big drawback for using the iPad Pro as a portable video conferencing tool, something we’ve all been doing a lot of lately. I ask Ternus whether Center Stage was designed to mitigate this placement.

“Well, you can use iPad in any orientation right? So you’re going to have different experiences based on how you’re using it. But what’s amazing about this is that we can keep correcting the frame. What’s been really cool is that we’ve all been sitting around in these meetings all day long on video conferencing and it’s just nice to get up. This experience of just being able to stand up and kind of stretch and move around the room without walking away from the camera has been just absolutely game changing, it’s really cool.”

It’s worth noting that several other video sharing devices like the Portal, and some video software like Teams, already offer cropping-type follow features, but the user experience is everything when you’re shipping software like this to millions of people at once. It will be interesting to see how Center Stage stacks up against the competition when we see it live.

With the ongoing chatter about how the iPad Pro and Mac are converging from a feature-set perspective, I ask how they would characterize an iPad Pro buyer vs. a MacBook buyer. Joswiak is quick to respond to this one.

“This is my favorite question because, you know, you have one camp of people who believe that the iPad and the Mac are at war with one another, right? It’s one or the other to the death. And then you have others who are like, no, they’re bringing them together — they’re forcing them into one single platform and there’s a grand conspiracy here,” he says.

“They are at opposite ends of a thought spectrum and reality is neither is correct, right? We pride ourselves in the fact that we work really, really, really hard to have the best products in the respective categories. The Mac is the best personal computer, it just is. Customer satisfaction would indicate that is the case, by a longshot.”

Joswiak points out that the whole PC category is growing, which he says is nice to see. But Macs, he adds, are far outgrowing PCs and doing ‘quite well.’ He also notes that the iPad business is still outgrowing the tablet category (while he still refuses to label the iPad a tablet).

“And it’s also the case that it’s not an ‘either or.’ The majority of our Mac customers have an iPad. That’s an awesome thing. They don’t have it because they’re replacing their Mac, it’s because they use the right tool at the right time.

“What’s very cool about what [Ternus] and his team have done with iPad Pro is that they’ve created something where that’s still the case for creative professionals too — the hardest to please audience. They’ve given them a tool where they can be equally at home using the Mac for their professional making-money-with-it kind of work, and now they can pick up an iPad Pro — and they have been for multiple generations now — and do things that, again, are part of how they make money, part of their creative workflow,” says Joswiak. “And that test is exciting. It isn’t one or the other; both of them have a role for these people.”

Since converting over to an iPad Pro as my only portable computer, I’ve been thinking a lot about the multimodal aspects of professional work. And, clearly, Apple has as well, given its launch of a Pro Workflows team back in 2018. Workflows have changed massively over the last decade, and obviously the iPhone and the iPad, with their popularization of the direct manipulation paradigm, have had everything to do with that. In the current world we’re in, we’re way past ‘what is this new thing,’ we’re even way past ‘oh cool, this feels normal,’ and we’re well into ‘this feels vital, it feels necessary.’

“Contrary to some people’s beliefs, we’re never thinking about what we should not do on an iPad because we don’t want to encroach on the Mac, or vice versa,” says Ternus. “Our focus is: what is the best way? What is the best iPad we can make? What are the best Macs we can make? Some people are going to work across both of them, some people will kind of lean towards one because it better suits their needs, and that’s all good.”

If you follow along, you’ll know that Apple studiously refuses to enter into the iPad vs. Mac debate — and in fact likes to place the iPad in a special place in the market that exists unchallenged. Joswiak often says that he doesn’t even like to say the word tablet.

“There’s iPads and tablets, and tablets aren’t very good. iPads are great,” Joswiak says. “We’re always pushing the boundaries with iPad Pro, and that’s what you want leaders to do. Leaders are the ones that push the boundaries; leaders are the ones that take this further than it’s ever been taken before, and the XDR display is a great example of that. Who else would you expect to do that other than us? And then once you see it, and once you use it, you won’t wonder, you’ll be glad we did.”

Image Credits: Apple

Fraud prevention platform Sift raises $50M at over $1B valuation, eyes acquisitions

By Mary Ann Azevedo

With the increase of digital transacting over the past year, cybercriminals have been having a field day.

In 2020, complaints of suspected internet crime surged by 61%, to 791,790, according to the FBI’s 2020 Internet Crime Report. Those crimes — ranging from personal and corporate data breaches to credit card fraud, phishing and identity theft — cost victims more than $4.2 billion.

For companies like Sift — which aims to predict and prevent fraud online even more quickly than cybercriminals adopt new tactics — that increase in crime also led to an increase in business.

Last year, the San Francisco-based company assessed risk on more than $250 billion in transactions, double what it did in 2019. The company has several hundred customers, including Twitter, Airbnb, Twilio, DoorDash, Wayfair and McDonald’s, as well as a global data network of 70 billion events per month.

To meet the surge in demand, Sift said today it has raised $50 million in a funding round that values the company at over $1 billion. Insight Partners led the financing, which included participation from Union Square Ventures and Stripes.

While the company would not reveal hard revenue figures, President and CEO Marc Olesen said that business has tripled since he joined the company in June 2018. Sift was founded out of Y Combinator in 2011, and has raised a total of $157 million over its lifetime.

The company’s “Digital Trust & Safety” platform aims to help merchants not only fight all types of internet fraud and abuse, but to also “reduce friction” for legitimate customers. There’s a fine line apparently between looking out for a merchant and upsetting a customer who is legitimately trying to conduct a transaction.

Sift uses machine learning and artificial intelligence to automatically surmise whether an attempted transaction or interaction with a business online is authentic or potentially problematic.
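Sift doesn’t disclose its models, but the general pattern described here, scoring each event with a classifier trained on labeled history and then allowing, challenging or blocking based on that score, can be sketched in a few lines. The features, thresholds and tiny training set below are invented for illustration.

```python
# Minimal sketch of ML-based transaction risk scoring. This is not
# Sift's actual model (the company doesn't publish its internals);
# it shows the general pattern: train a classifier on labeled history,
# score each new event, and act based on the score.

from sklearn.linear_model import LogisticRegression

# Invented toy features: [amount_usd, account_age_days, failed_card_attempts]
X_train = [
    [20, 400, 0], [55, 900, 0], [5, 1200, 1],   # legitimate events
    [950, 1, 4], [600, 3, 6], [875, 0, 3],      # fraudulent events
]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

def assess(txn, block_at=0.9, challenge_at=0.5):
    """Score one transaction and pick an action tier."""
    risk = model.predict_proba([txn])[0][1]
    if risk >= block_at:
        return risk, "block"
    if risk >= challenge_at:
        return risk, "challenge"  # e.g., step-up verification
    return risk, "allow"          # keep friction low for legitimate users

print(assess([700, 2, 5]))   # high-risk pattern
print(assess([30, 800, 0]))  # low-risk pattern
```

The middle “challenge” tier is what the “reduce friction” framing implies in practice: only clearly risky events get blocked outright, while borderline ones get extra verification instead of a hard refusal.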


One of the things the company has discovered is that fraudsters are often not working alone.

“Fraud vectors are no longer siloed. They are highly innovative and often working in concert,” Olesen said. “We’ve uncovered a number of fraud rings.”

Olesen shared a couple of examples of how the company thwarted fraud incidents last year. One recent incident involved money laundering, in which fraudsters tested stolen debit and credit cards through fake donation sites at guest checkout.

“By making small donations to themselves, they laundered that money and at the same time tested the validity of the stolen cards so they could use them on other sites with significantly higher purchases,” he said.

In another case, the company uncovered fraudsters using Telegram, the messaging platform, to make services, such as food delivery, available using stolen credentials.

The data that Sift has accumulated since its inception helps the company “act as the central nervous system for fraud teams.” Sift says that its models become more intelligent with every customer that it integrates.

Insight Partners Managing Director Jeff Lieberman, who is a Sift board member, said his firm initially invested in Sift in 2016 because even at that time, it was clear that online fraud was “rapidly growing.” It was growing not just in dollar amounts, he said, but in the number of methods cybercriminals used to steal from consumers and businesses.

“Sift has a novel approach to fighting fraud that combines massive data sets with machine learning, and it has a track record of proving its value for hundreds of online businesses,” he wrote via email.

When Olesen and the Sift team started the recent process of fundraising, Insight actually approached them before they started talking to outside investors “because both the product and business fundamentals are so strong, and the growth opportunity is massive,” Lieberman added.

“With more businesses heavily investing in online channels, nearly every one of them needs a solution that can intelligently weed out fraud while ensuring a seamless experience for the 99% of transactions or actions that are legitimate,” he wrote. 

The company plans to use its new capital primarily to expand its product portfolio and to scale its product, engineering and sales teams.

Sift also recently tapped Eu-Gene Sung — who has worked in financial leadership roles at Integral Ad Science, BSE Global and McCann — to serve as its CFO.

As to whether that means an IPO is in Sift’s future, Olesen said that Sung’s experience taking companies through a growth phase like the one Sift is experiencing would be valuable. The company is also, for the first time, looking to potentially do some M&A.

“When we think about expanding our portfolio, it’s really a buy/build partner approach,” Olesen said.

To ensure inclusivity, the Biden administration must double down on AI development initiatives

By Ram Iyer
Miriam Vogel Contributor
Miriam Vogel is the president and CEO of EqualAI, a nonprofit organization focused on reducing unconscious bias in artificial intelligence.

The National Security Commission on Artificial Intelligence (NSCAI) issued a report last month delivering an uncomfortable public message: America is not prepared to defend or compete in the AI era. It leads to two key questions that demand our immediate response: Will the U.S. continue to be a global superpower if it falls behind in AI development and deployment? And what can we do to change this trajectory?

Left unchecked, seemingly neutral artificial intelligence (AI) tools can and will perpetuate inequalities and, in effect, automate discrimination. Tech-enabled harms have already surfaced in credit decisions, health care services, and advertising.

To prevent these harms from recurring and growing at scale, the Biden administration must clarify current laws pertaining to AI and machine learning models — both in terms of how we will evaluate use by private actors and how we will govern AI usage within our government systems.

The administration has put a strong foot forward, from key appointments in the tech space to issuing an Executive Order on its first day in office that established an Equitable Data Working Group. This has comforted skeptics concerned about both the U.S. commitment to AI development and its commitment to ensuring equity in the digital space.

But that will be fleeting unless the administration shows strong resolve in making AI funding a reality and establishing leaders and structures necessary to safeguard its development and use.

Need for clarity on priorities

There has been a seismic shift at the federal level in AI policy and in stated commitments to equality in tech. A number of high-profile appointments by the Biden administration — from Dr. Alondra Nelson as OSTP deputy director, to Tim Wu at the NEC, to (our former senior advisor) Kurt Campbell at the NSC — signal that significant attention will be paid to inclusive AI development by experts on the inside.

The NSCAI final report includes recommendations that could prove critical to enabling better foundations for inclusive AI development, such as creating new talent pipelines through a U.S. Digital Service Academy to train current and future employees.

The report also recommends establishing a new Technology Competitiveness Council led by the Vice President. This could prove essential in ensuring that the nation’s commitment to AI leadership remains a priority at the highest levels. It makes good sense to have the administration’s leadership on AI spearheaded by VP Harris in light of her strategic partnership with the President, her tech policy savvy and her focus on civil rights.

The U.S. needs to lead by example

We know AI is powerful in its ability to create efficiencies, such as plowing through thousands of resumes to identify potentially suitable candidates. But it can also scale discrimination, such as the Amazon hiring tool that prioritized male candidates or “digital redlining” of credit based on race.

The Biden administration should issue an Executive Order (EO) to agencies inviting ideation on ways AI can improve government operations. The EO should also mandate checks on AI used by the USG to ensure it’s not spreading discriminatory outcomes unintentionally.

For instance, there must be a routine schedule in place where AI systems are evaluated to ensure embedded, harmful biases are not resulting in recommendations that are discriminatory or inconsistent with our democratic, inclusive values — and reevaluated routinely given that AI is constantly iterating and learning new patterns.
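To make the idea concrete, here is one common check such a routine evaluation could include: comparing a system’s approval rates across demographic groups and flagging large disparities, per the four-fifths heuristic used in U.S. employment-discrimination analysis. This is a sketch of one possible audit step, not a prescribed government standard.

```python
# Illustrative sketch of one routine bias check such an evaluation
# could include: compare an AI system's approval rates across groups
# and flag disparities using the "four-fifths" heuristic from U.S.
# employment-discrimination analysis. Not a prescribed standard.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions, threshold=0.8):
    """Flag any group whose rate falls below threshold * best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": round(r, 2), "flagged": r / best < threshold}
            for g, r in rates.items()}

sample = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 35 + [("B", False)] * 65)
print(audit(sample))  # B's 0.35 rate is below 0.8 * 0.60, so it's flagged
```

Because models keep learning, the point of the schedule is that a check like this runs on every retraining and on a calendar, not once at launch.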

Putting a responsible AI governance system in place is particularly critical in the U.S. Government, which is required to offer due process protection when denying certain benefits. For instance, when AI is used to determine allocation of Medicaid benefits, and such benefits are modified or denied based on an algorithm, the government must be able to explain that outcome, aptly termed technological due process.

If decisions are delegated to automated systems without explainability, guidelines and human oversight, we find ourselves in the untenable situation where this basic constitutional right is being denied.

Likewise, the administration has immense power to ensure that AI safeguards by key corporate players are in place through its procurement power. Federal contract spending was expected to exceed $600 billion in fiscal 2020, even before including pandemic economic stimulus funds. The USG could effectuate tremendous impact by issuing a checklist for federal procurement of AI systems — this would ensure the government’s process is both rigorous and universally applied, including relevant civil rights considerations.

Protection from discrimination stemming from AI systems

The government holds another powerful lever to protect us from AI harms: its investigative and prosecutorial authority. An Executive Order instructing agencies to clarify applicability of current laws and regulations (e.g., ADA, Fair Housing, Fair Lending, Civil Rights Act, etc.) when determinations are reliant on AI-powered systems could result in a global reckoning. Companies operating in the U.S. would have unquestionable motivation to check their AI systems for harms against protected classes.

Low-income individuals are disproportionately vulnerable to many of the negative effects of AI. This is especially apparent with regard to credit and loan creation, because they are less likely to have access to traditional financial products or the ability to obtain high scores based on traditional frameworks. This then becomes the data used to create AI systems that automate such decisions.

The Consumer Financial Protection Bureau (CFPB) can play a pivotal role in holding financial institutions accountable for discriminatory lending processes that result from reliance on discriminatory AI systems. The mandate of an EO would be a forcing function for statements on how AI-enabled systems will be evaluated, putting companies on notice and better protecting the public with clear expectations on AI use.

There is a clear path to liability when an individual acts in a discriminatory way and a due process violation when a public benefit is denied arbitrarily, without explanation. Theoretically, these liabilities and rights would transfer with ease when an AI system is involved, but a review of agency action and legal precedent (or rather, the lack thereof) indicates otherwise.

The administration is off to a good start, such as rolling back a proposed HUD rule that would have made legal challenges against discriminatory AI essentially unattainable. Next, federal agencies with investigative or prosecutorial authority should clarify which AI practices would fall under their review and which current laws would apply — for instance, HUD for illegal housing discrimination; the CFPB for AI used in credit lending; and the Department of Labor for AI used in determinations made in hiring, evaluations and terminations.

Such action would have the added benefit of establishing a useful precedent for plaintiff actions in complaints.

The Biden administration has taken encouraging first steps signaling its intent to ensure inclusive, less discriminatory AI. However, it must put its own house in order by directing federal agencies to ensure that the development, acquisition and use of AI — internally and by those they do business with — is done in a manner that protects privacy, civil rights, civil liberties and American values.

Facebook launches a series of tests to inform future changes to its News Feed algorithms

By Sarah Perez

Facebook may be reconfiguring its News Feed algorithms. After being grilled by lawmakers about the role that Facebook played in the attack on the U.S. Capitol, the company announced this morning it will be rolling out a series of News Feed ranking tests that will ask users to provide feedback about the posts they’re seeing, which will later be incorporated into Facebook’s News Feed ranking process. Specifically, Facebook will be looking to learn which content people find inspirational, what content they want to see less of (like politics), and what other topics they’re generally interested in, among other things.

This will be done through a series of global tests, one of which will involve a survey directly beneath the post itself that asks, “How much were you inspired by this post?,” with the goal of helping to show posts of an inspirational nature closer to the top of the News Feed.

Image Credits: Facebook

Another test will work to tailor the News Feed experience to reflect what people want to see. Today, Facebook prioritizes showing you content from the friends, Groups and Pages you’ve chosen to follow, but it algorithmically crafts the experience — whose posts to show you and when — based on a variety of signals. These include both implicit and explicit signals, like how much you engage with that person’s content (or Page or Group) on a regular basis, or whether you’ve added them as a “Close Friend” or “Favorite” to indicate you want to see more of their content than others’, for example.

However, just because you’re close to someone in real life, that doesn’t mean that you like what they post to Facebook. This has driven families and friends apart in recent years, as people discovered by way of social media how people they thought they knew really viewed the world. It’s been a painful reckoning for some. Facebook hasn’t managed to fix the problem, either. Today, users still scroll News Feeds that reinforce their views, no matter how problematic. And with the growing tide of misinformation, the News Feed has gone from just placing users into a filter bubble to presenting a full alternate reality for some, often populated by conspiracy theories.

Facebook’s third test doesn’t necessarily tackle this problem head-on, but instead looks to gain feedback about what users want to see as a whole. Facebook says that it will begin asking people whether they want to see more or fewer posts on certain topics, like cooking, sports or politics. Based on users’ collective feedback, Facebook will adjust its algorithms to show more content people say they’re interested in and fewer posts about topics they don’t want to see.

The area of politics, specifically, has been an issue for Facebook. The social network has for years been charged with helping to fan the flames of political discourse, polarizing and radicalizing users through its algorithms, distributing misinformation at scale and encouraging an ecosystem of divisive clickbait, as publishers sought engagement instead of fairness and balance when reporting the news. In fact, there are now entirely biased and subjective outlets posing as news sources that benefit from algorithms like Facebook’s.

Shortly after the Capitol attack, Facebook announced it would try clamping down on political content in the News Feed for a small percentage of people in the U.S., Canada, Brazil and Indonesia, for a period of time during tests.

Now, the company says it will work to better understand what content is being linked to negative News Feed experiences, including political content. In this case, Facebook may ask users on posts with a lot of negative reactions what sort of content they want to see less of. This will be done through surveys on certain posts as well as through ongoing research sessions where people are invited to talk about their News Feed experience, Facebook told TechCrunch.

It will also more prominently feature the option to hide posts you find “irrelevant, problematic or irritating.” Although this feature existed before, users in the test group will now be able to tap an X in the upper-right corner of a post to hide it from the News Feed and see fewer posts like it in the future, for a more personalized experience.

It’s not clear that allowing users to pick and choose their topics is the best way to solve the larger problems with negative posts, divisive content or misinformation, though this test is less about the latter and more about making the News Feed “feel” more positive.

As the data is collected from the tests, Facebook will incorporate the learnings into its News Feed ranking algorithms. But it’s not clear to what extent it will be adjusting the algorithm on a global basis versus simply customizing the experience for end users on a more individual basis over time. The company tells TechCrunch the survey data will be collected from a small percentage of users who are placed into the test groups, which will then be used to train a machine learning model.
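Facebook hasn’t said exactly how the survey-trained model will enter ranking, but the approach it describes, blending a prediction of how users would answer the survey into the existing relevance score, might look roughly like this. The signal names and weights below are invented for illustration.

```python
# Hedged sketch of survey feedback entering feed ranking. Facebook says
# survey answers from test groups train a model; presumably its output
# then joins the other ranking signals. Signal names and weights here
# are invented for illustration only.

WEIGHTS = {
    "engagement": 1.0,         # classic implicit signal
    "closeness": 0.5,          # e.g., Close Friend / Favorite
    "p_inspired": 0.8,         # survey-trained prediction (positive)
    "p_unwanted_topic": -1.2,  # survey-trained prediction (negative)
}

def rank_score(post):
    """Blend classic signals with survey-trained predictions."""
    return sum(WEIGHTS[k] * post[k] for k in WEIGHTS)

candidates = [
    {"id": 1, "engagement": 0.9, "closeness": 0.2,
     "p_inspired": 0.1, "p_unwanted_topic": 0.7},  # viral but unwanted
    {"id": 2, "engagement": 0.4, "closeness": 0.6,
     "p_inspired": 0.8, "p_unwanted_topic": 0.1},  # inspiring, on-topic
]
feed = sorted(candidates, key=rank_score, reverse=True)
print([p["id"] for p in feed])  # the inspiring post now ranks first
```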

It will also be exploring ways to give people more direct controls over what sort of content they see on the News Feed in the future.

The company says the tests will run over the next few months.

Medchart raises $17M to help businesses more easily access patient-authorized health data

By Darrell Etherington

Electronic health records (EHR) have long held promise as a means of unlocking new superpowers for caregiving and patients in the medical industry, but while they’ve been a thing for a long time, actually accessing and using them hasn’t been as quick to become a reality. That’s where Medchart comes in, providing access to health information between businesses, complete with informed patient consent, for using said data at scale. The startup just raised $17 million across Series A and seed rounds, led by Crosslink Capital and Golden Ventures, and including funding from Stanford Law School, rapper Nas and others.

Medchart originally started out as more of a DTC play for healthcare data, providing access and portability to digital health information directly to patients. It sprang from the personal experience of co-founders James Bateman and Derrick Chow, who both faced personal challenges accessing and transferring health record information for relatives and loved ones during crucial healthcare moments. Bateman, Medchart’s CEO, explained that their experience early on revealed that what was actually needed for the model to scale and work effectively was more of a B2B approach, with informed patient consent as the crucial component.

“We’re really focused on that patient consent and authorization component of letting you allow your data to be used and shared for various purposes,” Bateman said in an interview. “And then building that platform that lets you take that data and then put it to use for those businesses and services, that we’re classifying as ‘beyond care.’ Whether those are our core areas, which would be with your, your lawyer, or with an insurance provider, or clinical researcher — or beyond that, looking at a future vision of this really being a platform to power innovation, and all sorts of different apps and services that you could imagine that are typically outside that realm of direct care and treatment.”

Bateman explained that one of the main challenges in making patient health data actually work for these businesses that surround, but aren’t necessarily a core part of a care paradigm, is delivering data in a way that it’s actually useful to the receiving party. Traditionally, this has required a lot of painstaking manual work, like paralegals poring over paper documents to find information that isn’t necessarily consistently formatted or located.

“One of the things that we’ve been really focused on is understanding those business processes,” Bateman said. “That way, when we work with these businesses that are using this data — all permissioned by the patient — that we’re delivering what we call ‘the information,’ and not just the data. So what are the business decision points that you’re trying to make with this data?”

To accomplish this, Medchart makes use of AI and machine learning to create a deeper understanding of the data set in order to be able to intelligently answer the specific questions that data requesters have of the information. Therein lies its long-term value: once that understanding is established, Medchart can query the data much more easily to answer different questions depending on different business needs, without needing to re-parse the data every single time.

“Where we’re building these systems of intelligence on top of aggregate data, they are fully transferable to making decisions around policies for, for example, life insurance underwriting, or with pharmaceutical companies on real-world evidence for their phase three, phase four clinical trials, and helping those teams to understand the overall indicators and the preexisting conditions and what the outcomes are of the drugs under development or whatever they’re measuring in their study,” Bateman said.
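Medchart hasn’t detailed its pipeline, but the idea Bateman describes, parsing records into structure once and then answering many different business questions without re-parsing, can be shown with a toy example. The keyword matching below stands in for what would really be trained clinical NLP, and the records and conditions are invented.

```python
# Toy illustration (not Medchart's pipeline): extract structured facts
# from records once, then answer many different questions against that
# structure without re-reading the source documents. Simple keyword
# matching stands in for what would really be trained clinical NLP.

import re

RECORDS = [
    "Pt reports lower back pain since 2019. Prescribed ibuprofen.",
    "History of hypertension. Prescribed lisinopril. No back pain.",
]
CONDITIONS = ["back pain", "hypertension"]

def parse(record):
    """One-time extraction pass over a record (skips negated mentions)."""
    text = record.lower()
    return {c for c in CONDITIONS if re.search(rf"(?<!no ){c}", text)}

facts = [parse(r) for r in RECORDS]

# Different business questions, same parsed facts, no re-parsing:
print(sum("back pain" in f for f in facts))     # e.g., legal claim review
print(sum("hypertension" in f for f in facts))  # e.g., underwriting risk
```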

According to Ameet Shah, a partner at Golden Ventures, which co-led the Series A, this is the key ingredient that makes Medchart’s offering so attractive in terms of long-term potential.

“What you want is both depth and breadth, and you need predictability — you need to know that you’re actually getting the full data set back,” Shah said in an interview. “There’s all these point solutions, depending on the type of clinic you’re looking at and the type of record you’re accessing, and that’s not helpful to the requester. Right now, you’re putting the burden on them, and when we looked at it, we were just like, ‘Oh, this is just a whole bunch of undifferentiated heavy lifting that the entire health tech ecosystem is trying to solve for.’ So if [Medchart] can just commoditize that and drive the cost down as low as possible, you can unlock all these other new use cases that never could have been done before.”

One recent development that positions Medchart to facilitate even more novel use cases of patient data is the 21st Century Cures Act, which went into effect on April 5 and provides patients with immediate access, without charge, to all the health information in their electronic medical records. That sets up a huge potential opportunity for the portability, with informed consent, of patient data, and Bateman suggests it will greatly speed up innovation built upon the type of information access Medchart enables.

“I think there’s just going to be an absolute explosion in this space over the next two to three years,” Bateman said. “And at Medchart, we’ve already built all the infrastructure with connections to these large information systems. We’re already plugged in and providing the data and the value to the end users and the customers, and I think now you’re going to see this acceleration and adoption and growth in this area that we’re super well-positioned to be able to deliver on.”

Data scientists: Bring the narrative to the forefront

By Ram Iyer
Peter Wang Contributor
Peter Wang is CEO and co-founder of data science platform Anaconda. He’s also a co-creator of the PyData community and conferences, and a member of the board at the Center for Humane Technology.

By 2025, 463 exabytes of data will be created each day, according to some estimates. (For perspective, one exabyte of storage could hold 50,000 years of DVD-quality video.) It’s now easier than ever to translate physical and digital actions into data, and businesses of all types have raced to amass as much data as possible in order to gain a competitive edge.
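That DVD comparison checks out on the back of an envelope, assuming DVD-quality video at roughly 5 Mbit/s (actual DVD bitrates vary by encoding):

```python
# Back-of-envelope check of the "one exabyte = 50,000 years of
# DVD-quality video" comparison, assuming a typical ~5 Mbit/s bitrate.

exabyte_bits = 1e18 * 8             # one exabyte in bits
bitrate = 5e6                       # 5 Mbit/s
seconds = exabyte_bits / bitrate    # about 1.6e12 seconds of video
years = seconds / (3600 * 24 * 365)
print(f"{years:,.0f} years")        # about 50,700, so the claim holds
```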

However, in our collective infatuation with data (and obtaining more of it), what’s often overlooked is the role that storytelling plays in extracting real value from data.

The reality is that data by itself is insufficient to really influence human behavior. Whether the goal is to improve a business’ bottom line or convince people to stay home amid a pandemic, it’s the narrative that compels action, rather than the numbers alone. As more data is collected and analyzed, communication and storytelling will become even more integral in the data science discipline because of their role in separating the signal from the noise.

Yet this can be an area where data scientists struggle. In Anaconda’s 2020 State of Data Science survey of more than 2,300 data scientists, nearly a quarter of respondents said that their data science or machine learning (ML) teams lacked communication skills. This may be one reason why roughly 40% of respondents said they were able to effectively demonstrate business impact “only sometimes” or “almost never.”

The best data practitioners must be as skilled in storytelling as they are in coding and deploying models — and yes, this extends beyond creating visualizations to accompany reports. Here are some recommendations for how data scientists can situate their results within larger contextual narratives.

Make the abstract more tangible

Ever-growing datasets help machine learning models better understand the scope of a problem space, but more data does not necessarily help with human comprehension. Even for the most left-brain of thinkers, it’s not in our nature to understand large abstract numbers or things like marginal improvements in accuracy. This is why it’s important to include points of reference in your storytelling that make data tangible.

For example, throughout the pandemic, we’ve been bombarded with countless statistics around case counts, death rates, positivity rates, and more. While all of this data is important, tools like interactive maps and conversations around reproduction numbers are more effective than massive data dumps in terms of providing context, conveying risk, and, consequently, helping change behaviors as needed. In working with numbers, data practitioners have a responsibility to provide the necessary structure so that the data can be understood by the intended audience.

Tecton teams with founder of Feast open source machine learning feature store

By Ron Miller

Tecton, the company that pioneered the notion of the machine learning feature store, has teamed up with the founder of the open source feature store project called Feast. Today the company announced the release of version 0.10 of the open source tool.

The feature store is a concept that the Tecton founders came up with when they were engineers at Uber. Shortly thereafter, an engineer named Willem Pienaar read the founders’ Uber blog posts on building a feature store and went to work building Feast as an open source version of the concept.

“The idea of Tecton [involved bringing] feature stores to the industry, so we built basically the best-in-class enterprise feature store. […] Feast is something that Willem created, which I think was inspired by some of the early designs that we published at Uber. And he built Feast and it evolved into kind of the standard for open source feature stores, and it’s now part of the Linux Foundation,” Tecton co-founder and CEO Mike Del Balso explained.

Tecton later hired Pienaar, who is today an engineer at the company where he leads their open source team. While the company did not originally start off with a plan to build an open source product, the two products are closely aligned, and it made sense to bring Pienaar on board.

“The products are very similar in a lot of ways. So I think there’s a similarity there that makes this somewhat symbiotic, and there is no explicit convergence necessary. The Tecton product is a superset of what Feast has. So it’s an enterprise version with a lot more advanced functionality, but at Feast we have a battle-tested feature store that’s open source,” Pienaar said.

As we wrote in a December 2020 story on the company’s $35 million Series B, it describes a feature store as “an end-to-end machine learning management system that includes the pipelines to transform the data into what are called feature values, then it stores and manages all of that feature data and finally it serves a consistent set of data.”
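To make that three-part description concrete, here is a toy sketch of the concept: transform raw events into feature values, store them keyed by entity, and serve a consistent set on request. To be clear, this is neither the Tecton nor the Feast API, just the shape of what a feature store does.

```python
# Toy sketch of the feature store concept: transform raw data into
# feature values, store them keyed by entity, serve a consistent set.
# This is neither the Tecton nor the Feast API, just the shape of it.

from collections import defaultdict

store = defaultdict(dict)  # entity_id -> {feature_name: value}

def transform(raw_events):
    """Pipeline step: turn raw (user, amount) events into features."""
    counts, totals = defaultdict(int), defaultdict(float)
    for user, amount in raw_events:
        counts[user] += 1
        totals[user] += amount
    for user in counts:
        store[user]["txn_count"] = counts[user]
        store[user]["avg_amount"] = totals[user] / counts[user]

def serve(entity_id, feature_names):
    """Serving step: hand back the same values for training and inference."""
    return [store[entity_id].get(name) for name in feature_names]

transform([("u1", 20.0), ("u1", 60.0), ("u2", 5.0)])
print(serve("u1", ["txn_count", "avg_amount"]))  # [2, 40.0]
```

The value of the pattern is the consistency in that last step: the model sees the same feature values at training time and at serving time, instead of two pipelines that can silently drift apart.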

Del Balso says that from a business perspective, contributing to the open source feature store exposes his company to a different group of users, and the commercial and open source products can feed off one another as they build the two products.

“What we really like, and what we feel is very powerful here, is that we’re deeply in the Feast community and get to learn from all of the interesting use cases […] to improve the Tecton product. And similarly, we can use the feedback that we’re hearing from our enterprise customers to improve the open source project. That’s the kind of cross-learning and, ideally, the feedback loop involved there,” he said.

The plan is for Tecton to continue being a primary contributor with a team inside Tecton dedicated to working on Feast. Today, the company is releasing version 0.10 of the project.

Bigeye (formerly Toro) scores $17M Series A to automate data quality monitoring

By Ron Miller

As companies create machine learning models, the operations team needs to ensure the data used for the model is of sufficient quality, a process that can be time-consuming. Bigeye (formerly Toro), an early-stage startup, is helping by automating data quality monitoring.

Today the company announced a $17 million Series A led by Sequoia Capital with participation from existing investor Costanoa Ventures. That brings the total raised to $21 million, including the $4 million seed round the startup raised last May.

When we spoke to Bigeye CEO and co-founder Kyle Kirwan last May, he said the seed round was going to be focused on hiring a team — they are 11 now — and building more automation into the product, and he says they have achieved that goal.

“The product can now automatically tell users what data quality metrics they should collect from their data, so they can point us at a table in Snowflake or Amazon Redshift or whatever and we can analyze that table and recommend the metrics that they should collect from it to monitor the data quality — and we also automated the alerting,” Kirwan explained.

He says that the company is focusing on data operations issues on the input side of the model, such as a table that isn’t updating when it’s supposed to, is missing rows or contains duplicate entries. Bigeye can automate alerts for those kinds of issues and speed up the process of getting model data ready for training and production.
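Bigeye hasn’t published its metric catalog, but the three issue classes Kirwan names (stale tables, missing rows, duplicate entries) correspond to simple checks one could run over any table. A minimal sketch, with invented column names and thresholds:

```python
# Sketch of the three issue classes Kirwan names: stale tables,
# missing rows, duplicate entries. Column names and thresholds are
# invented; Bigeye's actual metrics and alerting are more involved.

from datetime import datetime, timedelta, timezone

def check_table(rows, key, updated_col, expected_min_rows,
                max_staleness=timedelta(hours=24)):
    """Return a list of alert strings for one table snapshot."""
    alerts = []
    now = datetime.now(timezone.utc)
    newest = max(row[updated_col] for row in rows)
    if now - newest > max_staleness:
        alerts.append(f"stale: last update {newest.isoformat()}")
    if len(rows) < expected_min_rows:
        alerts.append(f"missing rows: {len(rows)} < {expected_min_rows}")
    keys = [row[key] for row in rows]
    if len(keys) != len(set(keys)):
        alerts.append("duplicate keys detected")
    return alerts

stale = datetime.now(timezone.utc) - timedelta(days=3)
rows = [{"id": 1, "updated_at": stale}, {"id": 1, "updated_at": stale}]
print(check_table(rows, "id", "updated_at", expected_min_rows=100))
```

The part Bigeye automates, per Kirwan, is upstream of this: recommending which metrics to collect for a given table in Snowflake or Amazon Redshift, then wiring up the alerting.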

Bogomil Balkansky, the partner at Sequoia who is leading today’s investment, sees the company attacking an important part of the machine learning pipeline. “Having spearheaded the data quality team at Uber, Kyle and Egor have a clear vision to provide always-on insight into the quality of data to all businesses,” Balkansky said in a statement.

As the founding team begins building the company, Kirwan says that building a diverse team is a key goal for them and something they are keenly aware of.

“It’s easy to hire a lot of other people that fit a certain mold, and we want to be really careful that we’re doing the extra work to [understand that just because] it’s easy to source people within our network, we need to push and make sure that we’re hiring a team that has different backgrounds and different viewpoints and different types of people on it because that’s how we’re going to build the strongest team,” he said.

Bigeye offers on-prem and SaaS solutions, and while it’s working with paying customers like Instacart, Crux Informatics and Lambda School, the product won’t be generally available until later in the year.
