YouTravel.Me packs up $1M to match travelers with curated small group adventures

By Christine Hall

YouTravel.Me is the latest startup to grab some venture capital dollars as the travel industry gets back on its feet amid the global pandemic.

Over the past month, we’ve seen companies like Thatch raise $3 million for its platform aimed at travel creators, travel tech company Hopper bring in $175 million, Wheel the World grab $2 million for its disability-friendly vacation planner, Elude raise $2.1 million to bring spontaneous travel back to a hard-hit industry and Wanderlog bag $1.5 million for its free travel itinerary platform.

Today YouTravel.Me joins them after raising $1 million to continue developing its online platform designed for matching like-minded travelers to small-group adventures organized by travel experts. Starta VC led the round and was joined by Liqvest.com, Mission Gate and a group of individual investors like Bas Godska, general partner at Acrobator Ventures.

Olga Bortnikova, her husband Ivan Bortnikov and Evan Mikheev founded the company in Europe three years ago. The idea for the company came to Bortnikova and Bortnikov when a trip to China went awry after a tour operator sold them a package where excursions turned out to be trips to souvenir shops. One delayed flight and other mishaps along the way, and the pair went looking for better travel experiences and a way to share them with others. When they couldn’t find what they were looking for, they decided to create it themselves.

“It’s hard for adults to make friends, but when you are on a two-week trip with just 15 people in a group, you form a deep connection, share the same language and experiences,” Bortnikova told TechCrunch. “That’s our secret sauce — we want to make a connection.”

Much like a dating app, YouTravel.Me's algorithms connect travelers to trips and getaways based on their interests, values and past experiences. Matched individuals can connect with each other via chat or voice, work with a travel expert and complete their reservations. The company also has a BeGuide offering for travel experts to do research and create itineraries.

CEO Bortnikova said that since 2018, YouTravel.Me has become the top travel marketplace in Eastern Europe, amassing over 15,900 tours in 130 countries and attracting over 10,000 travelers and 4,200 travel experts to the platform. It was starting to branch out to international sales in 2020 when the global pandemic hit.

“Sales and tourism crashed down, and we didn’t know what to do,” she said. “We found that we have more than 4,000 travel experts on our site and they feel lonely because the pandemic was a test of the industry. We understood that and built a community and educational product for them on how to build and scale their business.”

After a McKinsey study showed that adventure travel was recovering faster than other sectors of the industry, the founders decided to go after that market, becoming part of 500 Startups at the end of 2020. As a result, YouTravel.Me doubled its revenue while still a bootstrapped company, but wanted to enter the North American market.

The new funding will be deployed into marketing in the U.S., hiring and attracting more travel experts, technology and product development and increasing gross merchandise value to $2.7 million per month by the end of 2021, Bortnikov said. The goal is to grow the number of trips to 20,000 and its travel experts to 6,000 by the beginning of next year.

Godska, also an angel investor, learned about YouTravel.Me from a mutual friend. At the time, he was vacationing in Sri Lanka, where he was one of very few tourists. Godska was previously involved in online travel, as part of Orbitz in Europe and a tour package business in Russia, before setting up a venture capital fund.

“I was sitting there in the jungle with a bad internet connection, and it sparked my interest,” he said. “When I spoke with them, I felt the innovation and this bright vibe of how they are doing this. It instantly attracted me to help support them. The whole curated thing is a very interesting move. Independent travelers that want to travel in groups are not touched much by the traditional sector.”

 

The Poop About Your Gut Health and Personalized Nutrition

By Debby Waldman
Researchers are coming around to the idea that there isn't a one-size-fits-all diet. Some companies are going further to find out what fits you, specifically.

GM says it will seek reimbursement from LG Chem for $1B Chevy Bolt recall losses

By Rebecca Bellan

American automaker General Motors expanded its recall of Chevrolet Bolt electric vehicles on Friday due to fire risks from battery manufacturing defects. The automaker said it would seek reimbursement from LG Chem, its battery cell manufacturing partner, for what it expects to be $1 billion worth of losses.

Following the news of the recall, the third one GM has issued for this vehicle, LG Chem shares fell 11% on Monday, wiping out about $6 billion of the company's market value. GM’s shares were down 1.27% at market close.

This isn’t the first time LG Chem’s batteries have resulted in a recall from automakers. Earlier this year, Hyundai recalled 82,000 EVs due to a similar battery fire risk at an estimated cost of about $851.9 million. Hyundai’s joint battery venture was with LG Energy Solution, LG Chem’s battery unit, which is preparing for its initial public offering in September; experts say the IPO could be delayed due to the recall cost.

GM’s investigation into the problems with its batteries found battery cell defects like a torn anode tab and folded separator. The recall comes a week after a fire involving a Volkswagen AG ID.3 EV with an LG Energy Solution battery. Earlier this year, Volkswagen, as well as Tesla, began making moves to shift away from LG Chem’s pouch-type lithium-ion battery cells and toward prismatic-type cells, like those made by CATL and Samsung SDI.

The recall leaves GM without any fully electric vehicles for sale in North America, which means it can’t compete with Tesla and other automakers as EV sales are on the rise. The loss in sales, the safety risks and the possibility of better tech on the horizon might cause GM to take its business elsewhere.

For now, there’s still work to be done together. GM said it will replace defective battery modules with new modules in the Chevy Bolt EVs and EUVs, which it says accounts for the $1 billion in losses. This is on top of the $800 million GM is already spending on the original Bolt recall from last November. Battery packs are the most expensive component of an electric vehicle, costing about $186 per kWh on average, according to data from energy storage research firm Cairn ERA. GM pays about $169 per kWh, and the Bolt has a 66 kWh battery pack.
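For a rough sense of scale, here is a back-of-envelope estimate using only the per-kWh figures cited above; it covers cell cost only, ignores labor and logistics, and the final comparison is purely illustrative.

```python
# Back-of-envelope estimate of Bolt battery replacement cost (cells only).
# Figures come from the article; labor, logistics and module overhead are excluded.
gm_cost_per_kwh = 169       # USD per kWh that GM reportedly pays
industry_avg_per_kwh = 186  # USD per kWh industry average (Cairn ERA)
bolt_pack_kwh = 66          # Bolt battery pack capacity

pack_cost_gm = gm_cost_per_kwh * bolt_pack_kwh        # ~$11,154
pack_cost_avg = industry_avg_per_kwh * bolt_pack_kwh  # ~$12,276

print(f"GM's approximate cell cost per pack:  ${pack_cost_gm:,}")
print(f"Industry-average cell cost per pack:  ${pack_cost_avg:,}")

# Dividing the reported $1 billion in losses by the per-pack cell cost gives a
# rough upper bound on how many pack-equivalents that money represents.
print(f"Rough pack-equivalents in $1B: {1_000_000_000 // pack_cost_gm:,}")
```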

LG Chem and GM did not respond to requests for comment, so it’s not clear whether the two plan to move forward on plans announced in April to build a second U.S. battery cell factory in Tennessee. The joint venture, dubbed Ultium Cells, would aim to produce more than 70 GWh of energy.

Announcing the agenda for TechCrunch Sessions: SaaS

By Richard Smith

TechCrunch Sessions is back!

On October 27, we’re taking on the ferociously competitive field of software as a service (SaaS), and we’re thrilled to announce our packed agenda, overflowing with some of the biggest names and most exciting startups in the industry. And you’re in luck, because $75 early-bird tickets are still on sale — make sure you book yours so you can enjoy all the agenda has to offer and save $100 before prices go up!

Throughout the day, you can expect to hear from industry experts, and take part in discussions about the potential of new advances in data, open source, how to deal with the onslaught of security threats, investing in early-stage startups and plenty more.

We’ll be joined by some of the biggest names and the smartest and most prescient people in the industry, including Javier Soltero at Google, Kathy Baxter at Salesforce, Jared Spataro at Microsoft, Jay Kreps at Confluent, Sarah Guo at Greylock and Daniel Dines at UiPath.

You’ll be able to find and engage with people from all around the world through world-class networking on our virtual platform — all for $75 and under for a limited time with even deeper discounts for nonprofits and government agencies, students and up-and-coming founders!

Our agenda showcases some of the powerhouses in the space, but also plenty of smaller teams that are building and debuting fundamental technologies in the industry. We still have a few tricks up our sleeves and will be adding some new names to the agenda over the next month, so keep your eyes open.

In the meantime, check out these agenda highlights:

Survival of the Fittest: Investing in Today’s SaaS Market
with Casey Aylward (Costanoa Ventures), Kobie Fuller (Upfront) and Sarah Guo (Greylock)

  • The venture capital world is faster and more competitive than ever. For investors hoping to get into the hottest SaaS deal, things are even crazier. With more non-traditional money pouring into the sector, remote dealmaking now the norm, and an increasingly global market for software startups, venture capitalists are being forced to shake up their own operations and expectations. TechCrunch sits down with three leading investors to discuss how they are fighting for allocation in hot deals, what they’ve changed in their own processes, and what today’s best founders are demanding.

Data, Data Everywhere
with Ali Ghodsi (Databricks)

  • As companies struggle to manage and share increasingly large amounts of data, it’s no wonder that Databricks, whose primary product is a data lake, was valued at a whopping $28 billion for its most recent funding round. We’re going to talk to CEO Ali Ghodsi about why his startup is so hot and what comes next.

Keeping Your SaaS Secure
with Edna Conway (Microsoft), Olivia Rose (Amplitude)

  • Enterprises face a litany of threats from both inside and outside the firewall. Now more than ever, companies — especially startups — have to put security first. From preventing data from leaking to keeping bad actors out of your network, startups and major corporations have it tough. How can you secure your company without slowing growth? We’ll discuss the role of a modern Chief Security Officer and how to move fast… without breaking things.

Automation’s Moment Is Now
with Daniel Dines (UiPath), Laela Sturdy (CapitalG), and Dave Wright (ServiceNow)

  • One thing we learned during the pandemic is the importance of automation, and that’s only likely to be more pronounced as we move forward. We’ll be talking to UiPath CEO Daniel Dines, Laela Sturdy, an investor at CapitalG and Dave Wright from ServiceNow about why this is automation’s moment.

Was the Pandemic Cloud Productivity’s Spark?
with Javier Soltero (Google)

  • One big aspect of SaaS is productivity apps like Gmail, Google Calendar and Google Drive. We’ll talk with executive Javier Soltero about the role Google Workspace plays in the Google cloud strategy.

The Future is Wide Open
with Abby Kearns (Puppet), Aghi Marietti (Kong), and Jason Warner (Redpoint)

  • Many startups today have an open source component, and it’s no wonder. It builds an audience and helps drive sales. We’ll talk with Abby Kearns from Puppet, Augusto “Aghi” Marietti from Kong and Jason Warner, an investor at Redpoint, about why open source is such a popular way to build a business.

How Microsoft Shifted From On-Prem to the Cloud
with Jared Spataro (Microsoft)

  • Jared Spataro has been with Microsoft for over 15 years, and he was part of the company’s shift from strictly on-prem software to a business dominated by the cloud. Today he runs one of the most successful SaaS products out there, and we’ll talk to him about how Microsoft made that shift and what it’s meant to the company.

How Startups are Turning Data into Software Gold
with Jenn Knight (Agentsync), Barr Moses (Monte Carlo), and Dan Wright (DataRobot)

  • The era of big data is behind us. Today’s leading SaaS startups are working with data, instead of merely fighting to help customers collect information. We’ve collected three leaders from three data-focused startups that are forging new markets to get their insight on how today’s SaaS companies are leveraging data to build new companies, attack new problems, and, of course, scale like mad.

What Happens After Your Startup is Acquired
with Jyoti Bansal (Harness), Nick Mehta (GainSight)

  • We’ll speak to three founders about the emotional upheaval of being acquired and what happens after the check clears and the sale closes. Our panel includes Jyoti Bansal, who founded AppDynamics; Jewel Burkes Solomon, who founded Partpic; and Nick Mehta from GainSight.

How Confluent Rode the Open Source Wave to IPO
with Jay Kreps (Confluent)

  • Confluent, the streaming platform built on top of Apache Kafka, was born out of a project at LinkedIn and rode that wave from startup to IPO. We’ll speak to co-founder and CEO Jay Kreps to learn what that journey was like.

We’ll have more sessions and names shortly, so stay tuned. But get excited in the meantime; we certainly are.

Pro tip: Keep your finger on the pulse of TC Sessions: SaaS. Get updates when we announce new speakers, add events and offer ticket discounts.

Why should you carve a day out of your hectic schedule to attend TC Sessions: SaaS? This may be the first year we’ve focused on SaaS, but this ain’t our first rodeo. Here’s what other attendees have to say about their TC Sessions experience.

“TC Sessions: Mobility offers several big benefits. First, networking opportunities that result in concrete partnerships. Second, the chance to learn the latest trends and how mobility will evolve. Third, the opportunity for unknown startups to connect with other mobility companies and build brand awareness.” — Karin Maake, senior director of communications at FlashParking.

“People want to be around what’s interesting and learn what trends and issues they need to pay attention to. Even large companies like GM and Ford were there, because they’re starting to see the trend move toward mobility. They want to learn from the experts, and TC Sessions: Mobility has all the experts.” — Melika Jahangiri, vice president at Wunder Mobility.

TC Sessions: SaaS 2021 takes place on October 27. Grab your team, join your community and create opportunity. Don’t wait — jump on the early bird ticket sale right now.

A mathematician walks into a bar (of disinformation)

By Danny Crichton

Disinformation, misinformation, infotainment, algowars — if the debates over the future of media the past few decades have meant anything, they’ve at least left a pungent imprint on the English language. There’s been a lot of invective and fear over what social media is doing to us, from our individual psychologies and neurologies to wider concerns about the strength of democratic societies. As Joseph Bernstein put it recently, the shift from “wisdom of the crowds” to “disinformation” has indeed been an abrupt one.

What is disinformation? Does it exist, and if so, where is it and how do we know we are looking at it? Should we care about what the algorithms of our favorite platforms show us as they strive to squeeze the prune of our attention? It’s just those sorts of intricate mathematical and social science questions that got Noah Giansiracusa interested in the subject.

Giansiracusa, a professor at Bentley University in Boston, is trained in mathematics (focusing his research in areas like algebraic geometry), but he’s also had a penchant for looking at social topics through a mathematical lens, such as connecting computational geometry to the Supreme Court. Most recently, he’s published a book called How Algorithms Create and Prevent Fake News to explore some of the challenging questions around the media landscape today and how technology is exacerbating and ameliorating those trends.

I hosted Giansiracusa on a Twitter Space recently, and since Twitter hasn’t made it easy to listen to these talks afterwards (ephemerality!), I figured I’d pull out the most interesting bits of our conversation for you and posterity.

This interview has been edited and condensed for clarity.

Danny Crichton: How did you decide to research fake news and write this book?

Noah Giansiracusa: One thing I noticed is there’s a lot of really interesting sociological, political science discussion of fake news and these types of things. And then on the technical side, you’ll have things like Mark Zuckerberg saying AI is going to fix all these problems. It just seemed like, it’s a little bit difficult to bridge that gap.

Everyone’s probably heard this recent quote of Biden saying, “they’re killing people,” in regards to misinformation on social media. So we have politicians speaking about these things where it’s hard for them to really grasp the algorithmic side. Then we have computer science people that are really deep in the details. So I’m kind of sitting in between, I’m not a real hardcore computer science person. So I think it’s a little easier for me to just step back and get the bird’s eye view.

At the end of the day, I just felt I kind of wanted to explore some more interactions with society where things get messy, where the math is not so clean.

Crichton: Coming from a mathematical background, you’re entering this contentious area where a lot of people have written from a lot of different angles. What are people getting right in this area, and where have they perhaps missed some nuance?

Giansiracusa: There’s a lot of incredible journalism, I was blown away at how a lot of journalists really were able to deal with pretty technical stuff. But I would say one thing that maybe they didn’t get wrong, but kind of struck me was, there’s a lot of times when an academic paper comes out, or even an announcement from Google or Facebook or one of these tech companies, and they’ll kind of mention something, and the journalist will maybe extract a quote, and try to describe it, but they seem a little bit afraid to really try to look and understand it. And I don’t think it’s that they weren’t able to, it really seems like more of an intimidation and a fear.

One thing I’ve experienced a ton as a math teacher is people are so afraid of saying something wrong and making a mistake. And this goes for journalists who have to write about technical things, they don’t want to say something wrong. So it’s easier to just quote a press release from Facebook or quote an expert.

One thing that’s so fun and beautiful about pure math, is you don’t really worry about being wrong, you just try ideas and see where they lead and you see all these interactions. When you’re ready to write a paper or give a talk, you check the details. But most of math is this creative process where you’re exploring, and you’re just seeing how ideas interact. My training as a mathematician you think would make me apprehensive about making mistakes and to be very precise, but it kind of had the opposite effect.

Second, a lot of these algorithmic things, they’re not as complicated as they seem. I’m not sitting there implementing them, I’m sure to program them is hard. But just the big picture, all these algorithms nowadays, so much of these things are based on deep learning. So you have some neural net, doesn’t really matter to me as an outsider what architecture they’re using, all that really matters is, what are the predictors? Basically, what are the variables that you feed this machine learning algorithm? And what is it trying to output? Those are things that anyone can understand.
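To make that “predictors in, prediction out” framing concrete, here is a minimal, hypothetical sketch; the feature names, data and model are invented for illustration and are not drawn from any platform’s actual system.

```python
# A toy "will the user click this video?" model seen from the outside.
# What a non-specialist needs to grasp is the inputs (predictors) and the
# output being optimized, not the internal architecture.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical predictors: minutes watched on similar videos, account age in
# days, shares in the last week. Entirely made up for illustration.
X = np.array([
    [120, 400, 2],
    [5,   30,  0],
    [300, 900, 9],
    [15,  60,  1],
])
# Hypothetical output the platform optimizes for: did the user click? (1/0)
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(model.predict([[200, 500, 4]]))  # predicted click / no-click
```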

Crichton: One of the big challenges I think of analyzing these algorithms is the lack of transparency. Unlike, say, the pure math world which is a community of scholars working to solve problems, many of these companies can actually be quite adversarial about supplying data and analysis to the wider community.

Giansiracusa: It does seem there’s a limit to what anyone can deduce just by kind of being from the outside.

So a good example is with YouTube, teams of academics wanted to explore whether the YouTube recommendation algorithm sends people down these conspiracy theory rabbit holes of extremism. The challenge is that because this is the recommendation algorithm, it’s using deep learning, it’s based on hundreds and hundreds of predictors based on your search history, your demographics, the other videos you’ve watched and for how long — all these things. It’s so customized to you and your experience, that all the studies I was able to find use incognito mode.

So they’re basically a user who has no search history, no information and they’ll go to a video and then click the first recommended video then the next one. And let’s see where the algorithm takes people. That’s such a different experience than an actual human user with a history. And this has been really difficult. I don’t think anyone has figured out a good way to algorithmically explore the YouTube algorithm from the outside.

Honestly, the only way I think you could do it is just kind of like an old school study where you recruit a whole bunch of volunteers and sort of put a tracker on their computer and say, “Hey, just live life the way you normally do with your histories and everything and tell us the videos that you’re watching.” So it’s been difficult to get past this fact that a lot of these algorithms, almost all of them, I would say, are so heavily based on your individual data. We don’t know how to study that in the aggregate.

And it’s not just that me or anyone else on the outside who has trouble because we don’t have the data. It’s even people within these companies who built the algorithm and who know how the algorithm works on paper, but they don’t know how it’s going to actually behave. It’s like Frankenstein’s monster: they built this thing, but they don’t know how it’s going to operate. So the only way I think you can really study it is if people on the inside with that data go out of their way and spend time and resources to study it.

Crichton: There are a lot of metrics used around evaluating misinformation and determining engagement on a platform. Coming from your mathematical background, do you think those measures are robust?

Giansiracusa: People try to debunk misinformation. But in the process, they might comment on it, they might retweet it or share it, and that counts as engagement. So a lot of these measurements of engagement, are they really looking at positive or just all engagement? You know, it kind of all gets lumped together?

This happens in academic research, too. Citations are the universal metric of how successful research is. Well, really bogus things like Wakefield’s original autism and vaccines paper got tons of citations, a lot of them were people citing it because they thought it’s right, but a lot of it was scientists who were debunking it, they cite it in their paper to say, we demonstrate that this theory is wrong. But somehow a citation is a citation. So it all counts towards the success metric.

So I think that’s a bit of what’s happening with engagement. If I post something on my comments saying, “Hey, that’s crazy,” how does the algorithm know if I’m supporting it or not? They could use some AI language processing to try but I’m not sure if they are, and it’s a lot of effort to do so.
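A small sketch of the measurement problem he describes: if every interaction counts the same, a debunking comment and a supportive share are indistinguishable in the headline engagement number. The records below are invented, and the stance labels are exactly the information a raw count does not have.

```python
# Toy illustration: raw engagement cannot distinguish support from debunking.
interactions = [
    {"type": "share",   "stance": "support"},
    {"type": "comment", "stance": "debunk"},   # e.g. "Hey, that's crazy"
    {"type": "retweet", "stance": "debunk"},
    {"type": "like",    "stance": "support"},
    {"type": "comment", "stance": "debunk"},
]

raw_engagement = len(interactions)  # what the headline metric usually reports
supportive = sum(1 for i in interactions if i["stance"] == "support")
debunking = sum(1 for i in interactions if i["stance"] == "debunk")

print(f"raw engagement: {raw_engagement}")                  # 5 -- looks "successful"
print(f"supportive: {supportive}, debunking: {debunking}")  # the split a raw count hides
# Splitting by stance requires labels (human or NLP) that raw counts lack.
```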

Crichton: Lastly, I want to talk a bit about GPT-3 and the concern around synthetic media and fake news. There’s a lot of fear that AI bots will overwhelm media with disinformation — how scared or not scared should we be?

Giansiracusa: Because my book really grew out of a class from experience, I wanted to try to stay impartial, and just kind of inform people and let them reach their own decisions. I decided to try to cut through that debate and really let both sides speak. I think the newsfeed algorithms and recommendation algorithms do amplify a lot of harmful stuff, and that is devastating to society. But there’s also a lot of amazing progress of using algorithms productively and successfully to limit fake news.

There’s these techno-utopians, who say that AI is going to fix everything, we’ll have truth-telling, and fact-checking and algorithms that can detect misinformation and take it down. There’s some progress, but that stuff is not going to happen, and it never will be fully successful. It’ll always need to rely on humans. But the other thing we have is kind of irrational fear. There’s this kind of hyperbolic AI dystopia where algorithms are so powerful, kind of like singularity type of stuff that they’re going to destroy us.

When deep fakes were first hitting the news in 2018, and GPT-3 had been released a couple years ago, there was a lot of fear that, “Oh shit, this is gonna make all our problems with fake news and understanding what’s true in the world much, much harder.” And I think now that we have a couple of years of distance, we can see that they’ve made it a little harder, but not nearly as significantly as we expected. And the main issue is kind of more psychological and economic than anything.

So the original authors of GPT-3 have a research paper that introduces the algorithm, and one of the things they did was a test where they pasted some text in and expanded it to an article, and then they had some volunteers evaluate and guess which is the algorithmically-generated one and which article is the human-generated one. They reported that they got very, very close to 50% accuracy, which means barely above random guesses. So that sounds, you know, both amazing and scary.
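To see why accuracy hovering around 50% on a two-way guess amounts to little more than chance, here is a quick simulation; the 52% evaluator figure is a placeholder for “barely above a coin flip” and is not taken from the paper.

```python
import random

# Simulate evaluators guessing which of two articles is machine-generated.
# If the generated text is indistinguishable, accuracy converges toward 50%.
random.seed(0)
trials = 10_000

evaluator_skill = 0.52  # hypothetical: barely better than chance
evaluator_correct = sum(random.random() < evaluator_skill for _ in range(trials))
coin_flip_correct = sum(random.random() < 0.5 for _ in range(trials))

print(f"evaluators: {evaluator_correct / trials:.1%}")  # ~52%
print(f"coin flip:  {coin_flip_correct / trials:.1%}")  # ~50%
# The gap is small enough that, statistically, evaluators barely beat guessing.
```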

But if you look at the details, they were extending like a one-line headline to a paragraph of text. If you tried to do a full Atlantic-length or New Yorker-length article, you’re gonna start to see the discrepancies, the thought is going to meander. The authors of this paper didn’t mention this, they just kind of did their experiment and said, “Hey, look how successful it is.”

So it looks convincing, they can make these impressive articles. But here’s the main reason, at the end of the day, why GPT-3 hasn’t been so transformative as far as fake news and misinformation and all this stuff is concerned. It’s because fake news is mostly garbage. It’s poorly written, it’s low quality, it’s so cheap and fast to crank out, you could just pay your 16-year-old nephew to just crank out a bunch of fake news articles in minutes.

It’s not so much that math helped me see this. It’s just that somehow, the main thing we’re trying to do in mathematics is to be skeptical. So you have to question these things and be a little skeptical.

Apple’s CSAM detection tech is under fire — again

By Zack Whittaker

Apple has encountered monumental backlash to a new child sexual abuse material (CSAM) detection technology it announced earlier this month. The system, which Apple calls NeuralHash, has yet to be activated for its billion-plus users, but the technology is already facing heat from security researchers who say the algorithm is producing flawed results.

NeuralHash is designed to identify known CSAM on a user’s device without having to possess the image or knowing the contents of the image. Because a user’s photos stored in iCloud are end-to-end encrypted so that even Apple can’t access the data, NeuralHash instead scans for known CSAM on a user’s device, which Apple claims is more privacy friendly, as it limits the scanning to just photos rather than scanning all of a user’s files, as other companies do.

Apple does this by looking for images on a user’s device that have the same hash, a string of letters and numbers that can uniquely identify an image, as hashes provided by child protection organizations like NCMEC. If NeuralHash finds 30 or more matching hashes, the images are flagged to Apple for a manual review before the account owner is reported to law enforcement. Apple says the chance of a false positive is about one in one trillion accounts.
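As a rough sketch of the matching logic described above, not Apple’s actual NeuralHash (which is a proprietary perceptual hash): an ordinary cryptographic hash stands in, and the hash database and image bytes are placeholders; only the 30-match threshold comes from the article.

```python
import hashlib

# Placeholder set of known hashes; in reality these are supplied by
# child protection organizations such as NCMEC.
known_hashes = {
    hashlib.sha256(b"example-known-image-1").hexdigest(),
    hashlib.sha256(b"example-known-image-2").hexdigest(),
}

MATCH_THRESHOLD = 30  # threshold reported in the article before manual review

def count_matches(device_images):
    """Count on-device images whose hash appears in the known-hash set."""
    return sum(
        hashlib.sha256(img).hexdigest() in known_hashes
        for img in device_images
    )

device_images = [b"example-known-image-1", b"holiday-photo", b"cat-photo"]
matches = count_matches(device_images)
if matches >= MATCH_THRESHOLD:
    print("flag account for manual review")
else:
    print(f"{matches} match(es); below threshold, nothing is flagged")
```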

But security experts and privacy advocates have expressed concern that the system could be abused by highly resourced actors, like governments, to implicate innocent victims or to manipulate the system to detect other materials that authoritarian nation states find objectionable. NCMEC called critics the “screeching voices of the minority,” according to a leaked memo distributed internally to Apple staff.

Last night, Asuhariet Ygvar reverse-engineered Apple’s NeuralHash into a Python script and published the code to GitHub, allowing anyone to test the technology regardless of whether they have an Apple device. In a Reddit post, Ygvar said NeuralHash “already exists” in iOS 14.3 as obfuscated code, but that he was able to reconstruct the technology to help other security researchers understand the algorithm better before it’s rolled out to iOS and macOS devices later this year.

It didn’t take long before others tinkered with the published code and soon came the first reported case of a “hash collision,” which in NeuralHash’s case is where two entirely different images produce the same hash. Cory Cornelius, a well-known research scientist at Intel Labs, discovered the hash collision. Ygvar confirmed the collision a short time later.

Hash collisions can be a death knell for systems that rely on hashing to identify data or keep it secure. Over the years, several well-known cryptographic hash algorithms, like MD5 and SHA-1, were retired after collision attacks rendered them ineffective.
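A collision here simply means two different inputs producing the same digest. A minimal sketch of the check, using Python’s standard hashlib; the two example inputs do not actually collide, they only show what the comparison tests.

```python
import hashlib

def is_collision(a: bytes, b: bytes, algo: str = "md5") -> bool:
    """True only if two *different* inputs produce the same digest."""
    if a == b:
        return False  # identical inputs matching is expected, not a collision
    return hashlib.new(algo, a).digest() == hashlib.new(algo, b).digest()

# These inputs do not collide; real MD5 collisions use carefully crafted
# byte sequences from published attacks, omitted here.
print(is_collision(b"image-A", b"image-B"))  # False
```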

Kenneth White, a cryptography expert and founder of the Open Crypto Audit Project, said in a tweet: “I think some people aren’t grasping that the time between the iOS NeuralHash code being found and [the] first collision was not months or days, but a couple of hours.”

When reached, an Apple spokesperson declined to comment on the record. But in a background call where reporters were not allowed to quote executives directly or by name, Apple downplayed the hash collision and argued that the protections it puts in place — such as a manual review of photos before they are reported to law enforcement — are designed to prevent abuses. Apple also said that the version of NeuralHash that was reverse-engineered is a generic version, and not the complete version that will roll out later this year.

It’s not just civil liberties groups and security experts that are expressing concern about the technology. A senior lawmaker in the German parliament sent a letter to Apple chief executive Tim Cook this week saying that the company is walking down a “dangerous path” and urged Apple not to implement the system.

How the law got it wrong with Apple Card

By Ram Iyer
Liz O'Sullivan, Contributor
Liz O’Sullivan is CEO of Parity, a platform that automates model risk and algorithmic governance for the enterprise. She also advises the Surveillance Technology Oversight Project and the Campaign to Stop Killer Robots on all things artificial intelligence.

Advocates of algorithmic justice have begun to see their proverbial “days in court” with legal investigations of enterprises like UHG and Apple Card. The Apple Card case is a strong example of how current anti-discrimination laws fall short of the fast pace of scientific research in the emerging field of quantifiable fairness.

While it may be true that Apple and their underwriters were found innocent of fair lending violations, the ruling came with clear caveats that should be a warning sign to enterprises using machine learning within any regulated space. Unless executives begin to take algorithmic fairness more seriously, their days ahead will be full of legal challenges and reputational damage.

What happened with Apple Card?

In late 2019, startup leader and social media celebrity David Heinemeier Hansson raised an important issue on Twitter, to much fanfare and applause. With almost 50,000 likes and retweets, he asked Apple and their underwriting partner, Goldman Sachs, to explain why he and his wife, who share the same financial ability, would be granted different credit limits. To many in the field of algorithmic fairness, it was a watershed moment to see the issues we advocate go mainstream, culminating in an inquiry from the NY Department of Financial Services (DFS).

At first glance, it may seem heartening to credit underwriters that the DFS concluded in March that Goldman’s underwriting algorithm did not violate the strict rules of financial access created in 1974 to protect women and minorities from lending discrimination. While disappointing to activists, this result was not surprising to those of us working closely with data teams in finance.

There are some algorithmic applications for financial institutions where the risks of experimentation far outweigh any benefit, and credit underwriting is one of them. We could have predicted that Goldman would be found innocent, because the laws for fairness in lending (if outdated) are clear and strictly enforced.

And yet, there is no doubt in my mind that the Goldman/Apple algorithm discriminates, along with every other credit scoring and underwriting algorithm on the market today. Nor do I doubt that these algorithms would fall apart if researchers were ever granted access to the models and data we would need to validate this claim. I know this because the NY DFS partially released its methodology for vetting the Goldman algorithm, and as you might expect, their audit fell far short of the standards held by modern algorithm auditors today.

How did DFS (under current law) assess the fairness of Apple Card?

In order to prove the Apple algorithm was “fair,” DFS considered first whether Goldman had used “prohibited characteristics” of potential applicants like gender or marital status. This one was easy for Goldman to pass — they don’t include race, gender or marital status as an input to the model. However, we’ve known for years now that some model features can act as “proxies” for protected classes.

The DFS methodology, based on 50 years of legal precedent, failed to mention whether they considered this question, but we can guess that they did not. Because if they had, they’d have quickly found that credit score is so tightly correlated to race that some states are considering banning its use for casualty insurance. Proxy features have only stepped into the research spotlight recently, giving us our first example of how science has outpaced regulation.
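One simple way auditors screen for the proxy problem described here is to measure how strongly a candidate model input tracks a protected attribute. A minimal sketch with invented data follows; real audits use more careful statistics, larger samples and, crucially, access to the protected attribute in the first place.

```python
import numpy as np

# Invented data: 1 = member of a protected group, 0 = not.
protected = np.array([1, 1, 0, 0, 1, 0, 0, 1])
# A candidate model feature (e.g. a credit score) that may act as a proxy.
credit_score = np.array([580, 600, 720, 700, 590, 710, 730, 610])

corr = np.corrcoef(protected, credit_score)[0, 1]
print(f"correlation with protected attribute: {corr:.2f}")

# A strong correlation means the model can effectively reconstruct the
# protected class from this feature, even though the class was never an input.
if abs(corr) > 0.5:  # threshold chosen purely for illustration
    print("flag: feature may be acting as a proxy")
```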

In the absence of protected features, DFS then looked for credit profiles that were similar in content but belonged to people of different protected classes. In a certain imprecise sense, they sought to find out what would happen to the credit decision were we to “flip” the gender on the application. Would a female version of the male applicant receive the same treatment?

Intuitively, this seems like one way to define “fair.” And it is — in the field of machine learning fairness, there is a concept called a “flip test” and it is one of many measures of a concept called “individual fairness,” which is exactly what it sounds like. I asked Patrick Hall, principal scientist at bnh.ai, a leading boutique AI law firm, about the analysis most common in investigating fair lending cases. Referring to the methods DFS used to audit Apple Card, he called it basic regression, or “a 1970s version of the flip test,” bringing us example number two of our insufficient laws.
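A minimal sketch of the flip-test idea mentioned above: hold every input fixed, flip only the protected attribute, and compare the decisions. The model and features here are invented stand-ins, deliberately unfair so the test has something to catch; a real audit would run this against the lender’s actual model across many applicants.

```python
def credit_limit(applicant: dict) -> int:
    """Stand-in for a lender's model; invented logic for illustration only."""
    base = applicant["income"] // 10 + applicant["credit_score"] // 20
    # Deliberately unfair rule so the flip test has something to detect:
    return base - (5_000 if applicant["gender"] == "female" else 0)

applicant = {"income": 150_000, "credit_score": 780, "gender": "male"}
flipped = {**applicant, "gender": "female"}  # identical except the flipped attribute

original_limit = credit_limit(applicant)
flipped_limit = credit_limit(flipped)

print(original_limit, flipped_limit)
# Individual fairness (flip-test flavor): the decision should not change when
# only the protected attribute changes.
print("flip test passed" if original_limit == flipped_limit else "flip test failed")
```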

A new vocabulary for algorithmic fairness

Ever since Solon Barocas’ seminal paper “Big Data’s Disparate Impact” in 2016, researchers have been hard at work to define core philosophical concepts into mathematical terms. Several conferences have sprung into existence, with new fairness tracks emerging at the most notable AI events. The field is in a period of hypergrowth, where the law has as of yet failed to keep pace. But just like what happened to the cybersecurity industry, this legal reprieve won’t last forever.

Perhaps we can forgive DFS for its softball audit given that the laws governing fair lending are born of the civil rights movement and have not evolved much in the 50-plus years since inception. The legal precedents were set long before machine learning fairness research really took off. If DFS had been appropriately equipped to deal with the challenge of evaluating the fairness of the Apple Card, they would have used the robust vocabulary for algorithmic assessment that’s blossomed over the last five years.

The DFS report, for instance, makes no mention of measuring “equalized odds,” a notorious line of inquiry first made famous in 2018 by Joy Buolamwini, Timnit Gebru and Deb Raji. Their “Gender Shades” paper proved that facial recognition algorithms guess wrong on dark female faces more often than they do on subjects with lighter skin, and this reasoning holds true for many applications of prediction beyond computer vision alone.

Equalized odds would ask of Apple’s algorithm: Just how often does it predict creditworthiness correctly? How often does it guess wrong? Are there disparities in these error rates among people of different genders, races or disability status? According to Hall, these measurements are important, but simply too new to have been fully codified into the legal system.
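In code, the equalized-odds questions above reduce to comparing error rates across groups. A hedged sketch with invented labels and predictions:

```python
import numpy as np

# Invented data: 1 = truly creditworthy, 0 = not; y_pred is some model's decision.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def error_rates(mask):
    yt, yp = y_true[mask], y_pred[mask]
    fnr = np.mean(yp[yt == 1] == 0)  # creditworthy applicants wrongly rejected
    fpr = np.mean(yp[yt == 0] == 1)  # non-creditworthy applicants wrongly approved
    return fnr, fpr

for g in ("a", "b"):
    fnr, fpr = error_rates(group == g)
    print(f"group {g}: false negative rate {fnr:.2f}, false positive rate {fpr:.2f}")

# Equalized odds asks that these rates be (approximately) equal across groups;
# large gaps mean one group absorbs more of the model's mistakes.
```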

If it turns out that Goldman regularly underestimates female applicants in the real world, or assigns interest rates that are higher than Black applicants truly deserve, it’s easy to see how this would harm these underserved populations at national scale.

Financial services’ Catch-22

Modern auditors know that the methods dictated by legal precedent fail to catch nuances in fairness for intersectional combinations within minority categories — a problem that’s exacerbated by the complexity of machine learning models. If you’re Black, a woman and pregnant, for instance, your likelihood of obtaining credit may be lower than the average of the outcomes among each overarching protected category.

These underrepresented groups may never benefit from a holistic audit of the system without special attention paid to their uniqueness, given that the sample size of minorities is by definition a smaller number in the set. This is why modern auditors prefer “fairness through awareness” approaches that allow us to measure results with explicit knowledge of the demographics of the individuals in each group.

But there’s a Catch-22. In financial services and other highly regulated fields, auditors often can’t use “fairness through awareness,” because they may be prevented from collecting sensitive information from the start. The goal of this legal constraint was to prevent lenders from discriminating. In a cruel twist of fate, this gives cover to algorithmic discrimination, giving us our third example of legal insufficiency.

The fact that we can’t collect this information hamstrings our ability to find out how models treat underserved groups. Without it, we might never prove what we know to be true in practice — full-time moms, for instance, will reliably have thinner credit files, because they don’t execute every credit-based purchase under both spousal names. Minority groups may be far more likely to be gig workers, tipped employees or participate in cash-based industries, leading to commonalities among their income profiles that prove less common for the majority.

Importantly, these differences on the applicants’ credit files do not necessarily translate to true financial responsibility or creditworthiness. If it’s your goal to predict creditworthiness accurately, you’d want to know where the method (e.g., a credit score) breaks down.

What this means for businesses using AI

In Apple’s example, it’s worth mentioning a hopeful epilogue to the story where Apple made a consequential update to their credit policy to combat the discrimination that is protected by our antiquated laws. In Apple CEO Tim Cook’s announcement, he was quick to highlight a “lack of fairness in the way the industry [calculates] credit scores.”

Their new policy allows spouses or parents to combine credit files such that the weaker credit file can benefit from the stronger. It’s a great example of a company thinking ahead to steps that may actually reduce the discrimination that exists structurally in our world. In updating their policies, Apple got ahead of the regulation that may come as a result of this inquiry.

This is a strategic advantage for Apple, because NY DFS made exhaustive mention of the insufficiency of current laws governing this space, meaning updates to regulation may be nearer than many think. To quote Superintendent of Financial Services Linda A. Lacewell: “The use of credit scoring in its current form and laws and regulations barring discrimination in lending are in need of strengthening and modernization.” In my own experience working with regulators, this is something today’s authorities are very keen to explore.

I have no doubt that American regulators are working to improve the laws that govern AI, taking advantage of this robust vocabulary for equality in automation and math. The Federal Reserve, OCC, CFPB, FTC and Congress are all eager to address algorithmic discrimination, even if their pace is slow.

In the meantime, we have every reason to believe that algorithmic discrimination is rampant, largely because the industry has also been slow to adopt the language of academia that the last few years have brought. Little excuse remains for enterprises failing to take advantage of this new field of fairness, and to root out the predictive discrimination that is in some ways guaranteed. And the EU agrees, with draft laws that apply specifically to AI that are set to be adopted some time in the next two years.

The field of machine learning fairness has matured quickly, with new techniques discovered every year and myriad tools to help. The field is only now reaching a point where this can be prescribed with some degree of automation. Standards bodies have stepped in to provide guidance to lower the frequency and severity of these issues, even if American law is slow to adopt.

Because whether or not discrimination by algorithm is intentional, it is illegal. So anyone using advanced analytics for applications relating to healthcare, housing, hiring, financial services, education or government is likely breaking these laws without knowing it.

Until clearer regulatory guidance becomes available for the myriad applications of AI in sensitive situations, the industry is on its own to figure out which definitions of fairness are best.

All the Ways Spotify Tracks You—and How to Stop It

By Matt Burgess, WIRED UK
Whether you're listening to workout music or a "cooking dinner" playlist, the app can show you ads based on your mood and what you're doing right now.