Data was the new oil, until the oil caught fire

By Danny Crichton

We’ve been hearing how “data is the new oil” for more than a decade now, and in certain sectors, it’s a maxim that has more than panned out. From marketing and logistics to finance and product, decision-making is now dominated by data at all levels of most big private orgs (and if it isn’t, I’d be getting a résumé put together, stat).

So it might be something of a surprise to learn that data, which could transform how we respond to the increasingly deadly disasters that regularly plague us, has been all but absent from much of emergency response this past decade. Far from tapping a geyser of digital oil, disaster response agencies and private organizations alike have spent years trying to expand the scope and scale of the data flowing into disaster response, with relatively meager results.

That’s starting to change though, mostly thanks to the internet of things (IoT), and frontline crisis managers today increasingly have the data they need to make better decisions across the resilience, response, and recovery cycle. And the best is yet to come: with drones taking to the air, simulated visualizations growing richer, and artificial intelligence entering the field, what we’re seeing today on the frontlines is only the beginning of what could be a revolution in disaster response in the 2020s.

The long-awaited disaster data deluge has finally arrived

Emergency response is a fight against the fog of war and the dreadful ticking of the clock. In the midst of a wildfire or hurricane, everything can change in a matter of seconds — even milliseconds if you aren’t paying attention. Safe roads ferrying evacuees can suddenly become impassable infernos, evacuation teams can reposition and find themselves spread far too thin, and unforeseen conditions can rapidly metastasize to cover the entire operating environment. An operations center that once had perfect information can quickly find it has no ground truth at all.

Unfortunately, even getting raw data on what’s happening before and during a disaster can be extraordinarily difficult. When we look at the data revolution in business, part of the early success stems from the fact that companies were always heavily reliant on data to handle their activities. Digitalization was and is the key word: moving from paper to computers in order to transform latent raw data into a form that was machine-readable and therefore analyzable. In business, the last ten years were basically an upgrade from version one to version two.

In emergency management, however, many agencies are stuck without a version at all. Take a flood — where is the water and where is it going? Up until recently, there was no comprehensive data on where waters rose from and where they sloshed to. When it came to wildfires, there were no administrative datasets on where every tree in the world was located and how prone each is to fire. Even human infrastructure like power lines and cell towers often had little interface with the digital world. They stood there, and if you couldn’t see them, they couldn’t see you.

Flood modeling is on the cutting edge of disaster planning and response. Image Credits: CHANDAN KHANNA/AFP via Getty Images

Models, simulations, predictions, analysis: all of these are useless without raw data, and in the disaster response realm, there was no detailed data to be found.

After years of promises, things are finally internet-izing, with IoT sensors increasingly blanketing the American and global landscape. Temperature, atmospheric pressure, water level, humidity, pollution, power, and other sensors have been widely deployed, emitting constant streams of data back into data warehouses ready for analysis.
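
To make that concrete, here is a minimal Python sketch of the kind of reading such a sensor might emit and push toward a warehouse. The payload fields, sensor name, and transport are invented for illustration; they are not any vendor’s actual format.

```python
import json
import random
import time
from datetime import datetime, timezone

def read_sensor(sensor_id: str) -> dict:
    """Simulate one reading from a field sensor (values are random stand-ins)."""
    return {
        "sensor_id": sensor_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "temperature_c": round(random.uniform(10, 45), 2),
        "humidity_pct": round(random.uniform(5, 95), 2),
        "pm25_ugm3": round(random.uniform(0, 250), 1),  # PM2.5, a wildfire smoke signal
    }

def emit(reading: dict) -> None:
    """Stand-in for publishing to a message queue or warehouse ingest API."""
    print(json.dumps(reading))

if __name__ == "__main__":
    for _ in range(3):
        emit(read_sensor("ridge-07"))
        time.sleep(1)  # real deployments batch readings and sleep to save battery
```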

Take wildfires in the American West. It wasn’t all that long ago that the U.S. federal government and state firefighting agencies had no knowledge of where a blaze was taking place. Firefighting has been “100 years of tradition unimpeded by progress,” as Tom Harbour, head of fire response at the U.S. Forest Service for a decade and now chief fire officer at Cornea, put it.

And he’s right. After all, firefighting is a visceral activity — responders can see the fires, even feel the burning heat against their skin. Data wasn’t useful, particularly in the West, where there are millions of acres of land and large swaths are sparsely populated. Massive conflagrations could be detected by satellites, but smoldering fires in the brush would be entirely invisible to the geospatial authorities. There’s smoke over California — exactly what is a firefighter on the ground supposed to do with such valuable information?

Today, after a decade of speculative promise, IoT sensors are starting to clear a huge part of this fog. Aaron Clark-Ginsberg, a social scientist at RAND Corporation who researches community resilience, said that air quality sensors have become ubiquitous since they are “very cheap [and] pretty easy to use” and can offer very fine-grained understandings of pollution — a key signal, for instance, of wildfires. He pointed to the company Purple Air, which in addition to making sensors also produces a popular consumer map of air quality, as indicative of the technology’s potential.

Maps are the critical intersection for data in disasters. Geospatial information systems (GIS) form the basis for most planning and response teams, and no company has a larger footprint in the sector than privately-held Esri. Ryan Lanclos, who leads public safety solutions at the company, pointed to the huge expansion of water sensors as radically changing responses to certain disasters. “Flood sensors are always pulsing,” he said, and with a “national water model coming out of the federal government,” researchers can now predict through GIS analysis how a flood will affect different communities with a precision unheard of previously.

Digital maps and GIS systems are increasingly vital for disaster planning and response, but paper still remains quite ubiquitous. Image Credits: Paul Kitagaki Jr.-Pool/Getty Images

Cory Davis, the director of public safety strategy and crisis response at Verizon (which, through our parent company Verizon Media, is TechCrunch’s ultimate owner), said that all of these sensors have transformed how crews work to maintain infrastructure as well. “Think like a utility that is able to put a sensor on a power line — now they have sensors and get out there quicker, resolve it, and get the power back up.”

He noted one major development that has transformed sensors in this space over the last few years: battery life. Thanks to continuous improvements in ultra-low-power wireless chips as well as better batteries and energy management systems, sensors can last a very long time in the wilderness without the need for maintenance. “Now we have devices that have ten-year battery lives,” he said. That’s critical, because it can be impossible to connect these sensors to the power grid in frontier areas.
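
The arithmetic behind a ten-year battery life is simple duty-cycling; here is a back-of-envelope sketch with illustrative numbers, not figures quoted by Verizon:

```python
# Back-of-envelope check on a ten-year battery life (illustrative numbers only).
battery_capacity_mah = 19_000   # e.g., a high-capacity lithium primary D-cell
avg_current_ma = 0.2            # duty-cycled sensor: asleep most of the time

hours = battery_capacity_mah / avg_current_ma
print(f"{hours / (24 * 365):.1f} years")  # -> 10.8 years
```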

The same line of thinking holds true at T-Mobile as well. When it comes to preventative planning, Jay Naillon, senior director of national technology service operations strategy at the telco, said that “the type of data that is becoming more and more valuable for us is the storm surge data — it can make it easier to know we have the right assets in place.” That data comes from flood sensors that can offer real-time warning signals to planners across the country.

Telecom interest — and commercial interest in general — has been critical to accelerating the adoption of sensors and other data streams around disasters. While governments may be the logical end user of flood or wildfire data, they aren’t the only ones interested in this visibility. “A lot of consumers of that information are in the private sector,” said Jonathan Sury, project director at the National Center for Disaster Preparedness at the Earth Institute at Columbia University. “These new types of risks, like climate change, are going to affect their bottom lines,” and he pointed to bond ratings, insurance underwriting and other areas where commercial interest in sensor data has been profound.

Sensors may not literally be ubiquitous, but they have opened a window onto conditions that emergency managers never had visibility into before.

Finally, there are the extensive datasets around mobile usage that have become available throughout much of the world. Facebook’s Data for Good project, for instance, provides data layers around connectivity — are users connecting from one place and then later connecting from a different location, indicating displacement? That sort of data from the company and from telcos themselves can help emergency planners scout out how populations are shifting in real time.
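
A toy version of that displacement signal, assuming nothing more than anonymized before-and-after region observations (the records here are invented, not Facebook’s actual data layers), might look like this:

```python
from collections import defaultdict

# (user_id, period, region) observations; real programs would use aggregated,
# anonymized records from a telco or a program like Data for Good.
pings = [
    ("u1", "pre", "coastal_county"), ("u1", "post", "inland_county"),
    ("u2", "pre", "coastal_county"), ("u2", "post", "coastal_county"),
    ("u3", "pre", "coastal_county"), ("u3", "post", "inland_county"),
]

locations = defaultdict(dict)
for user, period, region in pings:
    locations[user][period] = region

displaced = [u for u, seen in locations.items()
             if "pre" in seen and "post" in seen and seen["pre"] != seen["post"]]

print(f"{len(displaced)} of {len(locations)} users appear displaced: {displaced}")
```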

Data, data, on the wall — how many AIs can they call?

Rivulets of data have now turned into floods of information, but just like floodwaters rising in cities across the world, the data deluge now needs a response all its own. In business, the surfeit of big data has been wrangled with an IT stack from data warehouses all the way to business intelligence tools.

If only data for disasters could be processed so easily. Data relevant for disasters is held by dozens of different organizations spanning the private, public, and non-profit sectors, leading to huge interoperability problems. Even when the data can be harmonized, there are large challenges in summarizing the findings down to an actual decision a frontline responder can use in their work — making AI a tough sale still today, particularly outside of planning. As Davis of Verizon put it, “now that they have this plethora of data, a lot of cities and federal agencies are struggling with how to use it.”
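
As a small illustration of what harmonization involves, here is a sketch that merges two invented agency feeds reporting the same river gauge with incompatible field names, units, and timestamp formats:

```python
from datetime import datetime, timezone

# Two agencies report the same kind of event with incompatible schemas.
agency_a = {"gauge": "R-12", "level_ft": 11.2, "read_at": "2021-05-01T14:00:00+00:00"}
agency_b = {"station_id": "R12", "level_m": 3.41, "epoch": 1619877600}

def from_agency_a(rec: dict) -> dict:
    """Normalize agency A: feet to meters, ISO timestamp to datetime."""
    return {
        "station": rec["gauge"].replace("-", ""),
        "level_m": round(rec["level_ft"] * 0.3048, 2),
        "observed": datetime.fromisoformat(rec["read_at"]),
    }

def from_agency_b(rec: dict) -> dict:
    """Normalize agency B: epoch seconds to datetime."""
    return {
        "station": rec["station_id"],
        "level_m": rec["level_m"],
        "observed": datetime.fromtimestamp(rec["epoch"], tz=timezone.utc),
    }

for row in [from_agency_a(agency_a), from_agency_b(agency_b)]:
    print(row)
```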

Unfortunately, standardization is a challenge at all scales. Globally, countries mostly lack interoperability, although standards are improving over time. Amir Elichai, the founder and CEO of 911 call-handling platform Carbyne, said that “from a technology standpoint and a standards standpoint, there is a big difference between countries,” noting that protocols from one country often have to be completely rewritten to serve a different market.

Tom Cotter, director of emergency response and preparedness at health care disaster response organization Project HOPE, said that even setting up communications between responders can be challenging in an international environment. “Some countries allow certain platforms but not others, and it is constantly changing,” he said. “I basically have every single technology communication platform you can possibly have in one place.”

One senior federal emergency management official acknowledged that data portability has become increasingly key in procurement contracts for technology, with the government recognizing the need to buy commercially-available software rather than custom-designed software. That message has been picked up by companies like Esri, with Lanclos stating that “part of our core mission is to be open and … create data and to share that openly to the public or securely through open standards.”

For all its downsides though, the lack of interoperability can be ironically helpful for innovation. Elichai said that the “lack of standards is an advantage — you are not buying into a legacy standard,” and in some contexts where standards are lacking, quality protocols can be built with the assumption of a modern data workflow.

Even with interoperability though, the next challenge becomes data sanitation — and disaster data is dirty as … well, something. While sensor streams can be verified and cross-checked with other datasets, in recent years there has been a heavy increase in the quantity of citizen-submitted information that has to be carefully vetted before it is disseminated to first responders or the public.

With citizens having more access to smartphones than ever, emergency planners have to sanitize uploaded data in order to verify it and make it useful. Image Credits: TONY KARUMBA/AFP via Getty Images

Bailey Farren, CEO and co-founder of disaster communications platform Perimeter, said that “sometimes citizens have the most accurate and real-time information, before first responders show up — we want citizens to share that with …government officials.” The challenge is how to filter the quality goods from the unhelpful or malicious. Raj Kamachee, the CIO of Team Rubicon, a non-profit which assembles teams of volunteer military veterans to respond to natural disasters, said that verification is critical, and it’s a key element of the infrastructure he has built at the organization since joining in 2017. “We’ve gotten more people using it so more feedback [and] more data [is] coming through the pipes,” he said. “So creating a self-service, a very collaborative approach.”
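
One hedged sketch of that vetting step (thresholds, field names, and logic are all illustrative, not Team Rubicon’s or Perimeter’s actual pipeline): auto-confirm a citizen smoke report only when a nearby sensor corroborates it, and queue everything else for human review.

```python
import math

SENSORS = {  # sensor id -> (lat, lon, smoke detected?)
    "s1": (34.05, -118.25, True),
    "s2": (34.20, -118.60, False),
}

def km_between(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance; fine at city scale."""
    dx = (lon2 - lon1) * 111.32 * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * 110.57
    return math.hypot(dx, dy)

def triage(report: dict) -> str:
    """Confirm a citizen smoke report only if a nearby sensor agrees."""
    for lat, lon, smoke in SENSORS.values():
        if smoke and km_between(report["lat"], report["lon"], lat, lon) < 5:
            return "confirmed"
    return "needs human review"

citizen_report = {"lat": 34.06, "lon": -118.24, "claim": "smoke visible"}
print(triage(citizen_report))  # -> confirmed
```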

With quality and quantity, the AI models should come, right? Well, yes and no.

Sury of Columbia wants to cool down at least some of the hype around AI. “The big caveat with all of these machine learning and big data applications is that they are not a panacea — they are able to process a lot of disparate information, [but] they’re certainly not going to tell us exactly what to do,” he said. “First responders are already processing a lot of information,” and they don’t necessarily need more guidance.

Instead, AI in disasters is increasingly focused on planning and resilience. Sury pointed to OneConcern, a resiliency planning platform, as one example of how data and AI can be combined in the disaster planning process. He also pointed to the CDC’s Social Vulnerability Index and risk tools from FEMA that integrate different data signals into scalar values that emergency planners can use to optimize their contingency plans.
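
Indices like these typically collapse several normalized indicators into a single scalar per community. Here is a minimal sketch of that pattern, with invented indicators and weights rather than the CDC’s or FEMA’s actual formulas:

```python
def composite_risk(indicators: dict, weights: dict) -> float:
    """Weighted sum of indicators that have each been normalized to [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * indicators[k] for k in weights)

tract = {"flood_exposure": 0.8, "poverty_rate": 0.55, "mobile_home_share": 0.3}
weights = {"flood_exposure": 0.5, "poverty_rate": 0.3, "mobile_home_share": 0.2}

print(f"risk score: {composite_risk(tract, weights):.3f}")  # -> 0.625
```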

Yet, almost everyone I talked to was much more hesitant about the power of AI. As I discussed a bit in part one of this series regarding the disaster sales cycle, data tools have to be real-time and perfect every time given the lives that are on the line. Kamachee of Team Rubicon noted that when choosing tools, he avoids whiz-bang features and instead looks at the pure utility of individual vendors. “We go high tech, but we prepare for low tech,” he said, emphasizing that in disaster response, everything must be agile and adaptable to changing circumstances.

Elichai of Carbyne saw this pattern in his sales. There’s a “sensitivity in our market and the reluctance from time to time to adopt” new technologies, he said, but he acknowledged that “there is no doubt that AI at a certain point will provide benefits.”

Naillon of T-Mobile had similar views from the operator perspective, saying that “I can’t say that we really leverage AI very much” in the company’s disaster planning. Instead of AI as brain, the telecom company simply uses data and forecast modeling to optimally position equipment — no fancy GANs required.

Outside of planning, AI has helped in post-disaster recovery, specifically around damage assessments. After a crisis transpires, assessments of infrastructure and private property have to be made in order for insurance claims to be filed and for a community to move forward. Art delaCruz, COO and president of Team Rubicon, noted that technology and a flourish of AI have helped significantly with damage assessments. Since his organization often helps rebuild communities in the course of its work, triaging damage is a critical element of its effective response strategy.

There’s a brighter future, other than that brightness from the sun that is going to burn us to a crisp, right?

So AI today is helping a bit with resilience planning and disaster recovery and not so much during emergency response itself, but there is certainly more to come across the entire cycle. Indeed, there is a lot of excitement about the future of drones, which are increasingly being used in the field, but there are concerns long term about whether AI and data will ultimately cause more problems than they solve.

Drones would seem to have an obvious value for disaster response, and indeed, they have been used by teams to get additional aerial footage and context where direct access by responders is limited. Kamachee of Team Rubicon noted that on a mission in the Bahamas, response teams used drones to detect survivors, since major roads were blocked. The drones snapped images that were processed using AI, helping the team identify those survivors for evacuation. He described drones and their potential as “sexy; very, very cool.”
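
In the abstract, that workflow is an inference loop over incoming frames. Here is a hedged skeleton in which detect_people is only a placeholder for a real vision model, and the frame data and coordinates are invented:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int
    lat: float
    lon: float
    confidence: float

def detect_people(frame: dict) -> list:
    """Placeholder for a trained vision model; returns a canned detection here."""
    return [Detection(frame["id"], frame["lat"], frame["lon"], confidence=0.91)]

def triage_frames(frames: list, min_confidence: float = 0.8) -> list:
    """Keep only high-confidence detections for the evacuation queue."""
    return [det for frame in frames
            for det in detect_people(frame)
            if det.confidence >= min_confidence]

frames = [{"id": 1, "lat": 26.53, "lon": -78.70}]  # invented coordinates
for det in triage_frames(frames):
    print(f"possible survivor near ({det.lat}, {det.lon}), confidence {det.confidence}")
```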

Aerial views from drones can give disaster response teams much better real-time information, particularly in areas where on-the-ground access is limited. Image Credits: Mario Tama/Getty Images

Cotter of Project HOPE similarly noted that faster data processing translates to better responses. “Ultimately speed is what saves lives in these disasters,” he said. We’re “also able to manage more responses remotely [and] don’t have to send as many people downrange,” giving response teams more leverage in resource-constrained environments.

“I see more emergency management agencies using drone technology — search and rescue, aerial photography,” Davis of Verizon said, arguing that operators often have a mentality of “send a machine into a situation first.” He continued, arguing, “artificial intelligence is going to continue to get better and better and better [and] enable our first responders to respond more effectively, but also more efficiently and safer.”

With data flooding in from sensors and drones, and processed and verified better than ever, disaster response can improve, perhaps even fast enough to keep pace with Mother Nature’s increasingly deadly whims. Yet, there is one caveat: will the AI algorithms themselves cause new problems in the future?

Clark-Ginsberg of RAND, perhaps supplying that typical RANDian alternatives analysis, said that these solutions can also create problems themselves: “technological risks leading to disaster and the world of technology facilitating disaster.” These systems can break, they can make mistakes, and more ominously — they can be sabotaged to increase chaos and damage.

Bob Kerrey, a co-chair of the 9/11 Commission, former senator and governor of Nebraska, and currently the board chairman of Risk & Return, a disaster response VC fund and philanthropy I profiled recently, pointed to cybersecurity as increasingly a wild card in many responses. “There wasn’t a concept called zero days — let alone a market for zero days — in 2004 [when the 9/11 Commission was doing its work], and now there is.” With the 9/11 terrorist attacks, “they had to come here, they had to hijack planes … now you don’t need to hijack planes to damage the United States,” noting that hackers “can be sitting with a bunch of other guys in Moscow, in Tehran, in China, or even your mother’s basement.”

Data is a revolution in the making for disaster response, but it may well cause a whole second-order set of problems that didn’t exist before. What data giveth, it taketh away. The oil gushes, but then the well suddenly runs dry — or simply catches fire.


Is Washington prepared for a geopolitical ‘tech race’?

By Danny Crichton

When Secretary of State Antony Blinken and National Security Advisor Jake Sullivan sat down with Chinese officials in Anchorage, Alaska, for the first high-level bilateral summit of the new administration, it was not a typical diplomatic meeting. Instead of a polite but restrained diplomatic exchange, the two sides traded pointed barbs for almost two hours. “There is growing consensus that the era of engagement with China has come to an unceremonious close,” wrote Sullivan and Kurt Campbell, now the Administration’s Asia czar and also in attendance, back in 2019. How apt that they were present for that moment’s arrival.

A little more than one hundred days into the Biden Administration, there is no shortage of views on how it should handle this new era of Sino-American relations. From a blue-ribbon panel assembled by former Google Chairman Eric Schmidt to a Politico essay from an anonymous former Trump Administration official that consciously echoes (in both its name and its author’s anonymity) George Kennan’s famous “Long Telegram” laying out the theory of Cold War containment, to countless think tank reports, it seems everyone is having their say.

What is largely uncontroversial though is that technology is at the center of U.S.-China relations, and any competition with China will be won or lost in the digital and cyber spheres. “Part of the goal of the Alaska meeting was to convince the Chinese that the Biden administration is determined to compete with Beijing across the board to offer competitive technology,” wrote David Sanger in the New York Times shortly afterward.

But what, exactly, does a tech-centered China strategy look like? And what would it take for one to succeed?

Tech has brought Republicans and Democrats uneasily together

One encouraging sign is that China has emerged as one of the few issues on which even Democrats agree that President Trump had some valid points. “Trump really was the spark that reframed the entire debate around U.S.-China relations in DC,” says Jordan Schneider, a China analyst at the Rhodium Group and the host of the ChinaTalk podcast and newsletter.

While many in the foreign policy community favored some degree of cooperation with China before the Trump presidency, now competition – if not outright rivalry – is widely assumed. “Democrats, even those who served in the Obama Administration, have become much more hawkish,” says Erik Brattberg of the Carnegie Endowment for International Peace. Trump has caused “the Overton Window on China [to become] a lot narrower than it was before,” adds Schneider.

The US delegation, led by Secretary of State Antony Blinken, faces its Chinese counterparts at the opening session of US-China talks at the Captain Cook Hotel in Anchorage, Alaska on March 18, 2021. Image Credits: FREDERIC J. BROWN/POOL/AFP via Getty Images

As the U.S.-China rivalry has evolved, it has become more and more centered around competing philosophies on the use of technology. “At their core, democracies are open systems that believe in the free flow of information, whereas for autocrats, information is something to be weaponized and stifled in the service of the regime,” says Lindsay Gorman, Fellow for Emerging Technologies at the German Marshall Fund. “So it’s not too surprising that technology, so much of which is about how we store and process and leverage information, has become such a focus of the U.S.-China relationship and of the [broader] democratic-autocratic competition around the world.”

Tech touches everything now — and the stakes could not be higher. “Tech and the business models around tech are really ‘embedded ideology,’” says Tyson Barker of the German Council on Foreign Relations. “So what tech is and how it is used is a form of governance.”

What does that mean in practice? When Chinese firms expand around the world, Barker tells me, they bring their norms with them. So when Huawei builds a 5G network in Latin America, or Alipay is adopted for digital payments in Central Europe, or Xiaomi takes more market share in Southeast Asia, they are helping digitize those economies on Chinese terms using Chinese norms (as opposed to American ones). The implication is clear: whoever defines the future of technology will determine the rest of the twenty-first century.

That shifting balance has focused minds in Washington. “I think there is a strong bipartisan consensus that technology is at the core of U.S.-China competition,” says Brattberg. But, adds Gorman, “there’s less agreement on what the prescription should be.” While the Democratic experts now ascendant in Washington agree with Trump’s diagnosis of the China challenge, they believe in a vastly different approach from their Trump Administration predecessors.

Out, for instance, are restrictions on Chinese firms just for being Chinese. “That was one of the problems with Trump,” says Walter Kerr, a former U.S. diplomat who publishes the China Journal Review. “Trump cast broad strokes, targeting firms whether it was merited or not. Sticking it to the Chinese is not a good policy.”

Instead, the focus is on inward investment — and outward cooperation.

Foreign policy is domestic policy

Democrats are first addressing America’s economic challenges at home — in short, be strong at home to be strong abroad. “There’s no longer a bright line between foreign and domestic policy,” President Biden said in his first major foreign policy speech. “Every action we take in our conduct abroad, we must take with American working families in mind. Advancing a foreign policy for the middle class demands urgent focus on our domestic economic renewal.”

This is a particular passion of Jake Sullivan, Biden’s national security advisor, who immersed himself in domestic policy while he was Hillary Clinton’s chief policy aide during her 2016 presidential campaign. “We’ve reached a point where foreign policy is domestic policy, and domestic policy is foreign policy,” he told NPR during the transition.

Jake Sullivan, White House national security adviser, speaks during a news conference Image Credits: Jim Lo Scalzo/EPA/Bloomberg via Getty Images

This is increasingly important for technology, as concern grows that America is lagging behind on research and development. “We’re realizing that we’ve underinvested in the government grants and research and development projects that American companies [need] to become highly innovative in fields like quantum computing, AI, biotechnology, etc,” says Kerr.

“Rebuilding” or “sustaining” America’s “technological leadership” is a major theme of the Longer Telegram and is the very operating premise of the report of the China Strategy Group assembled by Eric Schmidt, former executive chairman of Alphabet, Google’s parent company, and the first chair of the Department of Defense’s Innovation Advisory Board. Those priorities have only become more important during the pandemic. It’s a question of “how do we orient the research system to fill in the industrial gaps that have been made very clear by the COVID crisis?” says Schneider of Rhodium.

The most disastrous sales cycle in the world

By Danny Crichton

Startups constantly talk about being mission-oriented, but it’s hard to take most of those messages seriously when the mission is optimizing cash flow for tax efficiency. However, a new generation of startups is emerging that are taking on some of the largest global challenges and bringing the same entrepreneurial grit, operational excellence, and technical brilliance to bear on actual missions — ones that may well save thousands of lives.

ClimateTech has been a huge beneficiary of this trend in general, but one small specialty has caught my eye: disaster response. It’s a category for software services that’s percolated for years with startups here and there, but now a new crop of founders is taking on the challenges of this space with renewed urgency and vigor.

As the elevator pitch would have it, disaster response is hitting hockey-stick growth. 2020 was a brutal year, and in more ways than just the global COVID-19 pandemic. The year also saw a record number of hurricanes, one of the worst wildfire seasons in the Western United States, and several megastorms across the world. Climate change, urbanization, population growth, and poor response practices have combined to create some of the most dangerous conditions humanity has ever collectively faced.

I wanted to get a sense of what the disaster response market has in store this decade, so over the past few weeks, I have interviewed more than 30 startup founders, investors, government officials, utility execs and more to understand this new landscape and what’s changed. In this four-part series on the future of technology and disaster response, to be published this weekend and next, we’ll look at the sales cycle in this market, how data is finally starting to flow into disaster response, how utilities and particularly telcos are dealing with internet access issues, and how communities are redefining disaster management going forward.

Before we get into all the tech developments in disaster response and resilience though, it’s important to ask a basic question: if you build it, will they come? The resounding answer from founders, investors, and government procurement officials was simple: no.

In fact, in all my conversations for this series, the hell of the emergency management sales cycle came up repeatedly, with more than one individual describing it as possibly the toughest sale that any company could make in the entire world. That view might be surprising in a market that easily runs into the tens of billions of dollars if the budgets for procurement are aggregated across local, state, federal, and international governments. Yet, as we will see, the unique dynamics of this market make almost any traditional sales approach useless.

Despite that pessimism though, that doesn’t mean sales are impossible, and a new crop of startups is piercing the barriers to entry in this market. We’ll look at the sales and product strategies that startups are increasingly relying on today to break through.

The sale from hell

Few will be surprised that government sales are hard. Generations of govtech startup founders have learned that slow sales cycles, byzantine procurement processes, cumbersome verification and security requirements, and a general lassitude among contract officers makes for a tough battlefield to close on revenue. Many government agencies now have programs to specifically onboard startups, having discovered just how hard it is for new innovations to run through their gauntlet.

Emergency management sales share all the same problems as those of other govtech startups, but then add about a half dozen more that take the sales cycle from exhausting to infernal.

The first and most painful is the dramatic seasonality of sales in the emergency space. Many agencies that deal with seasonal disasters — think hurricanes, wildfires, winter storms, and more — often go through an “action” period where they respond to these disasters, and then transition into a “planning” period where they assess their performance, determine what changes are needed for next season, and consider what tools might be added or removed to increase the effectiveness of their responders.

Take Cornea and Perimeter, two startups in the wildfire response space that I profiled recently. Both teams described how they needed to think in terms of fire seasons when it came to product iteration and sales. “We took two fire seasons to beta test our technology … to solve the right problem the right way,” Bailey Farren, CEO and co-founder of Perimeter, said. “We actually changed our focus on beta testing during the [2019 California] Kincade fire.”

In this way, disaster tech could be compared to edtech, where school technology purchases are often synchronized with the academic calendar. Miss the June through August window in the U.S. education system, and a startup is looking at another year before it will get another chance at the classroom.

Edtech might once have been the tougher sale, given that three-month needle to thread, but disaster response is getting more difficult every year. Climate change is exacerbating the length, severity, and damage caused by all types of disasters, which means that responding agencies that might have had six months or more out-of-season to plan in the past are sometimes working all year long just to respond to emergencies. That leaves little time to think about what new solutions an agency needs to purchase.

Worse, unlike the standardized academic calendar, disasters are much less predictable these days as well. Flood and wildfire seasons, for instance, used to be relatively concentrated in certain periods of the year. Now, such emergencies can emerge practically year-round. That means that procurement processes can both start and freeze on a moment’s notice as an agency has to respond to its mission.

Seasonality doesn’t just apply to the sales cycle though — it also applies to the budgets of these agencies. While they are transpiring, disasters dominate the minds of citizens and politicians, but then we forget all about them until the next catastrophe. Unlike the annual consistency of other government tech spending, disaster tech funding often comes in waves.

One senior federal emergency management official, who asked not to be named since he wasn’t authorized to speak publicly, explained that consistent budgets and the ability to spend them quickly is quite limited during “blue sky days” (i.e. periods without a disaster), and agencies like his have to rely on piecing together supplementary disaster funds when Congress or state legislatures authorize additional financing. The best agencies have technological roadmaps on hand so that when extra funding comes in, they can use it immediately to realize their plans, but not all agencies have the technical planning resources to be that prepared.

Amir Elichai, the CEO and co-founder of Carbyne, a cloud-native platform for call handling in 911 centers, said that this wave of interest crested yet again with the COVID-19 pandemic last year, triggering huge increases in attention and funding around emergency response capabilities. “COVID put a mirror in front of government faces and showed them that ‘we’re not ready’,” he said.

Perhaps unsurprisingly, next-generation 911 services (typically dubbed NG911), which have been advocated for years by the industry and first responders, are looking at a major financing boost. President Biden’s proposed infrastructure bill would add $15 billion to upgrade 911 capabilities in the United States — funding that has been requested for much of the last decade. Just last year, a $12 billion variant of that bill failed in the Senate after passing the U.S. House of Representatives.

Sales are all about providing proverbial painkillers versus vitamins to customers, and one would expect that disaster response agencies looking to upgrade their systems would be very much on the painkiller side. After all, the fear and crisis surrounding these agencies and their work would seem to bring visceral attention to their needs.

Yet, that fear actually has the opposite effect in many cases, driving attention away from systematic technology upgrades in favor of immediate acute solutions. One govtech VC, who asked not to be named to speak candidly about the procurement process his companies go through, said that “we don’t want to paint the picture that the world is a scary and dangerous place.” Instead, “the trick is to be … focused on the safety side rather than the danger.” Safety is a much more prevalent and consistent need than sporadically responding to emergencies.

When a wave of funding finally gets approved though, agencies often have to scramble to figure out what to prioritize now that the appropriated manna has finally dropped from the legislative heaven. Even when startups provide the right solutions, scrying which problems are going to get funded in a particular cycle requires acute attention to every customer.

Josh Mendelsohn, the managing partner at startup studio and venture fund Hangar, said that “the customers have no shortage of needs that they are happy to talk about … the hardest part is how you narrow the funnel — what are the problems that are most meritorious?” That merit can, unfortunately, evolve very rapidly as mission requirements change.

Let’s say all the stars line up though — the agencies have time to buy, they have a need, and a startup has the solution that they want. The final challenge that’s probably the toughest to overcome is simply the lack of trust that new startups have with agencies.

In talking to emergency response officials the past few weeks, reliability unsurprisingly came up again and again. Responding to disasters is mission-critical work, and nothing can break in the field or in the operations center. Frontline responders still use paper and pens in lieu of tablets or mobile phones since they know that paper is going to work every single time and not run out of battery juice. The “move fast and break things” ethos of Silicon Valley is fundamentally incompatible with this market.

Seasonality, on-and-off funding, lack of attention, procurement scrambling, and acute reliability requirements combine to make emergency management sales among the hardest possible for a startup. That doesn’t even get into all the typical govtech challenges like integrating with legacy systems, the massive fragmentation of thousands of emergency response agencies littered across the United States and globally, and the fact that in many agencies, people aren’t that interested in change in the first place. As one individual in the space described how governments approach emergency technology, “a lot of departments are looking at it as maybe I can hit retirement before I have to deal with it.”

The strategies for breaking out of limbo

So the sales cycle is hell. Why, then, are VCs dropping money in the sector? After all, we’ve seen emergency response data platform RapidSOS raise $85 million just a few months ago, about the same time Carbyne raised $25 million. There are quite a few more startups at the earliest phases that have raised pre-seed and seed investment as well.

The key argument that nearly everyone in this sector agreed on is that founders (and their investors) have to throw away their private-sector sales playbooks and rebuild their approach from the bottom up to sell specifically to these agencies. That means devising entirely different strategies and tactics to secure revenue performance.

The first and most important approach is, in some respects, to not even start with a company at all, but rather to start learning what people in this field actually do. As the sales cycle perhaps indicates, disaster response is unlike any other work. The chaos, the rapidly changing environment, the multi-disciplinary teams and cross-agency work that has to take place for a response to be effective have few parallels to professional office work. Empathy is key here: the responder that uses paper might have nearly lost their life in the field when their device failed. A 911 center operator may have listened to someone perish in real-time as they scrambled to find the right information from a software database.

In short, it’s all about customer discovery and development. That’s not so different from the enterprise world, but patience radiated out of many of my conversations with industry participants. It just takes more time — sometimes multiple seasons — to figure out precisely what to build and how to sell it effectively. If an enterprise SaaS product can iterate to market-fit in six months, it might take two to three years in the government sector to reach an equivalent point.

Michael Martin of RapidSOS said “There is no shortcut to doing customer discovery work in public service.” He noted that “I do think there is a real challenge between the arrogance of the Silicon Valley tech community and the reality of these challenges” in public safety, a gap that has to be closed if a startup wants to find success. Meanwhile, Bryce Stirton, president and co-founder of public-safety company Responder Corp, said that “The end user is the best way to look at all the challenges … what are all the boxes the end user has to check to use a new technology?”

Mendelsohn of Hangar said that founders need to answer some tough questions in that process. “Ultimately, what are your entry points,” he asked. “Cornea has had to go through that customer discovery process … it all feels necessary, but what are the right things that require the least amount of behavior change to have impact immediately?”

Indeed, that process is appreciated on the other side as well. The federal emergency management official said, “everyone has a solution, but no one asked me about my problem.” Getting the product right and having it match the unique work that takes place in this market is key.

Let’s say you have a great product though — how do you get it through the perilous challenges of the procurement process? Here, answers differed widely, and they offer multiple strategies on how to approach the problem.

Martin of RapidSOS said that “government does not have a good model for procuring new services to solve problems.” So, the company chose to make its services free for government. “In three years, we went from no agencies using our stuff to all agencies using our stuff, and that was based on not making it a procurement problem,” he said. The company’s business model is based on having paid corporate partners who want to integrate their data into 911 centers for safety purposes.

That’s a similar model used by MD Ally, which received a $3.5 million seed check from General Catalyst this past week. The company adds telehealth referral services into 911 dispatch systems, and CEO and founder Shanel Fields emphasized that she saw an opportunity to create a revenue engine from the physician and mental health provider side of her market while avoiding government procurement.

Outside of what might be dubbed “Robinhood for government” (aka, just offering a service for free), another approach is to link up with more well-known and trusted brand names to offer a product that has the innovation of a startup but the reliability of an established player. Stirton of Responder said “we learned in [this market] that it takes more than just capital to get companies started in this space.” What he found worked was building private-sector partnerships to bring a joint offering to governments. For instance, he noted cloud providers Amazon Web Services and Verizon have good reputations with governments and can get startups over procurement hurdles (TechCrunch is owned by Verizon Media, which is owned by Verizon).

Elichai of Carbyne noted that much of his company’s selling is done through integration partners, referencing CenterSquare as one example. For 911 services, “The U.S. market is obviously the most fragmented,” and so partners allow the company to avoid selling to thousands of different agencies. “We are usually not selling direct to governments,” he said.

Partners can also help deal with the problem of localism in emergency procurement: many government agencies don’t know precisely what to buy, so they simply buy software that is offered by companies in their own backyard. Partners can offer a local presence while also allowing a startup to have a nimble national footprint.

Another angle on partners is building out a roster of experienced but retired government executives who can lend credibility to a startup through their presence and networks. Even more than in the enterprise world, government officials, particularly in emergency management, have to work with and trust one another given the closely-coupled work they perform. Hearing a positive recommendation from a close contact down the street can readily change the tenor of a sales conversation.

Finally, as much as emergency management software is geared for governments, private sector companies increasingly have to consider much of the same tooling to protect their operations. Many companies have distributed workforces, field teams, and physical assets they need to protect, and often have to respond to disasters in much the same way that governments do. For some startups, it’s possible to bootstrap in the private sector early on while continuing to assiduously develop public sector relationships.

In short, a long-term customer development program coupled with quality partnerships and joint offerings while not forgetting the private sector offers the best path for startups to break through into these agencies.

The good news is that the hard work can be rewarded. Not only are there serious dollars that flow through these agencies, but the agencies themselves know that they need better technology. Tom Harbour, who is chief fire officer at Cornea and formerly national director of fire management at the U.S. Forest Service, notes that “These are billions of dollars we spend … and we know we can be more efficient.” Government doesn’t always make it easy to create efficiency, but for the founders willing to go the distance, they can build impactful, profitable, and mission-driven companies.

Biden’s labor secretary thinks many gig workers should be reclassified as employees

By Taylor Hatmaker

Biden Labor Secretary Marty Walsh charged into the white-hot issue of the gig economy Thursday, asserting that many people working without benefits in the gig economy should be classified as employees instead.

In an interview with Reuters, Walsh said that the Department of Labor is “looking at” the gig economy, hinting that worker reclassification could be a priority in the Biden administration.

“… In a lot of cases gig workers should be classified as employees,” Walsh said. “In some cases they are treated respectfully and in some cases they are not and I think it has to be consistent across the board.”

Walsh also said that the labor department would be talking to companies that benefit from gig workers to ensure that non-employees at those companies have the same benefits that an “average employee” in the U.S. would have.

“These companies are making profits and revenue and I’m not [going to] begrudge anyone for that because that’s what we are about in America… but we also want to make sure that success trickles down to the worker,” Walsh said.

Walsh’s comments aren’t backed by federal action yet, but they still made major waves among tech companies that leverage non-employee labor. Uber and Lyft stock dipped on the news Thursday, along with Doordash’s.

In the interview, Walsh also touched on pandemic-related concerns about gig workers who lack unemployment insurance and health care through their employers. The federal government has picked up the slack during the pandemic with two major bills granting gig workers some benefits, but otherwise they’re largely without a safety net.

Reforming labor laws has been a tenet of Biden’s platform for some time and the president has been very vocal about bolstering worker protections and supporting organized labor. One section of then President-elect Biden’s transition site was devoted to expanding worker protections, calling the misclassification of employees as contract workers an “epidemic.”

Biden echoed his previous support for labor unions during a joint address to Congress Wednesday night, touting the Protecting the Right to Organize Act — legislation that would protect workers looking to form or join unions. That bill would also expand federal whistleblower protections.

“The middle class built this country,” Biden said. “And unions build the middle class.”

Biden proposes ARPA-H, a health research agency to ‘end cancer’ modeled after DARPA

By Taylor Hatmaker

In a joint address to Congress last night, President Biden updated the nation on vaccination efforts and outlined his administration’s ambitious goals.

Biden’s first 100 days have been characterized by sweeping legislative packages that could lift millions of Americans out of poverty and slow the clock on the climate crisis, but during his first joint address to Congress, the president highlighted another smaller plan that’s no less ambitious: to “end cancer as we know it.”

“I can think of no more worthy investment,” Biden said Wednesday night. “I know of nothing that is more bipartisan…. It’s within our power to do it.”

The comments weren’t out of the blue. Earlier this month, the White House released a budget request for $6.5 billion to launch a new government agency for breakthrough health research. The proposed health agency would be called ARPA-H and would live within the NIH. The initial focus would be on cancer, diabetes and Alzheimer’s but the agency would also pursue other “transformational innovation” that could remake health research.

The $6.5 billion investment is a piece of the full $51 billion NIH budget. But some critics believe that ARPA-H should sit under the Department of Health and Human Services rather than being nested under NIH. 

ARPA-H would be modeled after the Defense Advanced Research Projects Agency (DARPA), which develops moonshot-like tech for defense applications. DARPA’s goals often sound more like science fiction than science, but the agency contributed to or created a number of now ubiquitous technologies, including a predecessor to GPS and most famously ARPANET, the computer network that grew into the modern internet.

Unlike more conservative, incremental research teams, DARPA aggressively pursues major scientific advances in a way that shares more in common with Silicon Valley than it does with other governmental agencies. Biden believes that using the DARPA model on cutting edge health research would keep the U.S. from lagging behind in biotech.

“China and other countries are closing in fast,” Biden said during the address. “We have to develop and dominate the products and technologies of the future: advanced batteries, biotechnology, computer chips, and clean energy.”

RapidDeploy raises $29M for a cloud-based dispatch platform aimed at 911 centers

By Ingrid Lunden

The last year of pandemic living has been real-world, and sometimes harrowing, proof of how important it can be to have efficient and well-equipped emergency response services in place. They can help people remotely if need be, and when they cannot, they make sure that in-person help can be dispatched quickly in medical and other situations. Today, a company that’s building cloud-based tools to help with this process is announcing a round of funding as it continues to grow.

RapidDeploy, which provides computer-aided dispatch technology as a cloud-based service for 911 centers, has closed a round of $29 million, a Series B round of funding that will be used both to grow its business, and to continue expanding the SaaS tools that it provides to its customers. In the startup’s point of view, the cloud is essential to running emergency response in the most efficient manner.

“911 response would have been called out on a walkie talkie in the early days,” said Steve Raucher, the co-founder and CEO of RapidDeploy, in an interview. “Now the cloud has become the nexus of signals.”

Washington, DC-based RapidDeploy provides data and analytics to 911 centers — the critical link between people calling for help and connecting those calls with the nearest medical, police or fire assistance — and today it has about 700 customers using its RadiusPlus, Eclipse Analytics and Nimbus CAD products.

That works out to about 10% of all 911 centers in the US (there are about 7,000 in total), covering 35% of the population (there are more centers in cities and other dense areas). Its footprint includes state coverage in Arizona, California, and Kansas. It also has operations in South Africa, where it was originally founded.

The funding is coming from an interesting mix of financial and strategic investors. Led by Morpheus Ventures, the round also had participation from GreatPoint Ventures, Ericsson Ventures, Samsung Next Ventures, Tao Capital Partners, and Tau Ventures, among others. It looks like the company had raised about $30 million before this latest round, according to PitchBook data. Valuation is not being disclosed.

Ericsson and Samsung, as major players in the communication industry, have a big stake in seeing through what will be the next generation of communications technology and how it is used for critical services. (And indeed, one of the big leaders in legacy and current 911 communications is Motorola, a would-be competitor of both.) AT&T is also a strategic go-to-market (distribution and sales) partner of RapidDeploy’s, and it also has integrations with Apple, Google, Microsoft, and OnStar to feed data into its system.

The business of emergency response technology is a fragmented market. Raucher describes the players as “mom-and-pop” businesses, with some 80% of them occupying four seats or fewer (a testament to the fact that a lot of the US is actually significantly less urban than its outsized cities might have you think), and in many cases a lot of these are operating on legacy equipment.

However, in the US in the last several years — buoyed by innovations like the Jedi project and FirstNet, a next-generation public safety network — things have been shifting. RapidDeploy’s technology sits alongside (and in some areas competes with) companies like Carbyne and RapidSOS, which have been tapping into the innovations of cell phone technology both to help pinpoint people and improve how to help them.

RapidDeploy’s tech is based around its RadiusPlus mapping platform, which takes data from smart phones, vehicles, home security systems and other connected devices and channels it into its data stream, which can help a center determine not just location but potentially other aspects of the caller’s condition. Its Eclipse Analytics services, meanwhile, are meant to act as a kind of assistant to those centers, helping triage situations and providing insights into how to respond. The Nimbus CAD then helps figure out whom to call out and how to route the response.
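
Conceptually, the three products chain together like a pipeline: ingest a device signal, enrich and triage it, then dispatch. The sketch below is an outside reading of that data flow, not RapidDeploy’s actual code or APIs:

```python
def ingest(signal: dict) -> dict:
    """RadiusPlus-style step: attach a location estimate to the raw signal."""
    return {**signal, "location": (signal["lat"], signal["lon"])}

def triage(call: dict) -> dict:
    """Eclipse-style step: derive a priority from available device context."""
    priority = 1 if call.get("crash_detected") else 3
    return {**call, "priority": priority}

def dispatch(call: dict) -> str:
    """Nimbus-style step: route to the nearest appropriate unit (stubbed)."""
    return f"unit dispatched to {call['location']} at priority {call['priority']}"

signal = {"caller": "+15550100", "lat": 30.27, "lon": -97.74, "crash_detected": True}
print(dispatch(triage(ingest(signal))))
```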

Longer term, the plan will be to leverage cloud architecture to bring in new data sources and ways of communicating between callers, centers and emergency care providers.

“It’s about being more of a triage service rather than a message switch,” Raucher said. “As we see it, the platform will evolve with customers’ needs. Tactical mapping ultimately is not big enough to cover this. We’re thinking about unified communications.” Indeed, that is the direction that many of these services seem to be going, which can only be a good thing for us consumers.

“The future of emergency services is in data, which creates a faster, more responsive 9-1-1 center,” said Mark Dyne, Founding Partner at Morpheus Ventures, in a statement. “We believe that the platform RapidDeploy has built provides the necessary breadth of capabilities that make the dream of Next-Gen 9-1-1 service a reality for rural and metropolitan communities across the nation and are excited to be investing in this future with Steve and his team.” Dyne has joined the RapidDeploy board with this round.

The next tech hearing targets social media algorithms — and YouTube, for once

By Taylor Hatmaker

Another week, another big tech hearing in Congress. With a flurry of antitrust reform bills on the way, Democratic lawmakers are again bringing in some of the world’s most powerful tech companies for questioning.

In the next hearing, scheduled for Tuesday, April 27 at 10 AM ET, the Senate Judiciary’s subcommittee on privacy and technology will zero in on concerns about algorithmic amplification. Specifically, the hearing will explore how algorithms amplify dangerous content and shape user behavior on social platforms.
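
For readers unfamiliar with the mechanics at issue, engagement-based ranking reduces to a toy model (entirely illustrative, not any platform’s actual algorithm): items predicted to provoke the most interaction rise to the top regardless of accuracy, and any mitigation is a deliberate re-weighting.

```python
posts = [
    {"id": "a", "predicted_engagement": 0.12, "flagged_misinfo": False},
    {"id": "b", "predicted_engagement": 0.87, "flagged_misinfo": True},
    {"id": "c", "predicted_engagement": 0.35, "flagged_misinfo": False},
]

# Pure engagement ranking surfaces post "b" first, even though it is flagged.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
print([p["id"] for p in feed])  # -> ['b', 'a', 'c']

# One mitigation under discussion: demote flagged content before ranking.
def adjusted(p: dict) -> float:
    return p["predicted_engagement"] * (0.1 if p["flagged_misinfo"] else 1.0)

feed = sorted(posts, key=adjusted, reverse=True)
print([p["id"] for p in feed])  # -> ['c', 'a', 'b']
```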

The subcommittee’s chair Sen. Chris Coons previously indicated that he would bring in tech CEOs, but Tuesday’s hearing will instead feature testimony from policy leads at Facebook, Twitter and YouTube.

The hearing might prove a unique opportunity to hold YouTube’s feet to the fire. In spite of being one of the biggest social networks in the world — one without much transparency about its regular failures to control extremism and misinformation — YouTube seldom winds up under the microscope with Congress. The company will be represented by Alexandra Veitch, YouTube’s regional director of public policy.

In past big tech hearings, Google CEO Sundar Pichai has generally appeared on behalf of YouTube’s parent company while YouTube’s chief executive Susan Wojcicki inexplicably escapes scrutiny. Google is a massive entity and concerns specific to YouTube and its policies generally get lost in the mix, with lawmakers usually going after Pichai for concerns around Google’s search and ads businesses.

In a stylistic repeat of last week’s adversarial app store hearing, which featured Apple as well as some of its critics, misinformation researcher Dr. Joan Donovan and ex-Googler and frequent big tech critic Tristan Harris will also testify Tuesday. That tension can create deeper questioning, providing outside expertise that can fill in some lapses in lawmakers’ technical knowledge.

Policy leads at these companies might not make the same flashy headlines, but given their intimate knowledge of the content choices these companies make every day, they do provide an opportunity for more substance. Tech CEOs like Mark Zuckerberg and Jack Dorsey have been dragged into so many hearings at this point that they begin to run together, and the top executives generally reveal very little while sometimes playing dumb about the day-to-day decision making on their platforms. The subcommittee’s ranking member Ben Sasse (R-NE) emphasized that point, stating that the hearing would be a learning opportunity and not a “show hearing.”

Democrats have been sounding the alarm on algorithms for some time. While Republicans spent the latter half of the Trump administration hounding tech companies about posts they remove, Democrats instead focused on the violent content, extremism and sometimes deadly misinformation that gets left up and even boosted by the secretive algorithms tech companies rarely shed light on.

We haven’t seen much in the way of algorithmic transparency, but that could change. One narrowly targeted Section 230 reform bill in the House would strip that law’s protections from large companies when their algorithms amplify extremism or violate civil rights.

Twitter CEO Jack Dorsey has also hinted that a different approach might be on the horizon, suggesting that users could hand-pick their preferred algorithms in the future, possibly even selecting them from a kind of third-party marketplace. Needless to say, Facebook didn’t indicate any plans to give its own users more algorithmic control.

With any major changes to the way platforms decide who sees what likely a long way off, expect to see lawmakers try to pry open some black boxes on Tuesday.

With a third of its capital deployed, Risk & Return is transforming how we think about emergency response

By Danny Crichton

Disasters are, unfortunately, a growth business, and the frontlines that were once distant have moved much closer to home. Wildfires, hurricanes, floods, tornadoes — let alone a pandemic — have forced much of the United States and increasingly large swaths of the world to confront a new reality: few places are existentially secure.

How we respond to crises can radically adjust the ledger of mortality for the people slammed by these catastrophes. Good information, fast response, and strong execution can mean the difference between life and death. Yet, frontline workers often can’t get the tools and training they need, particularly new innovations that may not wind their way easily through the government supply chain. Perhaps most importantly, they often need post-traumatic care long after a disaster has dissipated.

Risk & Return is a unique venture fund and philanthropic hybrid whose mission is to find and finance the next generation of technologies that help first responders not only on the frontlines, but also afterward, as they confront the physical and mental strains of the missions they undertake.

The family of organizations serves a spectrum from emergency workers in the United States to U.S. military veterans, all of whom share similar challenges and need solutions today — solutions that can be hard to finance through traditional VCs unaware of this community’s unique needs.

The group was founded by Robert Nelsen, who made his name as a co-founder and managing director of biotech VC leader ARCH Venture Partners, which last year announced a $1.5 billion pair of funds. He’s joined by board chairman Bob Kerrey, the former co-chair of the 9/11 Commission as well as former governor and senator of Nebraska, and managing director Jeff Eggers, a Navy SEAL who served as senior director of Afghanistan and Pakistan on President Barack Obama’s National Security Council.

Nelsen had been thinking through the idea when he met Kerrey, who recalled the conversation happening during a fundraising event for Navy SEALs. “There has been a lot of suffering for those who have been on the frontlines,” Kerrey said. “Bob had this idea, and I thought it was a really smart idea, to try to take a different approach to philanthropic efforts.” They linked up with Eggers and the trio brought Risk & Return to fruition.

The venture fund is $25 million, with about 35% of it already deployed. The fund has had a big emphasis on mental health for first responders, with 75% of the companies funded broadly in that category.

The fund’s first investment was into Alto Neuroscience, which is developing precision medicine tools to treat post-traumatic stress. The fund has also invested in behavioral management startup NeuroFlow; alternative well-being assessment tool Qntfy; Spear Human Performance, which is a brand-new spinout focused on connecting commercial and health data sources to optimize human performance; and Xtremity, which is designing better connection sockets for prosthetics. The fund has invested in another six startups including Perimeter, which I profiled a few weeks ago.

This isn’t your typical venture portfolio, and that’s exactly the point. “We love that type of technology since it has that dual purpose: going to serve the first responder on the ground, but the community is also going to benefit,” Eggers said.

While many of the startups the firm has invested in obviously have a focus on first responders, the technologies they develop don’t have to be limited to just that market. Kerrey noted that “every veteran is a civilian, [and] these aren’t businesses targeting the military market.” Given the last year, he added, “it’s hard to find a human being in this pandemic that hasn’t suffered at least some PTSD,” referring to post-traumatic stress disorder. Sales to governments can be incredibly challenging, and the ultimate market for the kinds of specialized mental health services that frontline workers need may not be as commercially viable as one would hope.

While the government does research and innovation in this category, Kerrey sees a huge opportunity for the private sector to get more involved. “One thing that you could do in the private sector that is difficult in the public sector is look for alternative therapies for PTSD,” he said, noting that areas like psychedelics have intrigued the private sector even while the government would mostly not touch the category today. Risk & Return has not made an investment in that space at this time though.

Half of the returns from the fund will stream into Risk & Return’s philanthropic arm, which writes grants to charities along the same thesis of aiding frontline workers both on the job and after it. The organizations hope that by approaching the complicated response space with a multi-pronged approach, they can match potential needs with different sources of capital that are most appropriate.

We’ve increasingly seen this hybrid for-profit/non-profit venture model in other areas. Norrsken is a Swedish foundation and venture fund that is investing in areas like mental health, climate change, and other categories from the UN Sustainable Development Goals. MIT Solve is another program working on hybrid approaches to startup innovation, such as in pandemics and health security. While disasters are always looming, it’s great to see more innovation in financing this critical category of technology.

India orders Twitter and Facebook to take down posts critical of its coronavirus handling

By Manish Singh

Twitter and Facebook have taken down about 100 posts in India, some of which were critical of New Delhi’s handling of the coronavirus, to comply with an emergency order from the Indian government at a time when the South Asian nation is grappling with a globally unprecedented surge in Covid cases.

New Delhi made an emergency order to Twitter and Facebook to censor over 100 posts in the country. Twitter disclosed the government order on Lumen database, a Harvard University project. The microblogging network and Facebook complied with the request, and withheld those posts from users in India.

TechCrunch reported on Saturday that Twitter was not the only platform affected by the new order. Facebook, which identifies India as its largest market by users, didn’t immediately respond to a request for comment Saturday.

The Indian government confirmed on Sunday that it ordered Facebook, Instagram and Twitter to take down posts that it deemed could incite panic among the public, hinder its efforts to contain the pandemic, or that were simply misleading.

(Credit where it’s due: Twitter is one of the handful of companies that timely discloses takedown actions and also shares who made those requests.)

The order — from the world’s second most populous nation, which has previously ordered Twitter to block some tweets and accounts critical of its policies and threatened its employees with jail time in the event of non-compliance — comes as the country reports a record of over 330,000 new Covid cases a day, the worst of any country. Multiple news reports, doctors, and academics say that even these Covid figures, alarmingly high as they are, are underreported.

Amid an unprecedented collapse of the nation’s health infrastructure, Twitter has become a rare beam of hope in what it describes as one of its “priority markets,” as people crowdsource data to help one another find medicines, hospital beds and oxygen supplies.

A copy of one of Indian government’s orders disclosed by Twitter. (Lumen database)

Policy-focused Indian news outlet Medianama, which first reported on New Delhi’s new order Friday, said those whose tweets have been censored in India include high-profile public figures such as Revanth Reddy, a Member of Parliament; Moloy Ghatak, a minister in West Bengal; actor Vineet Kumar Singh; and filmmakers Vinod Kapri and Avinash Das.

In a statement, a Twitter spokesperson told TechCrunch, “When we receive a valid legal request, we review it under both the Twitter Rules and local law. If the content violates Twitter’s Rules, the content will be removed from the service. If it is determined to be illegal in a particular jurisdiction, but not in violation of the Twitter Rules, we may withhold access to the content in India only. In all cases, we notify the account holder directly so they’re aware that we’ve received a legal order pertaining to the account.”

“We notify the user(s) by sending a message to the email address associated with the account(s), if available. Read more about our Legal request FAQs. The legal requests that we receive are detailed in the biannual Twitter Transparency Report, and requests to withhold content are published on Lumen.”

India has become one of the key markets for several global technology giants as they look to accelerate their userbase growth and make long-term bets. But India, once the example of an ideal open market, has also proposed or enforced several rules in recent years under Prime Minister Narendra Modi’s leadership that arguably make it difficult for American firms to keep expanding in the South Asian market without compromising some of the values that users in their home markets take for granted.

The story and the headline were updated to incorporate details from the Indian government’s statement.

India restricts American Express from adding new customers for violating data storage rules

By Manish Singh

India’s central bank has restricted American Express and Diners Club from adding new customers starting next month, it said Friday, citing violations of local data-storage rules.

In a statement, the Reserve Bank of India said existing customers of either of the two card companies will not be impacted by the new order, which goes into effect May 1.

This is the first time India’s central bank has penalized a firm for non-compliance with the local data-storage rules, which were unveiled in 2018. The rules require payments firms to store all Indian transaction data on servers within the country.

Visa, Mastercard, and several other firms, as well as the U.S. government, have previously asked New Delhi to reconsider the rules, which are designed to allow the regulator “unfettered supervisory access.”

Visa, Mastercard, and American Express had also lobbied to either significantly change the rules or discard them entirely. But after none of those efforts worked, most firms began to comply.

In a statement Friday evening (local time), an Amex spokesperson said the company was “disappointed that the RBI has [taken] this course of action,” but said it was working with the authority to resolve the concerns “as quickly as possible.”

With about 1.5 million customers, American Express has the largest customer base among foreign banks in India.

“We have been in regular dialogue with the Reserve Bank of India about data localization requirements and have demonstrated our progress towards complying with the regulation. […] This does not impact the services that we offer to our existing customers in India, and our customers can continue to use and accept our cards as normal.”

Diners Club, which is owned by Discover Financial Services and offers credit cards in India through a partnership with the nation’s largest private sector bank (HDFC), said in a statement that India remains an important market for the firm and it is working with the central bank to reach a resolution so that it can “continue to grow in the country.”

Last year, India’s central bank ordered HDFC Bank to not add new credit customers or launch digital businesses after the bank’s services were hit by a power outage.

Friday’s order comes as Citigroup, another key foreign bank in India, has announced plans to exit most of its Asian consumer business as it looks to boost its profitability. The bank’s consumer operations in 13 countries are up for sale.

To ensure inclusivity, the Biden administration must double down on AI development initiatives

By Ram Iyer
Miriam Vogel Contributor
Miriam Vogel is the president and CEO of EqualAI, a nonprofit organization focused on reducing unconscious bias in artificial intelligence.

The National Security Commission on Artificial Intelligence (NSCAI) issued a report last month delivering an uncomfortable public message: America is not prepared to defend or compete in the AI era. It leads to two key questions that demand our immediate response: Will the U.S. continue to be a global superpower if it falls behind in AI development and deployment? And what can we do to change this trajectory?

Left unchecked, seemingly neutral artificial intelligence (AI) tools can and will perpetuate inequalities and, in effect, automate discrimination. Tech-enabled harms have already surfaced in credit decisions, health care services, and advertising.

To prevent these harms from recurring and growing at scale, the Biden administration must clarify how current laws apply to AI and machine learning models — both in terms of how we will evaluate use by private actors and how we will govern AI usage within our government systems.

The administration has put a strong foot forward, from key appointments in the tech space to an Executive Order, issued on its first day in office, that established an Equitable Data Working Group. This has comforted skeptics concerned about the U.S. commitment both to AI development and to ensuring equity in the digital space.

But that will be fleeting unless the administration shows strong resolve in making AI funding a reality and establishing leaders and structures necessary to safeguard its development and use.

Need for clarity on priorities

There has been a seismic shift at the federal level in AI policy and in stated commitments to equality in tech. A number of high-profile appointments by the Biden administration — from Dr. Alondra Nelson as OSTP Deputy Director, to Tim Wu at the NEC, to (our former senior advisor) Kurt Campbell at the NSC — signal that significant attention will be paid to inclusive AI development by experts on the inside.

The NSCAI final report includes recommendations that could prove critical to enabling better foundations for inclusive AI development, such as creating new talent pipelines through a U.S. Digital Service Academy to train current and future employees.

The report also recommends establishing a new Technology Competitiveness Council led by the Vice President. This could prove essential in ensuring that the nation’s commitment to AI leadership remains a priority at the highest levels. It makes good sense to have the administration’s leadership on AI spearheaded by VP Harris in light of her strategic partnership with the President, her tech policy savvy and her focus on civil rights.

The U.S. needs to lead by example

We know AI is powerful in its ability to create efficiencies, such as plowing through thousands of resumes to identify potentially suitable candidates. But it can also scale discrimination, such as the Amazon hiring tool that prioritized male candidates or “digital redlining” of credit based on race.

The Biden administration should issue an Executive Order (EO) to agencies inviting ideation on ways AI can improve government operations. The EO should also mandate checks on AI used by the USG to ensure it’s not spreading discriminatory outcomes unintentionally.

For instance, there must be a schedule in place under which AI systems are evaluated to ensure that embedded, harmful biases are not producing recommendations that are discriminatory or inconsistent with our democratic, inclusive values — and systems must be reevaluated routinely, given that AI is constantly iterating and learning new patterns.
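As one concrete, hypothetical example of a check such a schedule could automate, the sketch below applies the “four-fifths rule,” a long-standing screening heuristic from U.S. employment discrimination guidance, to a batch of automated decisions. The data and names are invented, and a real evaluation would be far more involved than this first-pass screen.

```python
# Minimal sketch of a scheduled disparate-impact screen (illustrative only).
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (favorable_decisions, total_decisions)."""
    return {g: fav / total for g, (fav, total) in outcomes.items()}

def four_fifths_flag(outcomes: dict) -> bool:
    """Flag the system if any group's favorable-outcome rate falls below
    80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return any(r < 0.8 * best for r in rates.values())

# Hypothetical quarterly audit of an automated benefits screen:
audit = {"group_a": (450, 1000), "group_b": (300, 1000)}
print(four_fifths_flag(audit))  # -> True, since 0.30 < 0.8 * 0.45
```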

Putting a responsible AI governance system in place is particularly critical in the U.S. Government, which is required to offer due process protection when denying certain benefits. For instance, when AI is used to determine allocation of Medicaid benefits, and such benefits are modified or denied based on an algorithm, the government must be able to explain that outcome, aptly termed technological due process.

If decisions are delegated to automated systems without explainability, guidelines and human oversight, we find ourselves in the untenable situation where this basic constitutional right is being denied.

Likewise, the administration has immense power to ensure that AI safeguards by key corporate players are in place through its procurement power. Federal contract spending was expected to exceed $600 billion in fiscal 2020, even before including pandemic economic stimulus funds. The USG could effectuate tremendous impact by issuing a checklist for federal procurement of AI systems — this would ensure the government’s process is both rigorous and universally applied, including relevant civil rights considerations.

Protection from discrimination stemming from AI systems

The government holds another powerful lever to protect us from AI harms: its investigative and prosecutorial authority. An Executive Order instructing agencies to clarify applicability of current laws and regulations (e.g., ADA, Fair Housing, Fair Lending, Civil Rights Act, etc.) when determinations are reliant on AI-powered systems could result in a global reckoning. Companies operating in the U.S. would have unquestionable motivation to check their AI systems for harms against protected classes.

Low-income individuals are disproportionately vulnerable to many of the negative effects of AI. This is especially apparent with regard to credit and loan creation, because they are less likely to have access to traditional financial products or the ability to obtain high scores based on traditional frameworks. This then becomes the data used to create AI systems that automate such decisions.

The Consumer Financial Protection Bureau (CFPB) can play a pivotal role in holding financial institutions accountable for discriminatory lending processes that result from reliance on discriminatory AI systems. The mandate of an EO would be a forcing function for statements on how AI-enabled systems will be evaluated, putting companies on notice and better protecting the public with clear expectations on AI use.

There is a clear path to liability when an individual acts in a discriminatory way and a due process violation when a public benefit is denied arbitrarily, without explanation. Theoretically, these liabilities and rights would transfer with ease when an AI system is involved, but a review of agency action and legal precedent (or rather, the lack thereof) indicates otherwise.

The administration is off to a good start, such as rolling back a proposed HUD rule that would have made legal challenges against discriminatory AI essentially unattainable. Next, federal agencies with investigative or prosecutorial authority should clarify which AI practices would fall under their review and which current laws would be applicable — for instance, HUD for illegal housing discrimination; the CFPB for AI used in credit lending; and the Department of Labor for AI used in determinations made in hiring, evaluations and terminations.

Such action would have the added benefit of establishing a useful precedent for plaintiff actions in complaints.

The Biden administration has taken encouraging first steps signaling its intent to ensure inclusive, less discriminatory AI. However, it must put its own house in order by directing federal agencies to ensure that the development, acquisition and use of AI — internally and by those they do business with — is done in a manner that protects privacy, civil rights, civil liberties, and American values.

RapidSOS and Axon ink deal to give better real-time information to emergency responders

By Danny Crichton

Every time an emergency responder or police officer responds to a 911 dispatch, they enter an unknown terrain. What’s the incident? Who’s involved? Is anyone dangerous or holding a weapon? Is someone injured and perhaps has an underlying health condition that the responders need to know about? As prominent news stories this week and over the last few years constantly remind us, having the right context while responding can turn a potential tragedy into a much more positive story.

RapidSOS is a startup I’ve been watching for years. The company raised an $85 million Series C round this February to bring real-time location information from all sorts of devices — from Apple and Android smartphones to Sirius XM satellite radios — into the hands of 911 call centers when users make an emergency call. Accurate location can help dispatchers send responders to exactly the right place, offering faster assistance and therefore saving lives.

The company announced this morning a new partnership with Axon, the company behind Taser, the electroshock weapon designed as a non-lethal alternative to traditional firearms, and a variety of body cams and other technologies for public safety officials. In recent years, Axon has increasingly emphasized a suite of cloud offerings that can fuse data from its devices with software to create operations systems for public safety agencies.

Through the partnership, Axon will integrate the data its devices generate, such as body cam footage and Taser discharge alerts, into RapidSOS’s Jurisdiction View, which dispatchers use to place a caller’s location and relevant information on a visual map. For instance, a dispatcher might now know the location of police or medical responders, and be able to update a 911 caller on the estimated time of arrival or whether they need help getting access to a location.

Likewise, the location, medical, and other information that RapidSOS pulls in from user devices during an emergency call will be sent to Axon Respond devices. Frontline responders will therefore have direct access to a 911 caller’s location or medical information, if the caller has a profile set up, without having to wait for a dispatcher to route those facts to them.
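The shape of that two-way exchange is easy to picture in code. Below is a small, hypothetical sketch (invented field names, not either company’s actual API) of caller context and field-device alerts being merged into one situational picture for a dispatcher.

```python
# Hypothetical sketch of the two-way flow described above (illustrative only).
from dataclasses import dataclass

@dataclass
class CallerContext:
    lat: float
    lon: float
    medical_notes: str            # e.g. from an opt-in medical profile

@dataclass
class FieldAlert:
    unit_id: str
    kind: str                     # e.g. "bodycam_stream", "taser_discharge"
    lat: float
    lon: float

def dispatcher_view(caller: CallerContext, alerts: list) -> dict:
    """Combine the caller stream and the device stream into one picture,
    mirroring the integrated map view described in the article."""
    return {
        "caller": {"lat": caller.lat, "lon": caller.lon,
                   "medical": caller.medical_notes},
        "units": [{"unit": a.unit_id, "alert": a.kind,
                   "lat": a.lat, "lon": a.lon} for a in alerts],
    }

view = dispatcher_view(
    CallerContext(40.71, -74.00, "asthma"),
    [FieldAlert("unit-7", "taser_discharge", 40.71, -74.01)],
)
print(view["units"][0]["alert"])  # -> taser_discharge
```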

Josh Pepper, VP of product management at Axon, said “What we’re always trying to do is how can we get [first responders] the right information about the incident, the right information about the people involved, the right information about the location and all of the disposition of the units involved, as fast and as accurately as we can … so that they can have situational awareness of what’s happening.” RapidSOS’s data will augment other information streams, helping first responders make those critical split-second decisions.

Michael Martin, CEO and co-founder of RapidSOS, said “for the first time now, your smartphone, your 911 responder and the police officer in the field can all simultaneously and transparently share data with each other.”

In tech, we are used to having comprehensive information about our products through data analytics. In the emergency space — even today — first responders can lack even the most rudimentary information like location when responding to a call. RapidSOS, Axon and a slew of other companies are trying to bridge that digital divide.

A UI mockup of how Axon’s information will display within RapidSOS’s Jurisdiction View. Image Credits: RapidSOS

This is the Jurisdiction View from RapidSOS’s platform, with a few elements added to mock up how Axon’s information will be integrated into the product. The two starred badges represent the locations of responding police officers in the field, converging on the location (green pin) of a 911 caller. In the bottom-right corner, a live body cam feed from a police officer can be routed straight to a 911 dispatcher, giving them a real-time look at what is transpiring on the ground. Meanwhile, in the info box to the left, we can see that a Taser weapon was fired (noted under “Device Alerts”) and that the 911 dispatcher can text the responding officer directly through the platform.

The companies said the partnership will bear fruit this year as both platforms integrate the data streams into their respective products.

Lina Khan’s timely tech skepticism makes for a refreshingly friendly FTC confirmation hearing

By Devin Coldewey

One never knows how a confirmation hearing will go these days, especially one for a young outsider nominated to an important position despite challenging the status quo and big business. Lina Khan, just such a person up for the position of FTC Commissioner, had a surprisingly pleasant time of it during today’s Senate Commerce Committee confirmation hearing — possibly because her iconoclastic approach to antitrust makes for good politics these days.

Khan, an associate professor of law at Columbia, is best known in the tech community for her incisive essay “Amazon’s Antitrust Paradox,” which laid out the failings of regulatory doctrine that have allowed the retail giant to progressively dominate more and more markets. (She also recently contributed to a House report on tech policy.)

When it was published in 2017, the feeling that Amazon had begun to abuse its position was, though commonplace in some circles, not really popular in the Capitol. But the growing sense that laissez-faire or insufficient regulations have created monsters in Amazon, Google, and Facebook (to start) has led to a rare bipartisan agreement that we must find some way, any way will do, of putting these upstart corporations back in their place.

This in turn led to a sense of shared purpose and camaraderie in the confirmation hearing, which was a triple header: Khan joined Bill Nelson, nominated to lead NASA, and Leslie Kiernan, who would join the Commerce Department as General Counsel, for a really nice little three-hour chat.

Khan is one of several in the Biden administration who signal a new approach to taking on Big Tech and other businesses that have gotten out of hand, and the questions posed to her by Senators from both sides of the aisle seemed genuine and got genuinely satisfactory answers from a confident Khan.

She deftly avoided a few attempts to bait her — including one involving Section 230; wrong Commission, Senator — and her answers primarily reaffirmed her professional opinion that the FTC should be better informed and more preemptive in its approach to regulating these secretive, powerful corporations.

Here are a few snippets representative of the questioning and indicative of her positions on a few major issues (answers lightly edited for clarity):

On the FTC getting involved in the fight between Google, Facebook, and news providers:

“Everything needs to be on the table. Obviously local journalism is in crisis, and I think the current COVID moment has really underscored the deep democratic emergency that is resulting when we don’t have reliable sources of local news.”

She also cited the increasing concentration of ad markets and the arbitrary nature of, for example, algorithm changes that can have wide-ranging effects on entire industries.

Lina Khan, U.S. President Joe Biden’s nominee for commissioner of the Federal Trade Commission (FTC), speaks during a Senate Commerce, Science and Transportation Committee confirmation hearing in Washington, D.C.

Image Credits: Graeme Jennings/Washington Examiner/Bloomberg / Getty Images

On Clarence Thomas’s troubling suggestion that social media companies should be considered “common carriers”:

“I think it prompted a lot of interesting discussion,” she said, very diplomatically. “In the Amazon article, I identified two potential pathways forward when thinking about these dominant digital platforms. One is enforcing competition laws and ensuring that these markets are competitive.” (i.e. using antitrust rules)

“The other is, if we instead recognize that perhaps there are certain economies of scale, network externalities that will lead these markets to stay dominated by a very few number of companies, then we need to apply a different set of rules. We have a long legal tradition of thinking about what types of checks can be applied when there’s a lot of concentration and common carriage is one of those tools.”

“I should clarify that some of these firms are now integrated in so many markets that you may reach for a different set of tools depending on which specific market you’re looking at.”


(This was a very polite way of saying common carriage and existing antitrust rules are totally unsuitable for the job.)

On potentially reviewing past mergers the FTC approved:

“The resources of the commission have not really kept pace with the increasing size of the economy, as well as the increasing size and complexity of the deals the commission is reviewing.”

“There was an assumption that digital markets in particular are fast moving so we don’t need to be concerned about potential concentration in the markets, because any exercise of power will get disciplined by entry and new competition. Now of course we know that in the markets you actually have significant network externalities in ways that make them more sticky. In hindsight there’s a growing sense that those merger reviews were a missed opportunity.”

(Here Senator Blackburn (R-TN) in one of the few negative moments fretted about Khan’s “lack of experience in coming to that position” before asking about a spectrum plan — wrong Commission, Senator.)

On the difficulty of enforcing something like an order against Facebook:

“One of the challenges is the deep information asymmetry that exists between some of these firms and enforcers and regulators. I think it’s clear that in some instances the agencies have been a little slow to catch up to the underlying business realities and the empirical realities of how these markets work. So at the very least ensuring the agencies are doing everything they can to keep pace is gonna be important.”

“In social media we have these black box algorithms, proprietary algorithms that can sometimes make it difficult to know what’s really going on. The FTC needs to be using its information gathering capacities to mitigate some of these gaps.”

On extending protections for children and other vulnerable groups online:

“Some of these dangers are heightened given some of the ways in which the pandemic has rendered families and children especially dependent on some of these [education] technologies. So I think we need to be especially vigilant here. The previous rules should be the floor, not the ceiling.”


Overall there was little partisan bickering and a lot of feeling from both sides that Khan was, if not technically experienced at the job (not rare with a coveted position like FTC Commissioner), about as competent a nominee as anyone could ask for. Not only that but her highly considered and fairly assertive positions on matters of antitrust and competition could help put Amazon and Google, already in the regulatory doghouse, on the defensive for once.

New privacy bill would end law enforcement practice of buying data from brokers

By Taylor Hatmaker

A new bill known as the Fourth Amendment is Not for Sale Act would seal up a loophole that intelligence and law enforcement agencies use to obtain troves of sensitive and identifying information to which they wouldn’t otherwise have legal access.

The new legislation, proposed by Senators Ron Wyden (D-OR) and Rand Paul (R-KY), would require government agencies to obtain a court order to access data from brokers. Court orders are already required when the government seeks analogous data from mobile providers and tech platforms.

“There’s no reason information scavenged by data brokers should be treated differently than the same data held by your phone company or email provider,” Wyden said. Wyden describes the loophole as a way that police and other agencies buy data to “end-run the Fourth Amendment.”

Paul criticized the government for using the current data broker loophole to circumvent Americans’ constitutional rights. “The Fourth Amendment’s protection against unreasonable search and seizure ensures that the liberty of every American cannot be violated on the whims, or financial transactions, of every government officer,” Paul said.

Critically, the bill would also ban law enforcement agencies from buying data on Americans when it was obtained through hacking, violations of terms of service or “from a user’s account or device.”

That bit highlights the questionable practices of Clearview AI, a deeply controversial tech company that sells access to a facial recognition search engine. Clearview’s platform collects pictures of faces scraped from across the web, including social media sites, and sells access to that data to police departments around the country and federal agencies like ICE.

In scraping their sites for data to sell, Clearview has run afoul of just about every major social media platform’s terms of service. Facebook, YouTube, Twitter, LinkedIn and Google have all denounced Clearview for using data culled from their services and some have even sent cease-and-desists ordering the data broker to stop.

The bill would also expand privacy laws to apply to infrastructure companies that own cell towers and data cables, seal up workarounds that allow intelligence agencies to obtain metadata from Americans’ international communications without review by a FISA court and ensure that agencies seek probable cause orders to obtain location and web browsing data.

The bill, embedded below, isn’t just some nascent proposal. It’s already attracted bipartisan support from a number of key co-sponsors, including Senate Majority Leader Chuck Schumer and Bernie Sanders on the Democratic side and Republicans Mike Lee and Steve Daines. A House version of the legislation was also introduced Wednesday.


Reform the US low-income broadband program by rebuilding Lifeline

By Annie Siebert
Rick Boucher Contributor
Rick Boucher was a Democratic member of the U.S. House for 28 years and chaired the House Energy and Commerce Committee's Subcommittee on Communications and the Internet. He is the honorary chairman of the Internet Innovation Alliance.

“If you build it, they will come” is a mantra that’s been repeated for more than three decades to embolden action. The line from “Field of Dreams” is a powerful saying, but I might add one word: “If you build it well, they will come.”

America’s Lifeline program, a monthly subsidy designed to help low-income families afford critical communications services, was created with the best intentions. The original goal was to achieve universal telephone service, but it has fallen far short of achieving its potential as the Federal Communications Commission has attempted to convert it to a broadband-centric program.

The FCC’s Universal Service Administrative Company estimates that only 26% of the families that are eligible for Lifeline currently participate in the program. That means that nearly three out of four low-income consumers are missing out on a benefit for which they qualify. But that doesn’t mean the program should be abandoned, as the Biden administration’s newly released infrastructure plan suggests.

Rather, now is the right opportunity to complete the transformation of Lifeline to broadband and expand its utilization by increasing the benefit to a level commensurate with the broadband marketplace and making the benefit directly available to end users. Instead, the White House fact sheet on the plan recommends price controls for internet access services with a phaseout of subsidies for low-income subscribers. That is a flawed policy prescription.

If maintaining America’s global competitiveness, building broadband infrastructure in high-cost rural areas, and maintaining the nation’s rapid deployment of 5G wireless services are national goals, the government should not set prices for internet access.

Forcing artificially low prices in the quest for broadband affordability would leave internet service providers with insufficient revenues to continue to meet the nation’s communications infrastructure needs with robust innovation and investment.

Instead, targeted changes to the Lifeline program could dramatically increase its participation rate, helping to realize the goal of connecting Americans most in need with the phone and broadband services that in today’s world have become essential to employment, education, healthcare and access to government resources.

To start, Lifeline program participation should be made much easier. Today, individuals seeking the benefit must go through a process of self-enrollment. Implementing “coordinated enrollment” — through which individuals would automatically be enrolled in Lifeline when they qualify for certain other government assistance benefits, including SNAP (the Supplemental Nutrition Assistance Program, formerly known as food stamps) and Medicaid — would help to address the severe program underutilization.

Because multiple government programs serve the same constituency, a single qualification process for enrollment in all applicable programs would generate government efficiencies and reach Americans who are missing out.

Speaking before the American Enterprise Institute back in 2014, former FCC Commissioner Mignon Clyburn said, “In most states, to enroll in federal benefit programs administered by state agencies, consumers already must gather their income-related documentation, and for some programs, go through a face-to-face interview. Allowing customers to enroll in Lifeline at the same time as they apply for other government benefits would provide a better experience for consumers and streamline our efforts.”

Second, the use of the Lifeline benefit can be made far simpler for consumers if the subsidy is provided directly to them via an electronic Lifeline benefit card account — like the SNAP program’s electronic benefit transfer (EBT) card. Not only would a Lifeline benefit card make participation in the program more convenient, but low-income Americans would then be able to shop among the various providers and select the carrier and the precise service(s) that best suit their needs. The flexibility of greater consumer choice would be an encouragement for more program sign-ups.

And, the current Lifeline subsidy amount — $9.25 per month — isn’t enough to pay for a broadband subscription. For the subsidy to be truly meaningful, an increase in the monthly benefit is needed. Last December, Congress passed the temporary Emergency Broadband Benefit to provide low-income Americans up to a $50 per month discount ($75 per month on tribal lands) to offset the cost of broadband connectivity during the pandemic. After the emergency benefit runs out, a monthly benefit adequate to defray the cost of a broadband subscription will be needed.

In order to support more than a $9.25 monthly benefit, the funding source for the Lifeline program must also be reimagined. Currently, the program relies on the FCC’s Universal Service Fund, which is financed through a “tax” on traditional long-distance and international telephone services.

As greater use is made of the web for voice communications, coupled with less use of traditional telephones, the tax rate has increased to compensate for the shrinking revenues associated with landline phone services. A decade ago, the tax, known as the “contribution factor,” was 15.5%, but it’s now more than double that at an unsustainable 33.4%. Without changes, the problem will only worsen.
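For a sense of what that doubling means in practice, here is a quick back-of-the-envelope calculation. The $20 in monthly assessable charges is a hypothetical figure chosen for illustration; the two rates are the ones cited above.

```python
# Back-of-the-envelope: the USF fee passed through on a hypothetical bill.
assessable_charges = 20.00             # hypothetical monthly assessable charges
fee_then = assessable_charges * 0.155  # contribution factor a decade ago
fee_now = assessable_charges * 0.334   # contribution factor today
print(f"${fee_then:.2f} then vs ${fee_now:.2f} now")  # -> $3.10 then vs $6.68 now
```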

It’s easy to see that the financing of a broadband benefit should no longer be tied to a dying technology. Instead, funding for the Lifeline program could come from a “tax” shared across the entire internet ecosystem, including the edge providers that depend on broadband to reach their customers, or from direct congressional appropriations for the Lifeline program.

These reforms are realistic and straightforward. Rather than burn the program down, it’s time to rebuild Lifeline to ensure that it fulfills its original intention and reaches America’s neediest.

Sen. Wyden proposes limits on exportation of Americans’ personal data

By Devin Coldewey

Senator Ron Wyden (D-OR) has proposed a draft bill that would limit the types of information that could be bought and sold by tech companies abroad, and the countries it could be legally sold in. The legislation is imaginative and not highly specific, but it indicates growing concern at the federal level over the international data trade.

“Shady data brokers shouldn’t get rich selling Americans’ private data to foreign countries that could use it to threaten our national security,” said Sen. Wyden in a statement accompanying the bill. They probably shouldn’t get rich selling Americans’ private data at all, but national security is a good way to grease the wheels.

The Protecting Americans’ Data From Foreign Surveillance Act would be a first step toward categorizing and protecting consumer data as a commodity that’s traded on the global market. Right now there are few if any controls over what data specific to a person — buying habits, movements, political party — can be sold abroad.

This means that, for instance, an American data broker could sell the preferred brands and home addresses of millions of Americans to, say, a Chinese bank doing investment research. Some of this trade is perfectly innocuous, even desirable in order to promote global commerce, but at what point does it become dangerous or exploitative?

There isn’t any official definition of what should and shouldn’t be sold to whom, the way we limit sales of certain intellectual property or weapons. The proposed law would first direct the secretary of Commerce to identify the data we should be protecting and against whom it should be protected.

The general shape of protected data would be that which “if exported by third parties, could harm U.S. national security.” The countries that would be barred from receiving it would be those with inadequate data protection and export controls, recent intelligence operations against the U.S. or laws that allow the government to compel such information to be handed over to them. Obviously this is aimed at the likes of China and Russia, though ironically the U.S. fits the bill pretty well itself.

There would be exceptions for journalism and First Amendment-protected speech, and for encrypted data — for example, storing encrypted messages on servers in one of the targeted countries. The law would also create penalties for executives “who knew or should have known” that their company was illegally exporting data, and pathways to relief for people harmed or detained in a foreign country owing to illegally exported data. That might apply if, say, another country used an American facial recognition service to spot, stop and arrest someone before they left.

If this all sounds a little woolly, it is — but that’s more or less on purpose. It is not for Congress to invent such definitions as are necessary for a law like this one; that duty falls to expert agencies, which must conduct studies and produce reports that Congress can refer to. This law represents the first handful of steps along those lines: getting the general shape of things straight and giving fair warning that certain classes of undesirable data commerce will soon be illegal — with an emphasis on executive responsibility, something that should make tech companies take notice.

The legislation would need to be sensitive to existing arrangements by which companies spread out data storage and processing for various economic and legal reasons. Free movement of data is to a certain extent necessary for globe-spanning businesses that must interact with one another constantly, and to hobble those established processes with red tape or fees might be disastrous to certain locales or businesses. Presumably this would all come up during the studies, but it serves to demonstrate that this is a very complex, not to say delicate, digital ecosystem the law would attempt to modify.

We’re in the early stages of this type of regulation, and this bill is just getting started in the legislative process, so expect a few months at the very least before we hear anything more on this one.

China’s Xpeng in the race to automate EVs with lidar

By Rita Liao

Elon Musk famously said any company relying on lidar is “doomed.” Tesla instead believes automated driving functions are built on visual recognition and is even working to remove the radar. China’s Xpeng begs to differ.

Founded in 2014, Xpeng is one of China’s most celebrated electric vehicle startups and went public when it was just six years old. Like Tesla, Xpeng sees automation as an integral part of its strategy; unlike the American giant, Xpeng uses a combination of radar, cameras, high-precision maps powered by Alibaba, localization systems developed in-house, and most recently, lidar to detect and predict road conditions.

“Lidar will provide the 3D drivable space and precise depth estimation to small moving obstacles even like kids and pets, and obviously, other pedestrians and the motorbikes which are a nightmare for anybody who’s working on driving,” Xinzhou Wu, who oversees Xpeng’s autonomous driving R&D center, said in an interview with TechCrunch.

“On top of that, we have the usual radar which gives you location and speed. Then you have the camera which has very rich, basic semantic information.”

Xpeng is adding lidar to its mass-produced EV model P5, which will begin deliveries in the second half of this year. The car, a family sedan, will later be able to drive from point A to B based on a navigation route set by the driver, on highways and certain urban roads in China that are covered by Alibaba’s maps. An older model without lidar already enables assisted driving on highways.

The system, called Navigation Guided Pilot, is benchmarked against Tesla’s Navigate on Autopilot, said Wu. It can, for example, automatically change lanes, enter or exit ramps, overtake other vehicles, and maneuver around another car’s sudden cut-in, a common sight in China’s complex road conditions.

“The city is super hard compared to the highway but with lidar and precise perception capability, we will have essentially three layers of redundancy for sensing,” said Wu.
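As a rough illustration of what “three layers of redundancy” can mean, here is a toy majority-vote check across the three modalities. This is a generic sensor-fusion heuristic sketched for explanation, not Xpeng’s actual algorithm.

```python
# Toy redundancy check: confirm an obstacle when at least two of the
# three sensing modalities agree (generic heuristic, illustrative only).
def obstacle_confirmed(lidar_hit: bool, radar_hit: bool, camera_hit: bool) -> bool:
    votes = sum([lidar_hit, radar_hit, camera_hit])
    return votes >= 2   # a single failed or fooled sensor gets outvoted

print(obstacle_confirmed(lidar_hit=True, radar_hit=True, camera_hit=False))   # True
print(obstacle_confirmed(lidar_hit=False, radar_hit=False, camera_hit=True))  # False
```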

By definition, NGP is an advanced driver-assistance system (ADAS), as drivers still need to keep their hands on the wheel and be ready to take control at any time (Chinese law doesn’t allow drivers to be hands-off on the road). The carmaker’s ambition is to remove the driver, that is, to reach Level 4 autonomy two to four years from now, but real-life implementation will hinge on regulations, said Wu.

“But I’m not worried about that too much. I understand the Chinese government is actually the most flexible in terms of technology regulation.”

The lidar camp

Musk’s disdain for lidar stems from the high costs of the remote sensing method that uses lasers. In the early days, a lidar unit spinning on top of a robotaxi could cost as much as $100,000, said Wu.

“Right now, [the cost] is at least two orders low,” said Wu. After 13 years with Qualcomm in the U.S., Wu joined Xpeng in late 2018 to work on automating the company’s electric cars. He currently leads a core autonomous driving R&D team of 500 staff and said the force will double in headcount by the end of this year.

“Our next vehicle is targeting the economy class. I would say it’s mid-range in terms of price,” he said, referring to the firm’s new lidar-powered sedan.

The lidar sensors powering Xpeng come from Livox, a firm touting more affordable lidar that is an affiliate of DJI, the Shenzhen-based drone giant. Xpeng is headquartered in the adjacent city of Guangzhou, about a 1.5-hour drive away.

Xpeng isn’t the only one embracing lidar. Nio, a Chinese rival to Xpeng targeting a more premium market, unveiled a lidar-powered car in January but the model won’t start production until 2022. Arcfox, a new EV brand of Chinese state-owned carmaker BAIC, recently said it would be launching an electric car equipped with Huawei’s lidar.

Musk recently hinted that Tesla may remove radar from production outright as it inches closer to pure vision based on camera and machine learning. The billionaire founder isn’t particularly a fan of Xpeng, which he alleged owned a copy of Tesla’s old source code.

In 2019, Tesla filed a lawsuit against Cao Guangzhi, alleging that the former Tesla engineer stole trade secrets and brought them to Xpeng. Xpeng has repeatedly denied any wrongdoing. Cao no longer works at Xpeng.

Supply challenges

While Livox claims to be an independent entity “incubated” by DJI, a source previously told TechCrunch that it is just a “team within DJI” positioned as a separate company. The intention to distance itself from DJI comes as no surprise, as the drone maker is on the U.S. government’s Entity List, which has cut key suppliers off from a multitude of Chinese tech firms, including Huawei.

Other critical parts that Xpeng uses include NVIDIA’s Xavier system-on-a-chip computing platform and Bosch’s iBooster brake system. Globally, the ongoing semiconductor shortage is pushing auto executives to ponder future scenarios in which self-driving cars become even more dependent on chips.

Xpeng is well aware of supply chain risks. “Basically, safety is very important,” said Wu. “It’s more than the tension between countries around the world right now. Covid-19 is also creating a lot of issues for some of the suppliers, so having redundancy in the suppliers is some strategy we are looking very closely at.”

Taking on robotaxis

Xpeng could have easily tapped the flurry of autonomous driving solution providers in China, including Pony.ai and WeRide in its backyard of Guangzhou. Instead, Xpeng has become their competitor, working on automation in-house and pledging to outrival the artificial intelligence startups.

“The availability of massive computing for cars at affordable costs and the fast dropping price of lidar is making the two camps really the same,” Wu said of the dynamics between EV makers and robotaxi startups.

“[The robotaxi companies] have to work very hard to find a path to a mass-production vehicle. If they don’t do that, two years from now, they will find the technology is already available in mass production and their value will become much less than today’s,” he added.

“We know how to mass-produce a technology up to the safety requirement and the quarantine required of the auto industry. This is a super high bar for anybody wanting to survive.”

Xpeng has no plans of going visual-only. Options of automotive technologies like lidar are becoming cheaper and more abundant, so “why do we have to bind our hands right now and say camera only?” Wu asked.

“We have a lot of respect for Elon and his company. We wish them all the best. But we will, as Xiaopeng [founder of Xpeng] said in one of his famous speeches, compete in China and hopefully in the rest of the world as well with different technologies.”

5G, coupled with cloud computing and cabin intelligence, will accelerate Xpeng’s path to achieve full automation, though Wu couldn’t share much detail on how 5G is used. When unmanned driving is viable, Xpeng will explore “a lot of exciting features” that go into a car when the driver’s hands are freed. Xpeng’s electric SUV is already available in Norway, and the company is looking to further expand globally.

Republican antitrust bill would block all Big Tech acquisitions

By Taylor Hatmaker

There are about to be a lot of antitrust bills taking aim at Big Tech, and here’s one more. Senator Josh Hawley (R-MO) rolled out a new bill this week that would take some severe measures to rein in Big Tech’s power, blocking mergers and acquisitions outright.

The “Trust-Busting for the Twenty-First Century Act” would ban any acquisitions by companies with a market cap of more than $100 billion, including vertical mergers. The bill also proposes changes that would dramatically heighten the financial pain for companies caught engaging in anti-competitive behavior, forcing any company that loses an antitrust suit to forfeit profits made through those business practices.

At its core, Hawley’s legislation would snip some of the red tape around antitrust enforcement by amending the Sherman Act, which made monopolies illegal, and the Clayton Act, which expanded the scope of illegal anti-competitive behavior. The idea is to make it easier for the FTC and other regulators to deem a company’s behavior anti-competitive — a key criticism of the outdated antitrust rules that haven’t kept pace with the realities of the tech industry.

The bill isn’t likely to get too far in a Democratic Senate, but it’s not insignificant. Sen. Amy Klobuchar (D-MN), who chairs the Senate’s antitrust subcommittee, proposed legislation earlier this year that would also create barriers for dominant companies with a habit of scooping up their competitors. Klobuchar’s own ideas for curtailing Big Tech’s power similarly focus on reforming the antitrust laws that have shaped U.S. business for more than a century.


The Republican bill may have some overlap with Democratic proposals, but it still hits some familiar notes from the Trump era of hyperpartisan Big Tech criticism. Hawley slams “woke mega-corporations” in Silicon Valley for exercising too much power over the information and products that Americans consume. While Democrats naturally don’t share that critique, Hawley’s bill makes it clear that antitrust reform targeting Big Tech is one policy area where both political parties could align on the ends, even if they don’t see eye to eye on the why.

Hawley’s bill is the latest, but it won’t be the last. Rep. David Cicilline (D-RI), who spearheads tech antitrust efforts in the House, previously announced his own plans to introduce a flurry of antitrust reform bills rather than one sweeping piece of legislation. Those bills, which will be more narrowly targeted to make them difficult for tech lobbyists to defeat, are due out in May.
