FreshRSS

Yesterday — August 2nd 2021

Can your startup support a research-based workflow?

By Ram Iyer
João Graça Contributor
João Graça is CTO and co-founder of Unbabel, an AI-powered language operations platform that enables any agent to communicate in any language.

The President’s Council of Advisors on Science and Technology predicts that U.S. companies will spend upward of $100 billion on AI R&D per year by 2025. Much of this spending today is done by six tech companies — Microsoft, Google, Amazon, IBM, Facebook and Apple, according to a recent study from CSET at Georgetown University. But what if you’re a startup whose product relies on AI at its core?

Can early-stage companies support a research-based workflow? At a startup or scaleup, the focus is often more on concrete product development than research. For obvious reasons, companies want to make things that matter to their customers, investors and stakeholders. Ideally, there’s a way to do both.

Before investing in staffing an AI research lab, consider this advice to determine whether you’re ready to get started.

Compile the right research team

Assuming it’s your organization’s priority to do innovative AI research, the first step is to hire one or two researchers. At Unbabel, we did this early by hiring Ph.D.s and getting started quickly with research for a product that hadn’t been developed yet. Some researchers will build models from scratch, while others will take your data and try to find a pre-existing model that fits your needs.

While Google’s X division may have the capital to focus on moonshots, most startups can only invest in innovation that provides them a competitive advantage or improves their product.

From there, you’ll need to hire research engineers or machine learning operations professionals. Research is only a small part of using AI in production. Research engineers will then release your research into production, monitor your model’s results and refine the model if it stops predicting well (or otherwise is not operating as planned). Often they’ll use automation to simplify monitoring and deployment procedures as opposed to doing everything manually.

None of this falls within the scope of a research scientist — they’re most used to working with the data sets and models in training. That said, researchers and engineers will need to work together in a continuous feedback loop to refine and retrain models based on actual performance in inference.

Choose the problems you want to solve

The CSET research cited above shows that 85% of AI labs in North America and Europe do some form of basic AI research, and less than 15% focus on development. The rest of the world is different: A majority of labs in other countries, such as India and Israel, focus on development.

Cloud infrastructure market kept growing in Q2, reaching $42B

By Ron Miller

It’s often said in baseball that a prospect has a high ceiling, reflecting the tremendous potential of a young player with plenty of room to get better. The same could be said for the cloud infrastructure market, which just keeps growing with little sign of slowing down any time soon. The market hit $42 billion in total revenue with all major vendors reporting, up $2 billion from Q1.

Synergy Research reports that revenue grew at a speedy 39% clip, the fourth consecutive quarter in which the growth rate has increased. AWS led the way per usual, but Microsoft continued growing at a rapid pace and Google also kept the momentum going.

AWS continues to defy market logic, actually accelerating its growth by five percentage points over the previous quarter to 37%, an amazing feat for a company with the market maturity of AWS. That translated to $14.81 billion in revenue for Amazon’s cloud division, putting it close to a $60 billion run rate and giving it a market-leading 33% share. While that share has remained fairly steady for a number of years, the revenue continues to grow as the market pie grows ever larger.

Microsoft grew even faster at 51%. While Microsoft’s cloud infrastructure data isn’t always easy to nail down, its 20% market share, according to Synergy Research, puts it at roughly $8.4 billion, up from $7.8 billion last quarter as it continues to push upward.

Google, too, continued its slow and steady progress under the leadership of Thomas Kurian, leading the growth numbers with a 54% increase in cloud revenue in Q2. Its $4.2 billion in revenue, up from $3.5 billion last quarter, is good for 10% market share, the first time Google Cloud has reached double figures in Synergy’s quarterly tracking data.

Synergy Research cloud infrastructure market share chart. Image Credits: Synergy Research

After the Big 3, Alibaba held steady over Q1 at 6% (though it only reports this week), while IBM fell a point from Q1 to 4% as Big Blue continues to struggle in pure infrastructure while transitioning to more of a hybrid cloud management player.

John Dinsdale, chief analyst at Synergy, says that the big three are spending big to help fuel this growth. “Amazon, Microsoft and Google in aggregate are typically investing over $25 billion in capex per quarter, much of which is going towards building and equipping their fleet of over 340 hyperscale data centers,” he said in a statement.

Meanwhile, Canalys reported similar numbers but put the overall market slightly higher, at $47 billion. Its market share breakdown had Amazon at 31%, Microsoft at 22% and Google at 8% of that total.

Canalys analyst Blake Murray says that part of the reason companies are shifting workloads to the clouds is to help achieve environmental sustainability goals as the cloud vendors are working toward using more renewable energy to run their massive data centers.

“The best practices and technology utilized by these companies will filter to the rest of the industry, while customers will increasingly use cloud services to relieve some of their environmental responsibilities and meet sustainability goals,” Murray said in a statement.

Regardless of whether companies are moving to the cloud to get out of the data center business or because they hope to piggyback on the sustainability efforts of the big 3, companies are continuing a steady march to the cloud. With some estimates of worldwide cloud usage at around 25%, the potential for continued growth remains strong, especially with many markets still untapped outside the U.S.

That bodes well for the big three and for other smaller operators who can find a way to tap into slices of market share that add up to big revenue. “There remains a wealth of opportunity for smaller, more focused cloud providers, but it can be hard to look away from the eye-popping numbers coming out of the big three,” Dinsdale said.

In fact, it’s hard to see the ceiling for these companies any time in the foreseeable future.

Before yesterday

BioNTech founder Uğur Şahin and Mayfield’s Ursheet Parikh are coming to Disrupt

By Darrell Etherington

It’s hard to argue that any technology company has had a greater impact in the past decade than BioNTech, the mRNA-based therapeutics pioneer behind the world’s most widely used COVID-19 vaccine. Developed in record time in partnership with Pfizer, thanks to an existing partnership to work on immunization for the common flu, BioNTech’s mRNA inoculation is without a doubt one of the biggest medical innovations of the past century.

BioNTech co-founder and CEO Uğur Şahin isn’t stopping there, of course: the company recently announced that it would be developing an mRNA-based vaccine targeting malaria, an illness that still kills more than 400,000 people per year. It also has treatments for a range of cancers in its development pipeline and has announced plans to address HIV and tuberculosis with future candidates.

This year at Disrupt 2021, Şahin will join us along with Mayfield Fund Partner Ursheet Parikh, a key investor in BioNTech. Both Şahin and Parikh will be talking to us about how the COVID-19 vaccine came to be, but more importantly, about what the future holds for mRNA technology and its potential to address a wide range of chronic healthcare problems that have been tough challenges to solve for decades or even centuries. We’ll also be talking about what it means to build a biotech startup with true platform potential, and how that might differ now as compared to what investors were looking for just a few short years ago.

Şahin and Parikh are just two of the many high-profile speakers who will be on our Disrupt Stage and the Extra Crunch Stage. During the three-day event, writer, director, actor and Houseplant co-founder Seth Rogen will be joined by Houseplant Chief Commercial Officer Haneen Davies and co-founder and CEO Michael Mohr to talk about the business of weed, Secretary of Transportation Pete Buttigieg will talk about the future of getting around and the government’s role in partnering with startups, and Coinbase CEO Brian Armstrong will dig into the volatile world of cryptocurrency and his company’s massive direct listing earlier this year.

Disrupt 2021 wouldn’t be complete without Startup Battlefield, the competition that launched some of the world’s biggest tech companies, including Cloudflare and Dropbox. Join Secretary Buttigieg and over 10,000 of the startup world’s most influential people at Disrupt 2021 online this September 21-23. Check out the Disrupt 2021 agenda. We’ll add even more speakers.

Buy your Disrupt pass before July 30 at 11:59 pm (PT), and get ready to join the big, bold and influential — for less than $100.

Get your pass to attend now for under $99 for a limited time!

In growth marketing, creative is the critical X factor

By Walter Thompson
Jonathan Martinez Contributor
Jonathan Martinez is a former YouTuber, UC Berkeley alum and growth marketing nerd who's helped scale Uber, Postmates, Chime and various startups.

As we move toward a privacy-centric, less targeted future of growth marketing, the biggest lever will become creative on paid social channels such as the Facebooks of the world. The loss of attribution from our good friend iOS 14.5 has accelerated this trend, but channels have also been increasingly pushing to automate their ad platforms.

Due to this, I believe that every growth marketing engine should have a proper creative testing framework in place — be it a seed-stage startup or a behemoth like Google.

After three years at Postmates, consulting for various startups and most recently working at Uber, I’ve seen the marketing landscape change in a multitude of ways. What we’re seeing now, however, is being driven by factors outside our control, ushering in shifts unlike anything I’ve witnessed before. Creative has subsequently risen to become the most powerful lever in a paid social account.

The foundation

If you’re looking to leverage the power of creative and succeed with paid social marketing, you’re on the right track. What you need is a creative testing framework: a structured, consistent way to test new creative assets.

Here’s a breakdown of the pieces a creative testing framework needs to be successful:

  • A defined testing schedule.
  • A structured theme approach.
  • A channel-specific strategy.

Creative has become the most powerful lever in a paid social account.

Testing creative should be a constant and iterative process that follows a defined testing schedule. A goal and structure can be as simple as testing five new creative assets per week. Inversely, it can be as complex as testing 60 new assets consisting of multiple themes and copy variations.

For a lower-spending account, creative testing should be leaner because of the limited event signal; a higher-spending account can afford to test more aggressively. The most important aspect is that testing keeps moving the needle as you search for your next “champion” asset.

Creating a testing schedule for different creative themes: 4 themes x 3 variants per theme x 5 copy variations = 60 assets. Image Credits: Jonathan Martinez
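
To make that arithmetic concrete, here is a minimal sketch of how such a test matrix could be enumerated; the theme, variant and copy lists below are hypothetical placeholders, not a recommended set.

```python
# A sketch of the test-matrix arithmetic; every list entry is a placeholder.
from itertools import product

themes = ["product screenshots", "people using the app", "UGC testimonials", "before/after"]
variants_per_theme = ["v1", "v2", "v3"]
copy_variations = ["value prop A", "value prop B", "promo", "FUD", "social proof"]

assets = [
    {"theme": t, "variant": v, "copy": c}
    for t, v, c in product(themes, variants_per_theme, copy_variations)
]
print(len(assets))  # 4 themes x 3 variants x 5 copy variations = 60 assets
```

Trimming or expanding any one of those lists between sprints is what "expanding or trimming the wireframe" amounts to in practice.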

After setting a testing schedule, define the core themes of your business and vertical rather than testing a plethora of random ideas. This applies to the creative assets as well as the copy and the key value props of your product or service. As you start to analyze the creative data, this structure will make it easier to decide what to double down on or cut from testing. Think of it as a wireframe that you either expand or trim throughout testing sprints.

For a fitness app like MyFitnessPal, it can be structured as follows:

  • Themes (product screenshots, images of people using it, UGC testimonials, before/after images).
  • Messaging (segmented value props, promo, FUD).

It’s vital to have a channel-specific approach, as each channel differs in creative best practices and testing capabilities. What works on Facebook may not work on Snapchat or the numerous other paid social channels. Don’t be discouraged if creative performs differently between channels, although I do recommend parity testing: if you already have the creative asset for one channel, it doesn’t hurt to resize and reformat it for the remaining channels.

Determining wins

Equally important to the creative is proper event selection and a statistically significant threshold to abide by throughout all testing. When selecting an event to use for creative testing, it’s not always possible to use your north-star metric depending on how high your CACs are. For example, if you’re selling a high-ticket item and the CACs are in the hundreds, it would take an enormous amount of spend to reach stat-sig on each creative asset. Instead, pick an event that’s more upper funnel and a strong indicator of a user’s likelihood of converting.

Using a more upper-funnel event leads to faster learnings (blue line). Image Credits: Jonathan Martinez

When deciding which statistical significance threshold to use, it’s important to select a percentage that stays consistent across all creative testing. As a rule of thumb, I like to use a certainty of 80%+, because it allows for enough confirmation along with the ability to make quicker decisions. A great (and free) online calculator is Neil Patel’s A/B Testing Significance Calculator.
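
For illustration, here is a minimal sketch of the kind of check such a calculator performs, assuming a simple two-proportion z-test; the function name, the example counts and the 80% cutoff usage are illustrative assumptions, not Neil Patel’s implementation.

```python
# Minimal two-proportion z-test sketch (normal approximation); an assumption
# about what an A/B significance calculator does, not a specific tool's code.
from math import erf, sqrt

def confidence_that_b_differs(conv_a, n_a, conv_b, n_b):
    """Return the confidence (1 - two-sided p-value) that variant B's
    conversion rate truly differs from variant A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return 1 - p_value

# Hypothetical example: champion converted 120/4000 clicks, challenger 150/4000.
conf = confidence_that_b_differs(120, 4000, 150, 4000)
print(f"{conf:.1%}")  # act on the challenger only once this clears your 80% bar
```

The same check can be pointed at any event, which is exactly why a cheaper, higher-volume upper-funnel event reaches the 80% bar so much faster than a pricey north-star conversion.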

Make or break

You’re scrolling through a social feed and a sleek gold pendant catches your eye, but all the messaging offers is the brand name and product specifications. It hooked your attention, but what did it do to reel you in? Think about it: What are you doing to not only hook, but reel people in with creative, the make-or-break factor in paid social growth marketing?

Circumventing iOS 14.5 data loss

Creative testing is only getting tougher for mobile campaigns as iOS 14.5 obfuscates user data, but that doesn’t make it impossible; it simply means we need to get craftier. There are a variety of hacks that can be implemented to gain clearer insight into how creative is performing. Some may not last forever, others may be timeless.

Amid all the privacy restrictions, we still have access to a huge population of users on Android that we should take advantage of. Instead of running all creative tests on iOS, Android can be used as a clear way to gather insights, as privacy restrictions haven’t rolled out on those devices yet. The data gathered from Android tests can then be taken directionally and applied to iOS campaigns. It’s only a matter of time until Android data is also at the mercy of data restrictions, so use this workaround to inform iOS campaigns now.

If running Android campaigns isn’t a viable option, another quick and easy solution is to throw up a website lead form to gauge the conversion rate from creative asset to completed form. The user experience certainly won’t be nearly as polished as the evergreen experience, but it can be used to gain insight for a short period of time (and with a small percentage of budget).

When crafting the lead form, think of questions that are both qualifying and would indicate someone completing your north-star event on the evergreen experience. After running people through the lead form, communications can be sent to convert them so ad dollars are being put to good use.

Placing efforts by account stage

The testing efforts for creative asset types should differ widely by account stage and can be broken down into three I’s: imitation, iteration, innovation.

The type of creative testing should vary over time. Image Credits: Jonathan Martinez

The earlier the account stage, the more your creative direction should rely on what’s proven to work for other advertisers. Those advertisers have spent thousands proving out performance with their assets, and you can gain strong insights from them. As time passes, you can gradually dial back imitation of other advertisers and focus on iterating on your best performers. If I had to put percentages on it, I would target 80% of early efforts on imitation, with iteration gaining steam over time and innovation being the final, heavily lagging prong.

This isn’t to say that innovation can’t be attempted early on if there are great ideas, but generally a more mature company can afford to spend heaps to validate their innovative ideas. Whether you have an in-house design team or are working with freelancers, it’ll also be much easier to spin up 50 variations than it will be to think of and design 50 different innovative assets. Imitating and iterating will make your early testing exponentially more efficient.

Leveraging competitor insights

Brainstorming the most beautiful, eye-catching, hook-inducing creative doesn’t always happen within seconds, let alone minutes or hours. This is where competitor insights come into play. The most abundant resource, bar none, is the Facebook Ads Library, because it contains all the creative assets that every advertiser is running across the platform. It always surprises me how few marketers actually know about this free and powerful tool.

When browsing competitors or best-in-class advertisers in this library, one sign of a high-performing creative is how long an advertiser has been running a specific asset. How does one find that? The date when an advertiser started running the creative is stamped conveniently on each asset, which is beyond powerful. I can spend hours scanning through creative assets, and each advertiser provides even more intel and inspiration.

Creative should be at the top of the list as you think about where to place your paid social growth marketing efforts. As data becomes more obscure, we’ll need a hacky mindset, and that mindset is what will separate the winners from the losers. The strategies you put in motion will vary over time, but what won’t vary is the importance of strong creative, the make-or-break factor for success.

Despite the hype, construction tech will be hard to disrupt

By Annie Siebert
David Ward Contributor
David Ward is a 30-year tech industry veteran, entrepreneur and the CEO and founder of Safe Site Check In.

From the outside looking in, the construction industry appears ripe for tech innovation. The industry represents 6.3% of the U.S. GDP. There are close to 1 million general contractors (GCs) in the country, and anywhere between 3 million and 5 million workers on job sites every day.

Meanwhile, there’s a common (if somewhat justified) belief that construction firms are slow to adopt technology and are behind the digital curve.

Success in construction tech will come down to proving the need for the technology, delivering immediate ROI, and ensuring workers know how to use it on the first try.

But not every construction company is a technology laggard. While GCs are historically slower to adopt new technologies, this doesn’t necessarily make them behind the times. About 60% of construction companies have R&D departments for new technology, and the largest construction firms have substantial R&D budgets. Yet 35.9% of employees are hesitant to try new technology, according to JB Knowledge.

One way to interpret this is that there is a strong interest and need to take advantage of newer construction-centric technologies, but only if they’re easy to use, easy to deploy or access while on a job site, and improve productivity almost immediately.

These factors have made construction tech appealing to investors, who have poured at least $3 billion into the sector. Is construction tech the “it” place right now? Is it ripe for the kind of disruption VC investors find attractive? If so, what went wrong at Katerra? Is Procore justified in losing $1 for every $4 in revenue? And why does so little investment go into improving productivity at the job site, where GC money is made or lost, compared to back-office operations?

My experience to date says that construction is different from other sectors because of the significant variation among projects, which originates in how projects are financed and how risks are managed. Construction’s differences are not easily mitigated via data processing, unlike in fintech, for example, where money is data and therefore amenable to software processing. Addressing project variation will be key to succeeding in construction tech beyond the back office. Here are the critical factors to consider.

Project financing makes capital investment more difficult. While the Commerce Department reported that construction spending in the U.S. reached a record high of $1.459 trillion in November 2020, this doesn’t mean there are unlimited opportunities for construction tech. The reality is that GCs make few capital investments because they must fund investments in technology out of operating cash flow.

Construction projects are typically funded incrementally in phases as the project demonstrates progress. Delays or accidents can have a huge effect on cash flow. Overhead and G&A cost burdens are hated. Asking a GC to license technology as a capital purchase doesn’t always make sense.

GC ownership and business structure also make large capital investment more difficult. Most GC firms were founded by tradespeople and either started as, or remain, family-owned firms. Borrowing what’s considered the “family’s money” is a much more risk-averse decision compared to the way larger corporations evaluate productivity investments and put assets at risk.

Stumble-proof robot adapts to challenging terrain in real time

By Devin Coldewey

Robots have a hard time improvising, and encountering an unusual surface or obstacle usually means an abrupt stop or hard fall. But researchers at Facebook AI have created a new model for robotic locomotion that adapts in real time to any terrain it encounters, changing its gait on the fly to keep trucking when it hits sand, rocks, stairs, and other sudden changes.

Although robotic movement can be versatile and exact, and robots can “learn” to climb steps, cross broken terrain and so on, these behaviors are more like individual trained skills that the robot switches between. And although robots like Spot famously can spring back from being pushed or kicked, the system is really just working to correct a physical anomaly while pursuing an unchanged policy of walking. There are some adaptive movement models, but some are very specific (for instance this one based on real insect movements) and others take long enough to work that the robot will certainly have fallen by the time they take effect.

Rapid Motor Adaptation, as the team calls it, came from the idea that humans and other animals are able to quickly, effectively, and unconsciously change the way they walk to fit different circumstances.

“Say you learn to walk and for the first time you go to the beach. Your foot sinks in, and to pull it out you have to apply more force. It feels weird, but in a few steps you’ll be walking naturally just as you do on hard ground. What’s the secret there?” asked senior researcher Jitendra Malik, who is affiliated with Facebook AI and UC Berkeley.

Certainly that’s true if you’ve never encountered a beach before, but even later in life, when you have, you aren’t entering some special “sand mode” that lets you walk on soft surfaces. The way you change your movement happens automatically and without any real understanding of the external environment.

Visualization of the simulation environment. Of course the robot would not perceive any of this visually. Image Credits: Berkeley AI Research, Facebook AI Research and CMU

“What’s happening is your body responds to the differing physical conditions by sensing the differing consequences of those conditions on the body itself,” Malik explained — and the RMA system works in similar fashion. “When we walk in new conditions, in a very short time, half a second or less, we have made enough measurements that we are estimating what these conditions are, and we modify the walking policy.”

The system was trained entirely in simulation, in a virtual version of the real world where the robot’s small brain (everything runs locally on the on-board limited compute unit) learned to maximize forward motion with minimum energy and avoid falling by immediately observing and responding to data coming in from its (virtual) joints, accelerometers, and other physical sensors.
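
As a rough sketch of that stated objective (reward forward progress, penalize energy use, punish falls), a per-step reward might look something like the following; the exact terms and weights are assumptions for illustration, not the paper’s reward definition.

```python
# Hypothetical per-step reward shaped like the objective described above;
# the weights are made-up values, not the authors' settings.
def step_reward(forward_velocity, joint_torques, has_fallen,
                w_energy=0.005, fall_penalty=10.0):
    reward = forward_velocity                                # reward forward motion
    reward -= w_energy * sum(t * t for t in joint_torques)   # penalize energy use
    if has_fallen:
        reward -= fall_penalty                               # strongly discourage falling
    return reward

# e.g. step_reward(0.6, [1.2, -0.8, 0.5], has_fallen=False) -> roughly 0.59
```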

To punctuate the total internality of the RMA approach, Malik notes that the robot uses no visual input whatsoever. People and animals without vision can walk just fine, so why shouldn’t a robot? And since it’s impossible to estimate “externalities” such as the exact friction coefficient of the sand or rocks it’s walking on, the robot simply keeps a close eye on itself.

“We do not learn about sand, we learn about feet sinking,” said co-author Ashish Kumar, also from Berkeley.

Ultimately the system ends up having two parts: a main, always-running algorithm actually controlling the robot’s gait, and an adaptive algorithm running in parallel that monitors changes to the robot’s internal readings. When significant changes are detected, it analyzes them — the legs should be doing this, but they’re doing this, which means the situation is like this — and tells the main model how to adjust itself. From then on the robot only thinks in terms of how to move forward under these new conditions, effectively improvising a specialized gait.
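
A minimal structural sketch of that two-part design, assuming a simple buffer of recent proprioceptive readings, might look like the following; the class names, vector sizes and the DummyRobot interface are hypothetical stand-ins, not the researchers’ code.

```python
# Structural sketch only: base gait policy + parallel adaptation module.
# Every class, size and the DummyRobot API are hypothetical placeholders.
import numpy as np

class BasePolicy:
    """Always-running controller: recent proprioception + extrinsics estimate -> joint targets."""
    def act(self, proprio_history, extrinsics):
        # a learned network in the real system; a placeholder mapping here
        return np.tanh(proprio_history[-1] * 0.1 + extrinsics.mean())

class AdaptationModule:
    """Runs in parallel, inferring how terrain or payload is affecting the body
    purely from internal readings (no vision, no terrain labels)."""
    def estimate(self, proprio_history):
        return proprio_history.std(axis=0)   # placeholder for the learned encoder

class DummyRobot:
    def read_sensors(self):
        return np.random.randn(12)           # joint angles, IMU, etc.
    def apply(self, joint_targets):
        pass                                 # send targets to the motors

robot, policy, adapter = DummyRobot(), BasePolicy(), AdaptationModule()
history, extrinsics = [], np.zeros(12)
for _ in range(100):                         # the real loop runs continuously on-board
    history = (history + [robot.read_sensors()])[-50:]
    extrinsics = adapter.estimate(np.stack(history))    # "what are conditions doing to the body?"
    robot.apply(policy.act(np.stack(history), extrinsics))
```

The key property is that the gait policy never sees terrain labels; it only ever sees what the adaptation module infers from the body’s own readings.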

Footage of the robot not falling as it traverses various tough surfaces. Image Credits: Berkeley AI Research, Facebook AI Research and CMU

After training in simulation, it succeeded handsomely in the real world, as the news release describes it:

The robot was able to walk on sand, mud, hiking trails, tall grass and a dirt pile without a single failure in all our trials. The robot successfully walked down stairs along a hiking trail in 70% of the trials. It successfully navigated a cement pile and a pile of pebbles in 80% of the trials despite never seeing the unstable or sinking ground, obstructive vegetation or stairs during training. It also maintained its height with a high success rate when moving with a 12kg payload that amounted to 100% of its body weight.

You can see examples of many of these situations in videos here or (very briefly) in the gif above.

Malik gave a nod to the research of NYU professor Karen Adolph, whose work has shown how adaptable and freeform the human process of learning how to walk is. The team’s instinct was that if you want a robot that can handle any situation, it has to learn adaptation from scratch, not have a variety of modes to choose from.

Just as you can’t build a smarter computer vision system by exhaustively labeling and documenting every object and interaction (there will always be more), you can’t prepare a robot for a diverse and complex physical world with 10, 100, even thousands of special parameters for walking on gravel, mud, rubble, wet wood, etc. For that matter you may not even want to specify anything at all beyond the general idea of forward motion.

“We don’t pre-program the idea that it has four legs, or anything about the morphology of the robot,” said Kumar.

This means the basis of the system — not the fully trained one, which ultimately did mold itself to quadrupedal gaits — can potentially be applied not just to other legged robots, but entirely different domains of AI and robotics.

“The legs of a robot are similar to the fingers of a hand; the way that legs interact with environments, fingers interact with objects,” noted co-author Deepak Pathak, of Carnegie Mellon University. “The basic idea can be applied to any robot.”

Even further, Malik suggested, the pairing of basic and adaptive algorithms could work for other intelligent systems. Smart homes and municipal systems tend to rely on preexisting policies, but what if they adapted on the fly instead?

For now the team is simply presenting its initial findings in a paper at the Robotics: Science and Systems conference, and it acknowledges that there is a great deal of follow-up research to do: building an internal library of improvised gaits as a sort of “medium-term” memory, for instance, or using vision to predict the need to initiate a new style of locomotion. But RMA looks like a promising new approach to an enduring challenge in robotics.

Toyota Research Institute shows how its robotics work with difficult surfaces in the home

By Brian Heater

Following this morning’s announcement that Hyundai has closed its acquisition of Boston Dynamics, another automotive company has posted some robotics news. The Toyota Research Institute announcement is decidedly less earthshaking than that big deal — if anything, it’s more of a progress check on what the division has been working on.

Of course, incremental updates tend to be the name of the game when it comes to robotics of all sorts. This does, however, shed some interesting light on the work TRI has been doing in the home. Today the company announced some key advances to robotics it has designed to perform domestic tasks.

“TRI roboticists were able to train robots to understand and operate in complicated situations that confuse most other robots, including recognizing and responding to transparent and reflective surfaces in a variety of circumstances,” the Institute writes in a blog post.

Image Credits: Toyota Research Institute

In settings like kitchens, the robots come into contact with a variety of transparent and reflective surfaces, which are a hurdle for traditional vision systems. Things like a transparent glass or a reflective appliance can create an issue.

“To overcome this, TRI roboticists developed a novel training method to perceive the 3D geometry of the scene while also detecting objects and surfaces,” TRI Robotics VP Max Bajracharya said in a post describing the research. “This combination enables researchers to use large amounts of synthetic data to train the system.” Using synthetic data also alleviates the need for time-consuming, expensive or impractical data collection and labeling.
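
To illustrate the general idea rather than TRI’s actual method, here is a rough sketch of multi-task training in which a shared backbone feeds both a depth head (3D geometry) and a segmentation head (objects and surfaces), trained on synthetic images with perfect labels; the network sizes, loss weighting and the make_synthetic_batch() stand-in are all assumptions.

```python
# Generic sketch of joint depth + segmentation training on synthetic data;
# not TRI's architecture. All sizes, weights and helpers are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerceptionNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)   # stand-in for a real encoder
        self.depth_head = nn.Conv2d(16, 1, 1)            # per-pixel depth (3D geometry)
        self.seg_head = nn.Conv2d(16, n_classes, 1)      # objects / surfaces

    def forward(self, img):
        feats = F.relu(self.backbone(img))
        return self.depth_head(feats), self.seg_head(feats)

def make_synthetic_batch(n=4, h=64, w=64, n_classes=5):
    # A renderer would supply photorealistic images with perfect depth and
    # labels; random tensors stand in for it in this sketch.
    return (torch.rand(n, 3, h, w),
            torch.rand(n, 1, h, w),
            torch.randint(0, n_classes, (n, h, w)))

model = PerceptionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(10):
    img, depth_gt, seg_gt = make_synthetic_batch()
    depth, seg = model(img)
    loss = F.l1_loss(depth, depth_gt) + F.cross_entropy(seg, seg_gt)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the labels come from the renderer rather than human annotators, the expensive data collection and labeling step the post mentions largely disappears.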

With an aging population in its native Japan, Toyota has made eldercare a key focus of its ongoing robotics research. So it makes sense that these sorts of tasks form the core of much of its work in the category, along with the elements that bleed into its Woven City project. And the company certainly deserves credit for putting in the work here, well before the more orchestrated robotics appearances we’ve seen from companies like Samsung.

Image Credits: Toyota Research Institute

“It’s not only about keeping people in their homes longer and living independently,” Bajracharya  recently told me in an interview. “That’s one aspect of it — but in Japan, in 20-30 years, the number of people who are over 65 will roughly be the same as the number of people who are under 65. That’s going to have a really interesting socioeconomic impact, in terms of the workforce. It’s probably going to be much older and we at Toyota are looking at how these people can keep doing their jobs, so they can get the fulfillment from doing their jobs or staying at home longer. We don’t want to just replace the people. We really think about how we stay human-centered and amplify people.”

Experts from Toyota, Ford and Hyundai will discuss automotive robotics at TC Sessions: Mobility

By Brian Heater

The events of the past year have only served to accelerate interest in all things robotics and automation. It’s a phenomenon we’ve seen across a broad range of categories, and automotive is certainly no different.

Of course, carmakers are no strangers to the world of robotics. Automation has long played a key role in manufacturing, and more recently, robotics have played another central role in the form of self-driving vehicles. For this panel, however, we’re going to look past those much-discussed categories. Of late, carmakers have been investing heavily to further fuel innovation in the category.

It’s a fascinating space — and one that covers a broad range of cross-sections, from TRI’s (Toyota) Woven City project to Ford’s recent creation of a research facility at U of M to Hyundai’s concept cars and acquisition of Boston Dynamics. At TC Sessions: Mobility on June 9, we will be joined by a trio of experts from these companies for what’s sure to be a lively discussion on the topic.

Max Bajracharya is Vice President of Robotics at Toyota Research Institute. Previously serving as its Director of Robotics, he leads TRI’s work in robotics. He previously served at Alphabet’s X, as part of the Google Robotics team.

Mario Santillo is a Technical Expert at Ford. Previously serving as a Research Engineer for the company, he’s charged with helping lead the company’s efforts at a recently announced $75 million research facility at the University of Michigan, Ann Arbor. The work includes both Ford’s own robotics work, as well as partnerships with startups like Agility.

Ernestine Fu is a director at Hyundai Motor Group. She heads development at the newly announced New Horizons Studio, a group tasked with creating Ultimate Mobility Vehicles (UMVs). She also serves as an adjunct professor at Stanford University, where she received a BS, MS, MBA and PhD.

Get ready to talk robots at TC Sessions: Mobility. Grab your passes right now for $125 and hear from today’s biggest mobility leaders before our prices go up at the door.

Beta Technologies adds $368 million in Series A funding for its electric aviation ecosystem

By Aria Alamalhodaei

Electric aviation startup Beta Technologies closed a $368 million Series A funding round on Tuesday, with participation from Amazon’s Climate Pledge Fund. The new capital is the second round of funding the company has announced this year, after it raised $143 million in private capital in March.

The funding round was led by Fidelity Management & Research Company, with undisclosed additions from Amazon’s Climate Pledge Fund, a $2 billion fund established in September 2019 to advance the development of sustainable technologies. The Climate Pledge Fund has also backed electric vehicle manufacturer Rivian, battery recycler Redwood Materials and hydrogen fuel cell aviation company ZeroAvia.

The company’s valuation now stands at $1.4 billion, CNBC reported, putting it in a small circle of electric vertical take-off and landing (eVTOL) companies to have achieved valuations of more than a billion dollars.

Unlike developers Joby Aviation and Archer Aviation, which have each also achieved valuations over the billion-dollar mark, Beta is not primarily focused on air taxis. Instead, it has been targeting defense applications, cargo delivery and medical logistics, as well as building out its network of rapid-charging systems in the northeastern U.S. Its debut aircraft, the ALIA-250c, was built to serve these various use cases and is capable of carrying six people, or a pilot and 1,500 pounds of cargo.

The Vermont-based startup has already scored major partnerships in all of these industries, including with United Therapeutics to transport synthetic organs for human transplant; UPS, who purchased 10 ALIA aircraft with the option of buying 140 more; and the U.S. Air Force.

The company has not entirely ignored passenger transportation, however, announcing last month a partnership with Blade Urban Air Mobility for five aircraft to be delivered in 2024.

Beta was the first company to be awarded airworthiness approval from the U.S. Air Force. The company expects to sign a contract in June with the Air Force to allow access to Beta’s aircraft and flight simulators in Washington, D.C. and Springfield, Ohio. However, it still must achieve certification with the Federal Aviation Administration.

The funds will be used to refine the ALIA’s electric propulsion system and controls, as well as to build out manufacturing space, including expanding its footprint in Vermont on land at the Burlington International Airport, the company said in a news release Tuesday.
