Google today announced a subtle but welcome refresh of its mobile search experience. The idea is to provide easier-to-read search results and a more modern look with a simpler, edge-to-edge design.
From what we’ve seen so far, this is not a radically different look, but the rounded and slightly shaded boxes around individual search results have been replaced with straight lines, for example, while in other places, Google has specifically added more roundness. You’ll find changes to the circles around the search bar and some tweaks to the Google logo. “We believe it feels more approachable, friendly, and human,” a Google spokesperson told me. There’s a bit more whitespace in places, too, as well as new splashes of color that are meant to help separate and emphasize certain parts of the page.
“Rethinking the visual design for something like Search is really complex,” Google designer Aileen Cheng said in today’s announcement. “That’s especially true given how much Google Search has evolved. We’re not just organizing the web’s information, but all the world’s information. We started with organizing web pages, but now there’s so much diversity in the types of content and information we have to help make sense of.”
Google is also extending its use of the Google Sans font, which you are probably already quite familiar with thanks to its use in Gmail and Android. “Bringing consistency to when and how we use fonts in Search was important, too, which also helps people parse information more efficiently,” Aileen writes.
In many ways, today’s refresh is a continuation of the work Google did with its mobile search refresh in 2019. At that time, the emphasis, too, was on making it easier for users to scan down the page by adding site icons and other new visual elements to the page. The work of making search results pages more readable is clearly never done.
For the most part, though, comparing the new and old design, the changes are small. This isn't a major redesign; these are minor tweaks that the designers surely obsessed over but that users may not even notice. Now if Google had made it significantly easier to distinguish ads from the content you are actually looking for, that would've been something.
Google has threatened to close its search engine in Australia — as it dials up its lobbying against draft legislation that is intended to force it to pay news publishers for reuse of their content.
Facebook would also be subject to the law. It has previously said it would ban news from being shared on its products if the law were brought in, and has claimed it's reduced its investment in the country as a result of the legislative threat.
“The principle of unrestricted linking between websites is fundamental to Search. Coupled with the unmanageable financial and operational risk if this version of the Code were to become law it would give us no real choice but to stop making Google Search available in Australia,” Google warned today.
Last August the tech giant took another pot-shot at the proposal, warning that the quality of its products in the country could suffer, and that they might stop being free, if the government proceeded with a push to make the tech giants share ad revenue with media businesses.
Since last summer Google appears to have changed lobbying tack — apparently giving up its attempt to derail the law entirely in favor of trying to reshape it to minimize the financial impact.
Its latest bit of lobbying is focused on trying to eject the most harmful elements (as it sees it) of the draft legislation — while also pushing its News Showcase program, which it hastily spun up last year, as an alternative model for payments to publishers that it would prefer becomes the vehicle for remittances under the Code.
The draft legislation for Australia's digital news Code, which is currently before the parliament, includes a controversial requirement that the tech giants Google and Facebook pay publishers for linking to their content — not merely for displaying snippets of text.
Yet Google has warned Australia that making it pay for “links and snippets” would break how the Internet works.
In a statement to the Senate Economics Committee today, its VP for Australia and New Zealand, Mel Silva, said: “This provision in the Code would set an untenable precedent for our business, and the digital economy. It’s not compatible with how search engines work, or how the internet works, and this is not just Google’s view — it has been cited in many of the submissions received by this Inquiry.
“The principle of unrestricted linking between websites is fundamental to Search. Coupled with the unmanageable financial and operational risk if this version of the Code were to become law it would give us no real choice but to stop making Google Search available in Australia.”
Google is certainly not alone in crying foul over a proposal to require payments for links.
Sir Tim Berners-Lee, inventor of the world wide web, has warned that the draft legislation “risks breaching a fundamental principle of the web by requiring payment for linking between certain content online”, among other alarmed submissions to the committee.
In written testimony he goes on:
“Before search engines were effective on the web, following links from one page to another was the only way of finding material. Search engines make that process far more effective, but they can only do so by using the link structure of the web as their principal input. So links are fundamental to the web.
“As I understand it, the proposed code seeks to require selected digital platforms to have to negotiate and possibly pay to make links to news content from a particular group of news providers.
“Requiring a charge for a link on the web blocks an important aspect of the value of web content. To my knowledge, there is no current example of legally requiring payments for links to other content. The ability to link freely — meaning without limitations regarding the content of the linked site and without monetary fees — is fundamental to how the web operates, how it has flourished till present, and how it will continue to grow in decades to come.”
However it’s notable that Berners-Lee’s submission does not mention snippets. Not once. It’s all about links.
Meanwhile Google has just reached an agreement with publishers in France — which the publishers say covers payment for snippets of content.
In the EU, the tech giant is subject to an already reformed copyright directive that extended a neighbouring right for news content to cover reuse of snippets of text. Although the directive does not cover links or “very short extracts”.
In France, Google says it’s only paying for content “beyond links and very short extracts”. But it hasn’t said anything about snippets in that context.
French publishers argue the EU law clearly does cover the not-so-short text snippets that Google typically shows in its News aggregator — pointing out that the directive states the exception should not be interpreted in a way that impacts the effectiveness of neighboring rights. So Google looks like it would have a big French fight on its hands if it tried to deny payments for snippets.
But there’s still everything to play for in Australia. Hence, down under, Google is trying to conflate what are really two separate and distinct issues (payment for links vs payment for snippets) — in the hopes of reducing the financial impact vs what’s already baked into EU law. (Although it’s only been actively enforced in France so far, which is ahead of other EU countries in transposing the directive into national law).
In Australia, Google is also heavily pushing for the Code to “designate News Showcase” (aka the program it launched once the legal writing was on the wall about paying publishers) — lobbying for that to be the vehicle whereby it can reach “commercial agreements to pay Australian news publishers for value”.
Of course a commercial negotiation process is preferable (and familiar) to the tech giant vs being bound by the Code’s proposed “final offer arbitration model” — which Google attacks as having “biased criteria”, and claims subjects it to “unmanageable financial and operational risk”.
“If this is replaced with standard commercial arbitration based on comparable deals, this would incentivise good faith negotiations and ensure we’re held accountable by robust dispute resolution,” Silva also argues.
A third provision the tech giant is keen to see removed from the current draft would require it to notify publishers ahead of changes to its algorithms that could affect how their content is discovered.
“The algorithm notification provision could be adjusted to require only reasonable notice about significant actionable changes to Google’s algorithm, to make sure publishers are able to respond to changes that affect them,” it suggests on that.
It’s certainly interesting to consider how, over a few years, Google’s position has moved from ‘we’ll never pay for news’ — pre- any relevant legislation — to ‘please let us pay for licensing news through our proprietary licensing program’ once the EU had passed a directive now being very actively enforced in France (with the help of competition law) and also with Australia moving toward inking a similar law.
Turns out legislation can be a real tech giant mind-changer.
Of course, making anyone pay to link to content online is a terrible idea — and should be dropped.
But if that bit of the draft is a negotiating tactic by Australian lawmakers to get Google to accept that it will have to pay publishers something, then it appears to be a winning one.
And while Google’s threat to close down its search engine might sound ‘full on’, as Silva suggests, when you consider how many alternative search engines exist it’s hardly the threat it once was.
Moderna, the biotech company behind one of the two mRNA-based vaccines currently being rolled out globally to stem the tide of COVID-19, has announced that it will pursue development programs around three new vaccine candidates in 2021. These include potential vaccines for HIV, seasonal flu and the Nipah virus. Moderna's development and clinical trial of its COVID-19 vaccine is among the fastest in history, and thus far its results have been very promising, buoying hopes for the efficacy of other preventative treatments being generated using this technology, which is new to human clinical use.
An mRNA vaccine differs from typical, historical vaccines because it involves providing a person with just a set of instructions on how to build specific proteins that will trigger a body’s natural defenses. The mRNA instructions, which are temporary and do not affect a person’s actual DNA, simply prompt the body’s cells to produce proteins that mirror those used by a virus to attach to and infect cells. The independent proteins are then fought off by a person’s natural immune response, which provides a lasting lesson in how to fight off any future proteins that match that profile, including those which help viruses attach to and infect people.
Moderna's new programs will target not only seasonal flu, but also a combinatory vaccine that could target both the regular flu and SARS-CoV-2, the virus that leads to COVID-19. The HIV candidate, which is being developed in collaboration with both the AIDS Vaccine Initiative and the Bill and Melinda Gates Foundation, is expected to enter Phase 1 trials this year, as will the flu vaccine. Nipah virus is a highly lethal illness that can cause respiratory and neurological symptoms, and which is particularly a threat in India, Bangladesh, Malaysia and Singapore.
mRNA-based vaccines have long held potential for future vaccine development, in part because of their flexibility and programmability, and in part because they don’t use any active or dormant virus, which reduces their risks in terms of causing any direct infections up front. The COVID-19 pandemic spurred significant investment and regulatory/health and safety investment into the technology, paving the way for its use in other areas, including these new vaccine candidate trials by Moderna.
IPRally, a burgeoning startup out of Finland aiming to solve the patent search problem, has raised €2 million in seed funding.
Leading the round are JOIN Capital and Spintop Ventures, with participation from existing pre-seed backer Icebreaker VC. It brings the total raised by the 2018-founded company to €2.35 million.
Co-founded by CEO Sakari Arvela, who has 15 years' experience as a patent attorney, IPRally has built a knowledge graph to help machines better understand the technical details of patents and to enable humans to more efficiently trawl through existing patents. The premise is that a graph-based approach is more suited to patent search than simple keywords or freeform text search.
That’s because, argues Arvela, every patent publication can be distilled down to a simpler knowledge graph that “resonates” with the way IP professionals think and is infinitely more machine readable.
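IPRally hasn't published the internals of its graph model, but the general idea of distilling a patent claim into a machine-readable graph can be sketched roughly as follows — the feature names, relations and similarity measure here are invented for illustration only:

```python
# Hypothetical sketch: a patent claim as a small knowledge graph, where
# nodes are technical features and edges are relations between them.
# All node/relation names below are invented for illustration.

def claim_to_graph(features, relations):
    """Build an adjacency map from (head, relation, tail) triples."""
    graph = {f: [] for f in features}
    for head, rel, tail in relations:
        graph[head].append((rel, tail))
    return graph

def shared_features(graph_a, graph_b):
    """A crude similarity signal: features two patent graphs have in common."""
    return sorted(set(graph_a) & set(graph_b))

# Two toy "patents" describing similar mechanisms.
patent_a = claim_to_graph(
    ["battery", "sensor", "controller"],
    [("sensor", "powered_by", "battery"),
     ("controller", "reads", "sensor")],
)
patent_b = claim_to_graph(
    ["battery", "sensor", "display"],
    [("sensor", "powered_by", "battery"),
     ("display", "shows", "sensor")],
)

print(shared_features(patent_a, patent_b))  # → ['battery', 'sensor']
```

A real system would of course compare relations and learned embeddings, not just overlapping node labels, but the structural point stands: graphs let a machine compare *how* features connect, which keyword search cannot.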
“We founded IPRally in April 2018, after one year of bootstrapping and proof-of-concepting with my co-founder and CTO Juho Kallio,” he tells me. “Before that, I had digested the graph approach myself for about two years and collected the courage to start the venture”.
Arvela says patent search is a hard problem to solve since it involves both deep understanding of technology and the capability to compare different technologies in detail.
“This is why this has been done almost entirely manually for as long as the patent system has existed. Even the most recent out-of-the-box machine learning models are way too inaccurate to solve the problem. This is why we have developed a specific ML model for the patent domain that reflects the way human professionals approach the search task and make the problem sensible for the computers too”.
That approach appears to be paying off, with IPRally already being used by customers such as Spotify and ABB, as well as intellectual property offices. Target customers are described as any corporation that actively protects its own R&D with patents and has to navigate the IPR landscape of competitors.
Meanwhile, IPRally is not without its own competition. Arvela cites industry giants like Clarivate and Questel that dominate the market with traditional keyword search engines.
In addition, there are a few other AI-based startups, like Amplified and IPScreener. “IPRally’s graph approach makes the searches much more accurate, allows detail-level computer analysis, and offer a non-black-box solution that is explainable for and controllable by the user,” he adds.
Richard Socher, former chief scientist at Salesforce, who helped build the Einstein artificial intelligence platform, is taking on a new challenge — and it’s a doozy. Socher wants to fix consumer search and today he announced you.com, a new search engine to take on the mighty Google.
“We are building you.com. You can already go to it today. And it’s a trusted search engine. We want to work on having more click trust and less clickbait on the internet,” he said. He added that in addition to trust, he wants it to be built on kindness and facts, three worthy but difficult goals to achieve.
He said that there were several major issues that led him and his co-founders to build a new search tool. For starters, he says that there is too much information and nobody can possibly process it all. What's more, as you find this information, it's impossible to know what you can trust as accurate, and he believes that issue is having a major impact on society at large. Finally, as we navigate the internet in 2020, the privacy question looms large, as does the question of how to balance the convenience-privacy trade-off.
He believes his background in AI can help in a consumer-focused search tool. For starters the search engine, while general in nature, will concentrate on complex consumer purchases where you have to open several tabs to compare information.
“The biggest impact thing we can do in our lives right now is to build a trusted search engine with AI and natural language processing superpowers to help everyone with the various complex decisions of their lives, starting with complex product purchases, but also being general from the get go as well,” he said.
While Socher was light on details, preferring to wait until GA in a couple of months to share some more, he said he wants to differentiate from Google by not relying on advertising and what you know about the user. He said he learned from working with Marc Benioff at Salesforce that you can make money and still build trust with the people buying your product.
He certainly recognizes that it’s tough to take on an entrenched incumbent, but he and his team believe that by building something they believe is fundamentally different, they can undermine the incumbent with a classic “Innovator’s Dilemma” kind of play where they’re doing something that is hard for Google to reproduce without undermining their primary revenue model.
He also sees Google running into antitrust issues moving forward and that could help create an opening for a startup like this. “I think, a lot of stuff that Google [has been doing], I think with the looming antitrust will be somewhat harder for them to get away with on a continued basis,” he said.
He acknowledges that trust and accuracy elements could get tricky as social networks have found out. Socher hinted at some social sharing elements they plan to build into the search tool including allowing you to have your own custom you.com URL with your name to facilitate that sharing.
Socher said he has funding and a team together working actively on the product, but wouldn’t share how much or how many employees at this point. He did say that Benioff and venture capitalist Jim Breyer are primary backers and he would have more information to share in the coming months.
For now, if you’re interested, you can go to the website and sign up for early access.
As TC readers know, the tricky trade-off of the modern web is privacy for convenience. Online tracking is how this ‘great intimacy robbery’ is pulled off. Mass surveillance of what Internet users are looking at underpins Google’s dominant search engine and Facebook’s social empire, to name two of the highest profile ad-funded business models.
TechCrunch’s own corporate overlord, Verizon, also gathers data from a variety of end points — mobile devices, media properties like this one — to power its own ad targeting business.
Countless others rely on obtaining user data to extract some perceived value. Few if any of these businesses are wholly transparent about how much and what sort of private intelligence they’re amassing — or, indeed, exactly what they’re doing with it. But what if the web didn’t have to be like that?
Berlin-based Xayn wants to change this dynamic — starting with personalized but privacy-safe web search on smartphones.
Today it’s launching a search engine app (on Android and iOS) that offers the convenience of personalized results but without the ‘usual’ shoulder surfing. This is possible because the app runs on-device AI models that learn locally. The promise is no data is ever uploaded (though trained AI models themselves can be).
The team behind the app — 30% of whom hold PhDs — has been working on the core privacy vs convenience problem for some six years (though the company was only founded in 2017); initially as an academic research project — going on to offer an open source framework for masked federated learning, called XayNet. The Xayn app is based on that framework.
They’ve raised some €9.5 million in early stage funding to date — with investment coming from European VC firm Earlybird; Dominik Schiener (Iota co-founder); and the Swedish authentication and payment services company, Thales AB.
Now they’re moving to commercialize their XayNet technology by applying it within a user-facing search app — aiming for what CEO and co-founder, Dr Leif-Nissen Lundbæk bills as a “Zoom”-style business model, in reference to the ubiquitous videoconferencing tool which has both free and paid users.
This means Xayn’s search is not ad-supported. That’s right; you get zero ads in search results.
Instead, the idea is for the consumer app to act as a showcase for a b2b product powered by the same core AI tech. The pitch to business/public sector customers is speedier corporate/internal search without compromising commercial data privacy.
Lundbæk argues businesses are sorely in need of better search tools to (safely) apply to their own data, saying studies have shown that search in general costs around 18% of working time globally. He also cites a study by one city authority that found staff spent 37% of their time at work searching for documents or other digital content.
“It’s a business model that Google has tried but failed to succeed,” he argues, adding: “We are solving not only a problem that normal people have but also that companies have… For them privacy is not a nice to have; it needs to be there otherwise there is no chance of using anything.”
On the consumer side there will also be some premium add-ons headed for the app — so the plan is for it to be a freemium download.
One key thing to note is Xayn’s newly launched web search app gives users a say in whether the content they’re seeing is useful to them (or not).
It does this via a Tinder-style swipe right (or left) mechanic that lets users nudge its personalization algorithm in the right direction — starting with a home screen populated with news content (localized by country) but also extending to the search result pages.
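How a single swipe might nudge a personalization model can be illustrated with a toy preference update — Xayn's real models are far more involved, and the topics, scores and learning rate below are all invented:

```python
# Hypothetical sketch: a swipe nudges a per-topic preference score via an
# exponential moving average. Topics, scores and lr are invented examples.

def apply_swipe(prefs, topic, liked, lr=0.3):
    """Move the topic's score toward 1.0 (swipe right) or 0.0 (swipe left)."""
    target = 1.0 if liked else 0.0
    prefs[topic] = (1 - lr) * prefs[topic] + lr * target
    return prefs

prefs = {"science": 0.5, "sports": 0.5}
apply_swipe(prefs, "science", liked=True)   # swipe right
apply_swipe(prefs, "sports", liked=False)   # swipe left

print(round(prefs["science"], 2), round(prefs["sports"], 2))  # → 0.65 0.35
```

The key property, whatever the actual model: each interaction moves the ranking a small, bounded step, so one stray swipe can't wreck the profile.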
The news-focused homescreen is another notable feature. And it sounds like different types of homescreen feeds may be on the premium cards in future.
Another key feature of the app is the ability to toggle personalized search results on or off entirely — just tap the brain icon at the top right to switch the AI off (or back on). Results without the AI running can’t be swiped, except for bookmarking/sharing.
Elsewhere, the app includes a history page which lists searches from the past seven days (by default). The other options offered are: Today, 30 days, or all history (and a bin button to purge searches).
There’s also a ‘Collections’ feature that lets you create and access folders for bookmarks.
As you scroll through search results you can add an item to a Collection by swiping right and selecting the bookmark icon — which then opens a prompt to choose which one to add it to.
The swipe-y interface feels familiar and intuitive, if slightly laggy to load content in the TestFlight beta version TechCrunch checked out ahead of launch.
Swiping left on a piece of content opens a bright pink color-block stamped with a warning ‘x’. Keep going and you’ll send the item vanishing into the ether, presumably seeing fewer like it in future.
Whereas a swipe right affirms a piece of content is useful. This means it stays in the feed, outlined in Xayn green. (Swiping right also reveals the bookmark option and a share button.)
While there are pro-privacy/non-tracking search engines on the market already — such as US-based DuckDuckGo or France’s Qwant — Xayn argues the user experience of such rivals tends to fall short of what you get with a tracking search engine like Google, i.e. in terms of the relevance of search results and thus time spent searching.
Simply put: You probably have to spend more time ‘DDGing’ or ‘Qwanting’ to get the specific answers you need vs Googling — hence the ‘convenience cost’ associated with safeguarding your privacy when web searching.
Xayn’s contention is there’s a third, smarter way of getting to keep your ‘virtual clothes’ on when searching online. This involves implementing AI models that learn on-device and can be combined in a privacy-safe way so that results can be personalized without putting people’s data at risk.
“Privacy is the very fundament… It means that quite like other privacy solutions we track nothing. Nothing is sent to our servers; we don’t store anything of course; we don’t track anything at all. And of course we make sure that any connection that is there is basically secured and doesn’t allow for any tracking at all,” says Lundbæk, explaining the team’s AI-fuelled, decentralized/edge-computing approach.
Xayn is drawing on a number of search index sources, including (but not solely) Microsoft’s Bing, per Lundbæk, who described this bit of what it’s doing as “relatively similar” to DuckDuckGo (which has its own web crawling bots).
The big difference is that it's also applying its own reranking algorithms in order to generate privacy-safe personalized search results (whereas DDG uses a contextual ads-based business model — looking at simple signals like location and keyword search to target ads without needing to profile users).
The downside to this sort of approach, according to Lundbæk, is users can get flooded with ads — as a consequence of the simpler targeting meaning the business serves more ads to try to increase chances of a click. And loads of ads in search results obviously doesn’t make for a great search experience.
“We get a lot of results on device level and we do some ad hoc indexing — so we build on the device level and on index — and with this ad hoc index we apply our search algorithms in order to filter them, and only present you what is more relevant and filter out everything else,” says Lundbæk, sketching how Xayn works. “Or basically downgrade it a bit… but we also try to keep it fresh and explore and also bump up things where they might not be super relevant for you but it gives you some guarantees that you won’t end up in some kind of bubble.”
Some of what Xayn’s doing is in the arena of federated learning (FL) — a technology Google has been dabbling in in recent years, including pushing a ‘privacy-safe’ proposal for replacing third party tracking cookies. But Xayn argues the tech giant’s interests, as a data business, simply aren’t aligned with cutting off its own access to the user data pipe (even if it were to switch to applying FL to search).
Whereas its own interests — as a small, pro-privacy German startup — are markedly different. Ergo, the company that's spent years building privacy-preserving technology has a credible interest in safeguarding people's data, is the claim.
"At Google there's actually [fewer] people working on federated learning than in our team," notes Lundbæk, adding: "We've been criticizing TFF [Google-designed TensorFlow Federated] a lot. It is federated learning but it's not actually doing any encryption at all — and Google has a lot of backdoors in there.
“You have to understand what does Google actually want to do with that? Google wants to replace [tracking] cookies — but especially they want to replace this kind of bumpy thing of asking for user consent. But of course they still want your data. They don’t want to give you any more privacy here; they want to actually — at the end — get your data even easier. And with purely federated learning you actually don’t have a privacy solution.
“You have to do a lot in order to make it privacy preserving. And pure TFF is certainly not that privacy-preserving. So therefore they will use this kind of tech for all the things that are basically in the way of user experience — which is, for example, cookies but I would be extremely surprised if they used it for search directly. And even if they would do that there is a lot of backdoors in their system so it’s pretty easy to actually acquire the data using TFF. So I would say it’s just a nice workaround for them.”
“Data is basically the fundamental business model of Google,” he adds. “So I’m sure that whatever they do is of course a nice step in the right direction… but I think Google is playing a clever role here of kind of moving a bit but not too much.”
So how, then, does Xayn’s reranking algorithm work?
The app runs four AI models per device, combining encrypted AI models of respective devices asynchronously — with homomorphic encryption — into a collective model. A second step entails this collective model being fed back to individual devices to personalize served content, it says.
The four AI models running on the device are one for natural language processing; one for grouping interests; one for analyzing domain preferences; and one for computing context.
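Xayn hasn't published the details of its encrypted aggregation scheme, but the underlying federated-learning loop — train locally on private data, combine only the model weights into a collective model, feed that back to devices — can be sketched in miniature. This sketch omits the homomorphic encryption step entirely, and the model and data are invented:

```python
# Minimal federated-averaging sketch: each "device" trains a tiny 1-D
# linear model (y = w * x) on its own private data; only the weights are
# averaged into a collective model, never the raw data. Xayn additionally
# encrypts the models homomorphically before combining; omitted here.

def local_update(weight, data, lr=0.1):
    """One gradient-descent step on a device's private data."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_average(device_weights):
    """Server-side step: average the per-device weights."""
    return sum(device_weights) / len(device_weights)

# Two "devices", each holding private samples of the relation y = 2x.
device_data = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
collective = 0.0
for _ in range(50):
    updates = [local_update(collective, data) for data in device_data]
    collective = federated_average(updates)

print(round(collective, 2))  # → 2.0  (learned without pooling raw data)
```

The collective model converges on the shared pattern even though no device ever reveals its data — which is the property Xayn's masked variant is meant to preserve while also hiding the individual weight updates.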
“The knowledge is kept but the data is basically always staying on your device level,” is how Lundbæk puts it.
“We can simply train a lot of different AI models on your phone and decide whether we, for example, combine some of this knowledge or whether it also stays on your device.”
“We have developed a quite complex solution of four different AI models that work in composition with each other,” he goes on, noting that they work to build up “centers of interest and centers of dislikes” per user — again, based on those swipes — which he says “have to be extremely efficient — they have to be moving, basically, also over time and with your interests”.
The more the user interacts with Xayn, the more precise its personalization engine gets as a result of on-device learning — plus the added layer of users being able to get actively involved by swiping to give like/dislike feedback.
The level of personalization is very individually focused — Lundbæk calls it “hyper personalization” — more so than a tracking search engine like Google, which he notes also compares cross-user patterns to determine which results to serve — something he says Xayn absolutely does not do.
“We have to focus entirely on one user so we have a ‘small data’ problem, rather than a big data problem,” says Lundbæk. “So we have to learn extremely fast — only from eight to 20 interactions we have to already understand a lot from you. And the crucial thing is of course if you do such a rapid learning then you have to take even more care about filter bubbles — or what is called filter bubbles. We have to prevent the engine going into some kind of biased direction.”
To avoid this echo chamber/filter bubble type effect, the Xayn team has designed the engine to function in two distinct phases which it switches between: Called ‘exploration’ and (more unfortunately) ‘exploitation’ (i.e. just in the sense that it already knows something about the user so can be pretty certain what it serves will be relevant).
“We have to keep fresh and we have to keep exploring things,” he notes — saying that’s why it developed one of the four AIs (a dynamic contextual multi-armed bandit reinforcement learning algorithm for computing context).
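The explore/exploit switching Lundbæk describes is the classic multi-armed bandit trade-off. Xayn's version is contextual; a deliberately simplified, non-contextual epsilon-greedy sketch (with invented topics and reward numbers) shows the shape of it:

```python
import random

# Simplified explore/exploit sketch via an epsilon-greedy bandit.
# Xayn's actual algorithm is a contextual multi-armed bandit with
# reinforcement learning; topics and rewards here are invented.

def choose_topic(avg_reward, epsilon, rng):
    """Mostly exploit the best-known topic; sometimes explore at random."""
    if rng.random() < epsilon:
        return rng.choice(list(avg_reward))      # exploration: stay fresh
    return max(avg_reward, key=avg_reward.get)   # exploitation: serve the known favorite

rng = random.Random(0)
avg_reward = {"science": 0.8, "sports": 0.3, "finance": 0.5}

picks = [choose_topic(avg_reward, epsilon=0.2, rng=rng) for _ in range(1000)]
# Most picks exploit the best topic; a minority explore the others.
print(picks.count("science") > 700)  # → True
```

The exploration slice is what keeps the feed from collapsing into a filter bubble: even a well-established profile keeps getting occasional off-profile results whose reception can update the model.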
Aside from this app infrastructure being designed natively to protect user privacy, Xayn argues there are a bunch of other advantages — such as being able to derive potentially very clear interest signals from individuals; and avoiding the chilling effect that can result from tracking services creeping users out (to the point that people avoid making certain searches to prevent them from influencing future results).
“You as the user can decide whether you want the algorithm to learn — whether you want it to show more of this or less of this — by just simply swiping. So it’s extremely easy, so you can train your system very easily,” he argues.
There is potentially a slight downside to this approach, too, though — assuming the algorithm (when on) does some learning by default (i.e. in the absence of any like/dislike signals from the user).
This is because it puts the burden on the user to interact (by swiping their feedback) in order to get the best search results out of Xayn. So that’s an active requirement on users, rather than the typical passive background data mining and profiling web users are used to from tech giants like Google (which is, however, horrible for their privacy).
It means there's an 'ongoing' interaction cost to using the app — or at least to getting the most relevant results out of it. You would be advised, for instance, not to let a bunch of organic results just scroll past if they're really not useful, but rather to actively signal disinterest on each.
For the app to be the most useful it may ultimately pay to carefully weigh each item and provide the AI with a utility verdict. (And in a competitive battle for online convenience every little bit of digital friction isn’t going to help.)
Asked about this specifically, Lundbæk told us: “Without swiping the AI only learns from very weak likes but not from dislikes. So the learning takes place (if you turn the AI on) but it’s very slight and does not have a big effect. These conditions are quite dynamic, so from the experience of liking something after having visited a website, patterns are learned. Also, only 1 of the 4 AI models (the domain learning one) learns from pure clicks; the others don’t.”
Xayn does seem alive to the risk of the swiping mechanic resulting in the app feeling arduous. Lundbæk says the team is looking to add “some kind of gamification aspect” in the future — to flip the mechanism from pure friction to “something fun to do”. Though it remains to be seen what they come up with on that front.
There is also inevitably a bit of lag involved in using Xayn vs Google — by virtue of the former having to run on-device AI training (whereas Google merely hoovers your data into its cloud, where it’s able to process it at super-speeds using dedicated compute hardware, including bespoke chipsets).
“We have been working for over a year on this and the core focus point was bringing it on the street, showing that it works — and of course it is slower than Google,” Lundbæk concedes.
“Google doesn’t need to do any of these [on-device] processes and Google has developed even its own hardware; they developed TPUs exactly for processing this kind of model,” he goes on. “If you compare this kind of hardware it’s pretty impressive that we were even able to bring [Xayn’s on-device AI processing] even on the phone. However of course it’s slower than Google.”
Lundbæk says the team is working on increasing the speed of Xayn. And anticipates further gains as it focuses more on that type of optimization — trailing a version that’s 40x faster than the current iteration.
“It won’t at the end be 40x faster because we will use this also to analyze even more content — to give you an even broader view — but it will be faster over time,” he adds.
On the accuracy of search results vs Google, he argues the latter’s ‘network effect’ competitive advantage — whereby its search reranking benefits from Google having more users — is not unassailable because of what edge AI can achieve working smartly atop ‘small data’.
Though, again, for now Google remains the search standard to beat.
“Right now we compare ourselves, mostly against Bing and DuckDuckGo and so on. Obviously there we get much better results [than compared to Google] but of course Google is the market leader and is using quite some heavy personalization,” he says, when we ask about benchmarking results vs other search engines.
“But the interesting thing is so far Google is not only using personalization but they also use kind of a network effect. PageRank is very much a network effect where the most users they have the better the results get, because they track how often people click on something and bump this also up.
“The interesting effect there is that right now, through AI technology — like for example what we use — the network effect becomes less and less important. So actually I would say that there isn’t really any network effect anymore if you really want to compete with pure AI technology. So therefore we can get almost as relevant results as Google right now and we surely can also, over time, get even better results or competing results. But we are different.”
In our (brief) tests of the beta app Xayn’s search results didn’t obviously disappoint for simple searches (and would presumably improve with use). Though, again, the slight load lag adds a modicum of friction which was instantly obvious compared to the usual search competition.
Not a deal breaker — just a reminder that performance expectations in search are no cakewalk (even if you can promise a cookie-free experience).
“So far Google has had the advantage of a network effect — but this network effect gets less and less dominant and you see already more and more alternatives to Google popping up,” Lundbæk argues, suggesting privacy concerns are creating an opportunity for increased competition in the search space.
“It’s not anymore like Facebook or so where there’s one network where everyone has to be. And I think this is actually a nice situation because competition is always good for technical innovations and for also satisfying different customer needs.”
Of course the biggest challenge for any would-be competitor to Google search — which holds a market share in Europe in excess of 90% — is how to poach (some of) its users.
Lundbæk says the startup has no plans to splash millions on marketing at this point. Indeed, he says they want to grow usage sustainably, with the aim of evolving the product “step by step” with a “tight community” of early adopters — relying on cross-promotion from others in the pro-privacy tech space, as well as reaching out to relevant influencers.
He also reckons there’s enough mainstream media interest in the privacy topic to generate some uplift.
“I think we have such a relevant topic — especially now,” he says. “Because we want to show also not only for ourselves that you can do this for search but we think we show a real nice example that you can do this for any kind of case.
“You don’t always need the so-called ‘best’ big players from the US which are of course getting all of your data, building up profiles. And then you have these small, cute privacy-preserving solutions which don’t use any of this but then offer a bad user experience. So we want to show that this shouldn’t be the status quo anymore — and you should start to build alternatives that are really build on European values.”
And it’s certainly true EU lawmakers are big on tech sovereignty talk these days, even though European consumers mostly continue to embrace big (US) tech.
Perhaps more pertinently, regional data protection requirements are making it increasingly challenging to rely on US-based services for processing data. Compliance with the GDPR data protection framework is another factor businesses need to consider. All of which is driving attention onto ‘privacy-preserving’ technologies.
Xayn’s team is hoping to be able to spread its privacy-preserving gospel to general users by growing the b2b side of the business, according to Lundbæk — the hope being that some home use will follow once employees get used to convenient private search via their workplaces, in a small-scale reversal of the business consumerization trend that was powered by modern smartphones (and people bringing their own devices to work).
“With these kinds of strategies I think we can step by step build up our communities and spread the word — so we think we don’t even need to really spend millions of euros in marketing campaigns to get more and more users,” he adds.
While Xayn’s initial go-to-market push has been focused on getting the mobile apps out, a desktop version is also planned for Q1 next year.
The challenge there is getting the app to work as a browser extension as the team obviously doesn’t want to build its own browser to house Xayn. tl;dr: Competing with Google search is mountain enough to climb, without trying to go after Chrome (and Firefox, and so on).
“We developed our entire AI in Rust which is a safe language. We are very much driven by security here and safety. The nice thing is it can work everywhere — from embedded systems towards mobile systems, and we can compile into web assembly so it runs also as a browser extension in any kind of browser,” he adds. “Except for Internet Explorer of course.”
A group of Singaporean government agencies is launching a new research program for blockchain technology with $12 million SGD (about $8.9 million USD) in funding. Called the Singapore Blockchain Innovation Programme (SBIP), the project is a collaboration between Enterprise Singapore, Infocomm Media Development Authority and the National Research Foundation Singapore. It has support from the Monetary Authority of Singapore, the country’s central bank and financial regulator.
SBIP’s funding comes from the National Research Foundation, and will be used to develop, commercialize and encourage the adoption of blockchain technology by companies. The program will first focus on the use of blockchain in trade, logistics and the supply chain.
According to a press release, the program “will engage close to 75 companies” over the next three years. It is already working with Dimuto, a global supply chain platform, to use blockchain technology to trace perishables with the goal of improving farmers’ creditworthiness.
The program’s other plans include finding ways to help blockchain systems and networks collaborate with one another, and growing the blockchain sector’s talent pool.
While companies ranging from startups to giants like IBM have been exploring the use of blockchain technology to create more transparent and cohesive supply chains for years, the issue has become more urgent as the COVID-19 pandemic highlighted vulnerabilities in international logistics and supply chains.
In a statement, Peter Ong, the chairman of Enterprise Singapore, said “COVID-19 has emphasized the need for trusted and reliable business systems in the new digital world. Blockchain technology helps embed trust in applications spanning logistics and supply chains, trade financing to digital identities and credentials.”
Singapore’s government is positioning itself as a partner to blockchain developers and companies, with the goal of becoming a “crypto hub” that is more open to the technology than other countries. Other blockchain-related government initiatives include the Monetary Authority of Singapore’s Project Ubin. Launched in 2016, Project Ubin announced in July that its multi-currency payments network had proved its commercial potential after tests with more than 40 companies.
People are getting frustrated that Stories are everywhere now, but Google Maps is keeping it old school. Instead of adding tiny circles to the top of the app’s screen, Google Maps is introducing its own news feed. Technically, Google calls its new feature the “Community Feed,” as it includes posts from a local area. However, it’s organized as any other news feed would be — a vertically scrollable feed with posts you can “Like” by tapping on a little thumbs up icon.
The feed, which is found within the Explore tab of the Google Maps app, is designed to make it easier to find the most recent news, updates, and recommendations from trusted local sources. This includes posts business owners create using Google My Business to alert customers to new deals, menu updates, and other offers. At launch, Google says the focus will be on highlighting posts from food and drink businesses.
For years, businesses have been able to make these sorts of posts using Google’s tools. But previously, users would have to specifically tap to follow the business’s profile in order to receive their updates.
Now, these same sorts of posts will be surfaced even to those Google Maps users who didn’t take the additional step of following a particular business. This increased exposure has boosted the posts’ views, Google says. In early tests of the Community Feed ahead of its public launch, Google found that businesses’ posts saw more than double the number of views compared with before the feed existed.
In addition to posts from businesses, the new Community Feed will feature content posted by Google users you follow as well as recent reviews from Google’s Local Guides — the volunteer program where users share their knowledge about local places in order to earn perks, such as profile badges, early access to Google features, and more. Select publishers will participate in the Community Feed, too, including The Infatuation and other news sources from Google News, when relevant.
Much of the information found in the Community Feed was available elsewhere in Google Maps before today’s launch.
For example, the Google Maps Updates tab offered a similar feed that included businesses’ posts along with news, recommendations, stories, and other features designed to encourage discovery. Meanwhile, the Explore tab grouped businesses into thematic groupings (e.g. outdoor dining venues, cocktail bars, etc.) at the top of the screen, then allowed users to browse other lists and view area photos.
With the update, those groups of businesses by category will still sit at the top of the screen, but the rest of the tab is dedicated to the scrollable feed. This gives the tab a more distinct feel than it had before. It could even position Google to venture into video posts in the future, given the current popularity of TikTok-style short-form video feeds that have now been cloned by Instagram and Snapchat.
Today, it’s a more standard feed, however. As you scroll down, you can tap “Like” on those posts you find interesting to help better inform your future recommendations. You can also tap “Follow” on businesses you want to hear more from, which will send their alerts to your Updates tab, as well. Thankfully, there aren’t comments.
Google hopes the change will encourage users to visit the app more often in order to find out what’s happening in their area — whether that’s a new post from a business or a review from another user detailing some fun local activity, like a day trip or new hiking spot, for example.
The feature can be used when traveling or researching other areas, too, as the “Community Feed” you see is determined not by where you live or your current location, but by where you’re looking on the map.
The feed is the latest in what’s been a series of updates designed to make Google Maps more of a Facebook rival. Over the past few years, Google Maps has added features that allowed users to follow businesses, much like Facebook does, as well as message those businesses directly in the app, similar to Messenger. Businesses, meanwhile, have been able to set up their own profiles in Google Maps, where they can add a logo and cover photo and pick a short name — much like what Facebook Pages offer today.
With the launch of a news feed-style feature, Google’s attempt to copy Facebook is even more obvious.
Google says the feature is rolling out globally on Google Maps for iOS and Android.
AWS has launched a new tool called Glue Elastic Views that lets developers move data from one store to another.
At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.
The new service can take data from disparate silos and bring it together. The ETL service allows programmers to write a little bit of SQL code to create a materialized view that can move data from one source data store to another.
For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch, setting up a materialized view to copy that data — all the while managing dependencies. That means if data changes in the source data lake, then it will automatically be updated in the other data stores where the data has been relocated, Jassy said.
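The internals of Glue Elastic Views are AWS’s own, but the general idea of a materialized view that stays in sync with its source can be sketched in a few lines of Python (the stores, field names and transform here are hypothetical stand-ins, not the actual AWS API):

```python
class MaterializedView:
    """Toy materialized view: applies a transform to a source 'store'
    and recomputes the target copy whenever the source changes — the
    dependency tracking Jassy describes, in miniature."""

    def __init__(self, source, transform):
        self.source = source        # e.g. a dict standing in for DynamoDB
        self.transform = transform  # the 'SQL' defining the view
        self.refresh()

    def refresh(self):
        # Rebuild the target (e.g. an Elasticsearch index) from the
        # current state of the source.
        self.target = {k: self.transform(v) for k, v in self.source.items()}

    def write(self, key, value):
        # A write to the source automatically propagates to the view.
        self.source[key] = value
        self.refresh()

source = {"item1": {"name": "widget", "price": 10}}
view = MaterializedView(source, lambda row: {"name": row["name"].upper()})
view.write("item2", {"name": "gadget", "price": 5})
```

The real service avoids full recomputation by propagating only the changed rows, but the contract is the same: define the view once, and updates to the source flow through without further programmer effort.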
“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.
Drugmaker Moderna has completed its initial efficacy analysis of its COVID-19 vaccine from the drug’s Phase 3 clinical study, and determined that it was 94.1% effective in preventing people from contracting COVID-19 across 196 confirmed cases from among 30,000 participants in the study. Moderna also found that it was 100% effective in preventing severe cases (such as those that would require hospitalization) and says it hasn’t found any significant safety concerns during the trial. On the basis of these results, the company will file an application for emergency use authorization (EUA) with the U.S. Food and Drug Administration (FDA) on Monday.
Seeking an EUA is the next step toward actually beginning to distribute and administer Moderna’s COVID-19 vaccine. If granted the authorization, the company will be able to provide the vaccine to high-risk individuals, such as front-line healthcare workers, in settings where it could help prevent more deaths, ahead of receiving full and final regulatory approval from the U.S. healthcare monitoring agency. Moderna will also seek conditional approval from the European Medicines Agency, which would enable similar use in the EU.
Moderna’s vaccine is an mRNA vaccine, which provides genetic instructions that prompt a person’s body to create its own powerful antibodies to block the receptor sites that allow COVID-19 to infect a patient. It’s a relatively new therapeutic approach for human use, but it has the potential to provide even more resistance to COVID-19 than natural antibodies do, and without the risk associated with introducing any actual virus, active or otherwise, into an inoculated individual in order to prompt their immune response.
In mid-November, Moderna announced that its COVID-19 vaccine showed 94.5% efficacy in its preliminary results. This final analysis of that same data hews very close to the original, which is promising news for anyone hoping for an effective solution to be available soon. This data has yet to be peer reviewed, though Moderna says that it will now be submitting data from the Phase 3 study to a scientific publication specifically for that purpose.
Moderna’s vaccine candidate is part of the U.S.’s Operation Warp Speed program to expedite the development, production and distribution of a COVID-19 vaccine, initiated earlier this year as a response to the unprecedented global pandemic. Other vaccines, including one created by Pfizer working with partner BioNTech, as well as an Oxford University/AstraZeneca-developed candidate, are also far along in their Phase 3 testing and readying for emergency approval and use. Pfizer has already applied with the FDA for its own EUA, while the Oxford vaccine likely won’t be taking that step in the U.S. until it completes another round of final testing after discovering an error in the dosage of its first trial – which led to surprising efficacy results.
AstraZeneca’s CEO told Bloomberg that the pharmaceutical company will likely conduct another global trial of the effectiveness of its COVID-19 vaccine, following the disclosure that the more effective dosage in the existing Phase 3 clinical trial was actually administered by accident. AstraZeneca and its partner, the University of Oxford, reported interim results that showed 62% efficacy for a full two-dose regimen and a 90% efficacy rate for a half-dose followed by a full dose – which the scientists developing the drug later acknowledged was actually just an accidental administration of what was supposed to be two full doses.
To be clear, this shouldn’t dampen anyone’s optimism about the Oxford/AstraZeneca vaccine. The results are still very promising, and an additional trial is being done only to ensure that what was seen as a result of the accidental half-dosage is actually borne out when the vaccine is administered that way intentionally. That said, this could extend the amount of time that it takes for the Oxford vaccine to be approved in the U.S., since this will proceed ahead of a planned U.S. trial that would be required for the FDA to approve it for use domestically.
The Oxford vaccine’s rollout to the rest of the world likely won’t be affected, according to AstraZeneca’s CEO, since the studies that have been conducted, including safety data, are already in place from participants around the world outside of the U.S.
While vaccine candidates from Moderna and Pfizer have also shown very strong efficacy in early Phase 3 data, hopes are riding high on the AstraZeneca version because it relies on a different technology, can be stored and transported at standard refrigerator temperatures rather than frozen, and costs just a fraction per dose compared to the other two leading vaccines in development.
That makes it an incredibly valuable resource for global inoculation programs, including distribution where cost and transportation infrastructures are major concerns.
3D-rendered faces are a big part of any major movie or game now, but the task of capturing and animating them in a natural way can be a tough one. Disney Research is working on ways to smooth out this process, among them a machine learning tool that makes it much easier to generate and manipulate 3D faces without dipping into the uncanny valley.
Of course this technology has come a long way from the wooden expressions and limited details of earlier days. High resolution, convincing 3D faces can be animated quickly and well, but the subtleties of human expression are not just limitless in variety, they’re very easy to get wrong.
Think of how someone’s entire face changes when they smile — it’s different for everyone, but there are enough similarities that we fancy we can tell when someone is “really” smiling or just faking it. How can you achieve that level of detail in an artificial face?
Existing “linear” models simplify the subtlety of expression, making “happiness” or “anger” minutely adjustable, but at the cost of accuracy — they can’t express every possible face, but can easily result in impossible faces. Newer neural models learn complexity from watching the interconnectedness of expressions, but like other such models their workings are obscure and difficult to control, and perhaps not generalizable beyond the faces they learned from. They don’t enable the level of control an artist working on a movie or game needs, and they can result in faces that (humans are remarkably good at detecting this) are just off somehow.
A team at Disney Research proposes a new model with the best of both worlds — what it calls a “semantic deep face model.” Without getting into the exact technical execution, the basic improvement is that it’s a neural model that learns how a facial expression affects the whole face, but is not specific to a single face — and moreover is nonlinear, allowing flexibility in how expressions interact with a face’s geometry and each other.
Think of it this way: A linear model lets you take an expression (a smile, or kiss, say) from 0-100 on any 3D face, but the results may be unrealistic. A neural model lets you take a learned expression from 0-100 realistically, but only on the face it learned it from. This model can take an expression from 0-100 smoothly on any 3D face. That’s something of an over-simplification, but you get the idea.
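The “linear model” in that analogy maps roughly onto a classic blendshape rig, where a face is a neutral mesh plus weighted expression offsets. A minimal sketch shows the mechanics (the vertex data here is made up for illustration; this is the simple baseline Disney’s model improves on, not their method):

```python
def blend(neutral, deltas, weights):
    """Linear blendshape model: face = neutral + sum(w_i * delta_i).
    Each vertex is an (x, y, z) tuple; weights are 0.0-1.0 sliders,
    like the 0-100 expression dials described above. Because offsets
    just add up, nothing stops two sliders combining into an
    'impossible' face — the weakness the article notes."""
    face = []
    for i, vertex in enumerate(neutral):
        out = list(vertex)
        for delta, w in zip(deltas, weights):
            for axis in range(3):
                out[axis] += w * delta[i][axis]
        face.append(tuple(out))
    return face

# A two-vertex 'face' with a single 'smile' offset applied at 50%.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 0.2, 0.0), (0.0, 0.4, 0.0)]
half_smile = blend(neutral, [smile], [0.5])
```

A neural model replaces that per-vertex addition with a learned, nonlinear function; Disney’s contribution is making that learned function transferable across face geometries while keeping slider-style control.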
The results are powerful: You could generate a thousand faces with different shapes and tones, and then animate all of them with the same expressions without any extra work. Think how that could result in diverse CG crowds you can summon with a couple clicks, or characters in games that have realistic facial expressions regardless of whether they were hand-crafted or not.
It’s not a silver bullet, and it’s only part of a huge set of improvements artists and engineers are making in the various industries where this technology is employed — markerless face tracking, better skin deformation, realistic eye movements, and dozens more areas of interest are also important parts of this process.
The Disney Research paper was presented at the International Conference on 3D Vision; you can read the full thing here.
Slack shares are up just under 25% at the moment, according to Yahoo Finance data. Slack is worth $36.95 per share as of the time of writing, valuing it at around $20.8 billion. The well-known former unicorn has been worth as little as $15.10 per share inside the last year and worth as much as $40.07.
Inversely, shares of Salesforce are trading lower on the news, falling around 3.5% as of the time of writing. Investors in the San Francisco-based SaaS pioneer were either unimpressed at the combination idea, or perhaps worried about the price that would be required to bring the 2019 IPO into their fold.
Why Salesforce, a massive software company with a strong position in the CRM market and aspirations of becoming an even larger platform player, would want to buy Slack is not immediately clear, though there are possible benefits. These include the possibility of cross-selling the two companies’ products into each other’s customer bases, possibly unlocking growth for both parties. Slack has wide market share inside fast-growing startups, for example, while Salesforce’s products roost inside a host of megacorps.
TechCrunch reached out to Salesforce, Slack and Slack’s CEO for comment on the deal’s possibility. We’ll update this post with whatever we get.
While Salesforce bought Quip for $750 million in 2016, which gave it a kind of document sharing and collaboration, Salesforce Chatter has been the only social tool in the company’s arsenal. Buying Slack would give the CRM giant solid enterprise chat footing and likely a lot of synergy among customers and tooling.
But Slack has always been more than a mere chat client. It enables companies to embed workflows, and this would fit well in the Salesforce family of products, which spans sales, service, marketing and more. It would allow companies to work both inside and outside the Salesforce ecosystem, building smooth and integrated workflows. While it can theoretically do that now, if the two were combined, you can be sure the integrations would be much tighter.
What’s more, Holger Mueller, an analyst at Constellation Research says it would give Salesforce a sticky revenue source, something they are constantly searching for to keep their revenue engine rumbling along. “Slack could be a good candidate to strengthen its platform, but more importantly account for more usage and ‘stickiness’ of Salesforce products — as collaboration not only matters for CRM, but also for the vendor’s growing work.com platform,” Mueller said. He added that it would be a way to stick it to former-friend-turned-foe Microsoft.
That’s because Slack has come under withering fire from Microsoft in recent quarters, as the Redmond-based software giant poured resources into its competing Teams service. Teams challenges Slack’s chat tooling and Zoom’s video features and has seen huge customer growth in recent quarters.
Finding Slack a corporate home amongst the larger tech players could ensure that Microsoft doesn’t grind it under the bulk of its enterprise software sales leviathan. And Salesforce, a sometimes Microsoft ally, would not mind adding the faster-growing Slack to its own expanding software income.
The question at this juncture comes down to price. Slack investors won’t want to sell for less than a good premium on the pre-pop per-share price, which now feels rather dated.
Oxford University’s COVID-19 vaccine, being developed in partnership with drugmaker AstraZeneca, has been shown to be 70.4% effective in preliminary results from its Phase 3 clinical trial. That rate actually combines data from two different approaches to dosing: one in which two full-strength doses were applied, which was 62% effective, and a much more promising dosage trial that used one half-dose followed by a full-strength dose – that one was 90% effective.
Oxford’s results may not have the eye-catching high efficacy headline totals of the recent announcements from Pfizer and Moderna, but they could actually represent some of the most promising yet for a few different reasons. First, if that second dosage strategy holds true across later results and further analysis, it means that the Oxford vaccine can be administered in lower amounts and provide stronger efficacy (there’s no reason to use the full two-dose method if it’s that much less effective).
Second, the Oxford vaccine can be stored and transported at standard refrigerator temperatures – between 35° and 45°F – whereas the other two vaccine candidates require storage at lower temperatures. That helps obviate the need for more specialized equipment during transportation and on-site at clinics and hospitals where it will be administered.
Oxford’s COVID-19 vaccine also uses a different approach from either Moderna’s or Pfizer’s, which are both mRNA vaccines. That’s a relatively unproven technology when it comes to human therapeutics, which involves using messenger RNA to provide blueprints to a person’s body to build proteins effective at blocking a virus, without any virus present. The Oxford University candidate is an adenovirus vaccine, a much more established technology that has already been in use for decades, and which involves genetically altering a weakened common cold virus and using it to trigger a person’s own natural immune response.
Finally, it’s also cheaper – in part because it uses tried and tested technology for which there’s already a robust and mature supply chain, and in part because it’s easier to transport and store.
The Phase 3 trial for the Oxford vaccine included 24,000 participants, and it’s expected to grow to 60,000 by the end of the year. Safety data so far shows no significant risks, and among the 131 confirmed cases in the interim analysis that produced these results, none of those who received either vaccine dosage developed a severe case, or one requiring hospitalization.
This is great news for potential vaccination programs, since it adds supply chain variety to the pool of apparently effective vaccine treatments for COVID-19. We’re much better off if we have not only multiple effective vaccines, but multiple different types of effective vaccines, in terms of being able to inoculate widely as quickly as possible.
Two of the companies behind one of the leading COVID-19 vaccine candidates will seek approval from the U.S. Food and Drug Administration for emergency use authorization (EUA) of their preventative treatment with an application to be delivered today. Pfizer and BioNTech, who revealed earlier this week that their vaccine was 95% effective based on Phase 3 clinical trial data, are submitting for the emergency authorization in the U.S., as well as in Australia, Canada, Europe, Japan and the U.K., and says that could pave the way for use of the vaccine to begin in “high-risk populations” by the end of next month.
The FDA’s EUA program allows therapeutics companies to seek early approval when mitigating circumstances are met, as is the case with the current global pandemic. EUAs still require that supporting information and safety data be provided, but they are fast-tracked relative to the full, formal and more permanent approval process typically used before new drugs and treatments are able to actually be administered broadly.
Pfizer and BioNTech’s vaccine candidate is an mRNA-based vaccine that essentially provides a recipient’s body with instructions on how to produce specific proteins to block the ability of SARS-CoV-2 (the virus that causes COVID-19) to attach to cells. The vaccine has recently been undergoing a Phase 3 clinical trial that has included 43,661 participants so far. The companies are submitting supporting information they hope will convince the FDA to grant the EUA, including data from 170 confirmed cases among the participants, safety information actively solicited from 8,000 participants, and supplementary data from another 38,000 participants that was passively collected.
While production is ramping up globally for this and other vaccines in late-stage development, and an EUA will potentially open up access to high-risk individuals including frontline healthcare workers, it’s worth pointing out that any wide vaccination programs likely aren’t set to begin until next year, and likely later in 2021.
Microsoft announced a few updates to its Edge browser today that are all about shopping. In addition to expanding the price comparison feature the team announced last month, Edge can now also automatically find coupons for you. In addition, the company is launching a new shopping hub in its Bing search engine. The timing here is undoubtedly driven by the holiday shopping season — though this year, it feels like Black Friday-style deals already started weeks ago.
The potential usefulness of the price comparison tools is pretty obvious. I’ve found this always worked reasonably well in Edge Collections — though at times it could also be a frustrating experience because it just wouldn’t pull any data for items saved from some sites. Now, with this price comparison running in the background all the time, you’ll see a new badge pop up in the URL bar that lets you open the price comparison. And when you’ve already found the best price, it’ll tell you that right away, too.
At least in the Edge Canary, where this has been available for a little bit already, this was also hit and miss. It seems to work just fine when you shop on Amazon, for example, as long as there’s only one SKU of an item. If there are different colors, sizes or other options available, it doesn’t really seem to kick in, which is a bit frustrating.
The coupons feature, too, is a bit of a disappointment. It works more consistently and seems to pull data from most of the standard coupon sites (think RetailMeNot and Slickdeals), but all it does is show sitewide coupons. Since most coupons only apply to a limited set of items, clicking on the coupon badge quickly feels like a waste of time. To be fair, the team implemented a nifty feature where, at checkout, Bing will try to apply all of the coupons it found. That could be a potential time- and money-saver. Given the close cooperation with the Bing team in other areas, though, this feels like an area for improvement. I turned it off.
Microsoft is also using today’s announcement to launch a new URL shortener in Edge. “Now, when you paste a link that you copied from the address bar, it will automatically convert from a long, nonsensical URL address to a short hyperlink with the website title. If you prefer the full URL, you can convert to plain text using the context menu,” Microsoft explains. I guess that makes sense in some scenarios. Most of the time, though, I just want the link (and no third-party in-between), so I hope this can easily be turned off, too.
Drugmaker Pfizer has provided an updated analysis of its COVID-19 vaccine Phase 3 clinical trial data, saying that in the final analysis of the 44,000-participant trial, its COVID-19 vaccine candidate proved 95% effective. That’s a better efficacy rate than Pfizer reported previously, when it announced a 90% effectiveness metric based on a preliminary analysis of the Phase 3 trial data.
This result also follows a preliminary data report from Moderna about its own Phase 3 trial of its vaccine candidate, which it reported showed 94.5% effectiveness. Pfizer and partner BioNTech’s vaccine is an mRNA-based preventative treatment, similar to Moderna’s, and it now looks like the two should be roughly similar in efficacy – at least for now, based on a limited sample of total cases and prior to peer review by the scientific community, which is yet to come.
The Pfizer data in its final analysis shows that of a total of 170 confirmed COVID-19 cases so far among the 44,000 people who took part in the study, 162 came from the placebo group, while only eight were from the group that received the actual vaccine candidate. The company also reported that nine out of 10 of the severe cases among those who were infected occurred in the placebo group, suggesting that even on the rare occasion that the vaccine didn’t prevent contraction of COVID-19, it helped reduce its severity.
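The 95% figure can be roughly reproduced from that case split. Under the simplifying assumption that the vaccine and placebo arms were the same size (actual trial analyses adjust for follow-up time, so this is only a back-of-the-envelope check), efficacy is one minus the ratio of cases in the vaccine arm to cases in the placebo arm:

```python
# Back-of-the-envelope vaccine-efficacy calculation from the reported
# case split. Assumes equal-sized vaccine and placebo arms; real trial
# analyses adjust for person-time of follow-up, so this is approximate.

def vaccine_efficacy(vaccine_cases: int, placebo_cases: int) -> float:
    """Efficacy = 1 - (cases in vaccine arm / cases in placebo arm)."""
    return 1 - vaccine_cases / placebo_cases

# Pfizer's reported split: 8 of 170 cases in the vaccine group, 162 in placebo.
efficacy = vaccine_efficacy(vaccine_cases=8, placebo_cases=162)
print(f"{efficacy:.1%}")  # 95.1%, in line with the reported 95%
```

The same arithmetic applied to Moderna’s reported split (5 vaccine-arm cases vs. 90 placebo) lands in the same neighborhood as its reported 94.5%.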
This should help Pfizer make its case that it be granted an Emergency Use Authorization (EUA) from the U.S. Food and Drug Administration (FDA), allowing it to provide the vaccine early, as an emergency measure, pending full and final approval. Earlier this week, the company reported that it has already collected two months’ worth of follow-up data on participants in its trial, a required component for that approval, and it’s pursuing the EUA with hopes of securing it before year’s end. The company intends to ramp up production of its vaccine beginning later this year, and to achieve a run rate of up to 1.3 billion doses by next year.
Following fast on the heels of Pfizer’s announcement of its COVID-19 vaccine efficacy, Moderna is also sharing positive results from its Phase 3 trial on Monday. The biotech company says that its COVID-19 vaccine candidate has shown efficacy of 94.5% in its first interim data analysis, which covers 95 confirmed COVID-19 cases among its study participants, of which 90 were in the placebo group and only five received Moderna’s mRNA-based vaccine. Further, of 11 severe cases of COVID-19, none were found among those who received the actual vaccine candidate.
This is another very promising sign for the potential of having effective vaccines available to the public in some kind of significant volume at some point next year. As mentioned, it’s worth pointing out that this is just a first interim report, but the data comes from the safety board overseeing the trial, an independent body appointed by the National Institutes of Health and not affiliated with Moderna, so it’s a reliable result that provides hope pending continued and final analysis.
Moderna says that it will be submitting for an Emergency Use Authorization of its vaccine candidate based on the results within the coming weeks, looking to get approval from the FDA to use it in emergency circumstances ahead of a full and final approval. That EUA, should it be granted, will be based on data from 151 confirmed cases among the Phase 3 participant group (which included 30,000 participants in total), and data from follow-ups extending on average over two months after case confirmation.
All final data will also be submitted to the scientific community for independent peer review, which is a standard part of the ultimate vaccine trial and approval process.
Both this and Pfizer’s vaccine candidate, which it developed in partnership with BioNTech, are mRNA-based vaccines. These are relatively new in terms of human use, and differ from traditional vaccines in that they use messenger RNA to instruct a recipient’s cells to produce a viral protein that prompts the immune system to generate effective antibodies, without actually exposing the recipient to any virus. More traditional vaccines in general use typically rely on small, safe doses of active or inactive virus to trigger a patient’s immune system to generate its own antibodies.
Carbon Health has raised a $100 million Series C funding round, led by Dragoneer Investment Group, with participation from prior investors Brookfield Technology Partners, DCVC and Builders VC. The funding will be used to help the SF-based healthcare provider startup continue to expand its nationwide footprint, including the opening of 100 planned pop-up clinics across 20 markets in the U.S.
This past year has seen Carbon Health expand from just seven clinics to 27, spread out across six different states. The company, which focuses on primary care, has also introduced virtual care options with an emphasis on what it calls “omnichannel” care, or offering services in whatever method is most convenient, effective and appropriate for its customers. The startup has always aimed at a hybrid care approach, but it’s emphasizing the flexibility of its model in response to COVID-19, and has in particular accelerated its plans around its pop-up clinics.
These are deployed in underutilized spaces, including parking lots and garages, in regions where additional care options are needed. Carbon Health partnered early with Reef Technology on opening these locations, using shipping container-style mobile trailers to provide on-site care. Carbon Health founder and CEO Eren Bali explained to me that while remote care can be very effective, in some instances it requires on-site nurse practitioner support, paired with virtual physician-guided services, to provide a complete solution for customers.
The company is also looking to support greater testing capacity using this model, and is eventually looking ahead to providing infrastructure that can help with widespread COVID-19 vaccine distribution, once a vaccine is ready to go. While some scientific results this week have been very promising, including Pfizer’s Phase 3 clinical trial data, ultimately the effort of undertaking a national vaccine inoculation program will require cooperation among many stakeholders, including primary care providers.