FreshRSS


Google acquires Actifio to step into the area of data management and business continuity

By Ingrid Lunden

In the same week that Amazon is holding its big AWS confab, Google is also announcing a move to raise its own enterprise game with Google Cloud. Today the company announced that it is acquiring Actifio, a data management company that helps businesses maintain data continuity so they are better prepared in the event of a security breach or other disaster-recovery scenario. The deal sets Google up as a competitor to the likes of Rubrik, another big player in data continuity.

The terms of the deal were not disclosed in the announcement; we’re looking and will update as we learn more. Notably, when the company was valued at over $1 billion in a funding round back in 2014, it had said it was preparing for an IPO (which never happened). PitchBook data estimated its value at $1.3 billion in 2018, but earlier this year it appeared to be raising money at about a 60% discount to its recent valuation, according to data provided to us by Prime Unicorn Index.

The company was also involved in a patent infringement suit against Rubrik, which it filed earlier this year.

It had raised around $461 million, with investors including Andreessen Horowitz, TCV, Tiger, 83 North, and more.

With Actifio, Google is moving into what is one of the key investment areas for enterprises in recent years. The growth of increasingly sophisticated security breaches, coupled with stronger data protection regulation, has given a new priority to the task of holding and using business data more responsibly, and business continuity is a cornerstone of that.

Google describes the startup as a “leader in backup and disaster recovery” that provides virtual copies of data that can be managed and updated for storage, testing and more. The fact that it covers data in a number of environments — including SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL and MySQL databases; virtual machines (VMs) in VMware and Hyper-V; physical servers; and of course Google Compute Engine — means it also gives Google a strong play for companies running hybrid and multi-vendor environments, rather than just all-Google shops.

“We know that customers have many options when it comes to cloud solutions, including backup and DR, and the acquisition of Actifio will help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios,” writes Brad Calder, VP, engineering, in the blog post. “In addition, we are committed to supporting our backup and DR technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs.”

The company will join Google Cloud.

“We’re excited to join Google Cloud and build on the success we’ve had as partners over the past four years,” said Ash Ashutosh, CEO at Actifio, in a statement. “Backup and recovery is essential to enterprise cloud adoption and, together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries.”

AWS updates its edge computing solutions with new hardware and Local Zones

By Frederic Lardinois

AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outpost service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.

In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.

The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.

As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail to build hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. For that reason, Jassy argues, none of the existing solutions from other vendors got any traction (though AWS’s competitors would surely deny this).

The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.

With Outpost, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.

AWS goes after Microsoft’s SQL Server with Babelfish for Aurora PostgreSQL

By Frederic Lardinois

AWS today announced a new database product that is clearly meant to go after Microsoft’s SQL Server and make it easier — and cheaper — for SQL Server users to migrate to the AWS cloud. The new service is Babelfish for Aurora PostgreSQL. The tagline AWS CEO Andy Jassy used for this service in his re:Invent keynote today is probably telling: “Stop paying for SQL Server licenses you don’t need.” And to show how serious it is about this, the company is even open-sourcing the tool.

What Babelfish does is provide a translation layer for SQL Server’s proprietary SQL dialect (T-SQL) and communications protocol so that businesses can switch to AWS’ Aurora relational database at will (though they’ll still have to migrate their existing data). It translates not just the dialect but also SQL commands, cursors, catalog views, data types, triggers, stored procedures and functions.

The promise here is that companies won’t have to replace their database drivers or rewrite and verify their database requests to make this transition.
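
To make that driver point concrete, here is a minimal sketch of what an unchanged SQL Server client stack looks like when pointed at a Babelfish-enabled Aurora cluster. The endpoint, credentials and table are hypothetical placeholders, and pymssql is just one example of a TDS-speaking SQL Server driver; the point is that nothing PostgreSQL-specific appears on the client side.

```python
# A minimal sketch, not AWS's documented setup: an ordinary SQL Server
# client stack talking to a (hypothetical) Babelfish-enabled Aurora
# PostgreSQL cluster. pymssql speaks SQL Server's TDS wire protocol.
import pymssql

# Babelfish listens on the SQL Server port (1433 by default), so the
# connection code is exactly what you'd write against SQL Server itself.
conn = pymssql.connect(
    server="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    port=1433,
    user="babelfish_admin",  # placeholder credentials
    password="...",
    database="orders_db",    # hypothetical database and table below
)

cursor = conn.cursor()
# T-SQL, not PostgreSQL dialect: TOP and GETDATE() are SQL Server
# constructs that Babelfish translates server-side.
cursor.execute(
    "SELECT TOP 5 order_id, total FROM orders "
    "WHERE placed_at < GETDATE() ORDER BY total DESC"
)
for row in cursor.fetchall():
    print(row)
conn.close()
```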

“We believe Babelfish stands out because it’s not another migration service, as useful as those can be. Babelfish enables PostgreSQL to understand database requests—both the command and the protocol—from applications written for Microsoft SQL Server without changing libraries, database schema, or SQL statements,” AWS’s Matt Asay writes in today’s announcement. “This means much faster ‘migrations’ with minimal developer effort. It’s also centered on ‘correctness,’ meaning applications designed to use SQL Server functionality will behave the same on PostgreSQL as they would on SQL Server.”

PostgreSQL, AWS rightly points out, is one of the most popular open-source databases in the market today. A lot of companies want to migrate their relational databases to it — or at least use it in conjunction with their existing databases. This new service is going to make that significantly easier.

The open-source Babelfish project will launch in 2021 and will be available on GitHub under the Apache 2.0 license.

“It’s still true that the overwhelming majority of relational databases are on-premise,” AWS CEO Andy Jassy said. “Customers are fed up with and sick of incumbents.” As is tradition at re:Invent, Jassy also got a few swipes at Oracle into his keynote, but the real target of the products the company is launching in the database area today is clearly Microsoft.

Is Slack overpriced now that the market knows Salesforce might buy it?

By Alex Wilhelm

The Exchange is technically off today, but we’re here anyway because there’s neat stuff in the world of startups and money to talk about. So, let’s yammer this morning about Slack’s new valuation and what the market is telling us about what the venerable SaaS company is really worth.


The Exchange explores startups, markets and money. Read it every morning on Extra Crunch, or get The Exchange newsletter every Saturday.


Recall that on Wednesday, news broke that Salesforce is considering buying Slack, a move that has potential merit and some question marks.

The merits could include bringing Slack’s startup mindshare to Salesforce and bringing Salesforce’s enterprise reach to Slack. In terms of questions, precisely how Slack fits into Salesforce’s CRM-and-platform play isn’t clear; Salesforce’s own Slack-ish competitor, Chatter, hasn’t taken control of its market in the 10-plus years since its release (here’s TechCrunch covering its launch back in 2009), making the possible home of Slack inside Salesforce slightly suspect.

Still, Slack investors cheered the concept of Salesforce paying up for their company, while Salesforce investors knocked nearly $20 off its share price, perhaps worried about the very thing that Slack’s owners were stoked to consider.

So, price. What’s Slack worth? The question is fun both in academic terms and for understanding the current dynamics of the software M&A market — what do you have to pay to take a large chess piece off the software market’s board?

Let’s take a look at what we can learn from Slack’s pre-news price, and its current, changed valuation.

What’s it worth?

Here’s a chart of Slack’s value before and after the Salesforce news, just to give you a taste of how big an impact the reporting had:

Gift Guide: Which next-gen console is the one your kid wants?

By Devin Coldewey

This holiday season the next generation of gamers, bless their hearts, will be hoping to receive the next generation of gaming consoles. But confusing branding by the console makers — not to mention a major shortage of consoles — could lead to disappointment during the unwrapping process. Before making any big promises this year, you’ll want to be completely clear on two things: which console you’re actually trying to get, and how much of a challenge it might be to get one.

By the way, it’s totally understandable if you’re a little lost — particularly on Microsoft’s end, the branding is a little weird this time around. Even the lifelong gamers on our staff have mixed up the various Xbox names a few times.

If you’re not 100% sure which brand of console your kid (or partner or whoever) has, go take a look right now. An Xbox will have a big X somewhere on a side without cables coming out of it, and a PS4 will have a subtler “PS” symbol embossed on it. The “Pro” has three “layers” and the regular one has two.

Okay, now that you know what you’ve got, here are the new versions that they want:

Sony PlayStation 5

The PlayStation 5, or PS5 for short, is the newest gaming console from Sony. It’s the one your kids want if they already have a PlayStation 4 or even a PlayStation 4 Pro, which they might have gotten a year or two back.

The PS5 is more powerful than the PS4, but it also plays most PS4 games, so you don’t need to worry about a game you just bought for a birthday or whatever. It has some fancy new features for fancy new TVs, but you don’t need to worry about that — the improved performance is the main draw.

There are two versions of the PS5, and the only real difference between them is that one has a disc drive for playing disc-based games; they both come with a controller and are about the same size. The one with the drive costs $500, and is the one you should choose if you’re not completely certain the recipient would prefer the driveless “Digital Edition.” Saving $100 up front is enticing, but consider that some titles for this generation will cost $70, so the capability to buy used games at half price might pay for itself pretty quickly.

The PS5 doesn’t come with any “real” next-generation games, and the selection this season is going to be pretty slim. But your best bet for pretty much any gamer is Spider-Man: Miles Morales. I’ve played it and its predecessor — which Miles Morales comes with — and it’s going to be the one everyone wants right off the bat. (Its violence is pretty PG, like the movies.)

The PS4’s controllers sadly won’t work on PS5 games. But don’t worry about getting any extra ones or charging stands or whatnot right now, unless your gamer plays a lot of games with other people on the couch already.

Microsoft Xbox Series X

The Xbox Series X is the latest gaming console from Microsoft, replacing the Xbox One X and One S. Yes, the practice of changing the middle word instead of the last initial is difficult to understand, and it will be the reason lots of kids unwrap last year’s new console instead of this year’s.

The Xbox Series X is more powerful than the Xbox One X, but should also play almost all the old games, so if you bought something recently, don’t worry that it won’t be compatible. There are lots of fancy-sounding new features, but you don’t need to worry about those or buy them separately — stuff like HDR and 4K all depend on your TV, but any TV from the last few years will look great.

There are two versions of the next Xbox, and they have significant differences. The $500 Xbox Series X is the “real” version, with a disc drive for old and used games, and all the power-ups Microsoft has advertised. This one is almost certainly the one any gamer will be expecting and hoping to get.

Like Sony, Microsoft has a version of the Xbox that has no disc drive: the $300 Xbox Series S. Confusingly, this is the same price, same color, and nearly the same name and type of console as last generation’s Xbox One S, so first of all be sure you’re not buying the One. The Xbox Series S is definitely “next-gen,” but has a bit less power than the Series X, and so will have a few compromises in addition to the lack of a drive. It’s not recommended you get this one unless you know what you’re doing or really need that $200 (understandable).

For a day-one game, there isn’t really a big must-have exclusive. Assassin’s Creed: Valhalla is probably a good bet, though, if bloody violence is okay. If not, honestly a gift certificate or subscription to the “Game Pass” service that provides free games is fine.

No need for extra controllers — the Xbox Series X supports the last-gen’s controllers. Genuine thanks to Microsoft for that one.

Difficulty level: Holiday 2020

A PS5 and controller.

Image Credits: Devin Coldewey / TechCrunch

Now that you know which console to get (again… a PlayStation 5 or an Xbox Series X), I’ve got some bad news and some good news.

The bad news is they’re probably (read: definitely) going to be sold out. Microsoft and Sony are pumping these things out as fast as they can, but the truth is they really rushed this launch to make it in time for the holidays and won’t have enough to go around.

Resist the urge to buy the “next best” in last year’s model — the new ones are a major change and are replacements, not just upgrades, for the old ones. It would literally be better for a kid to receive a pre-order receipt for a new console than a brand new old one. And don’t go wild trying to find one on eBay or whatever — this is going to be a very scammy season and it’s better to avoid that scene entirely.

The pandemic also means you probably can’t or won’t want to wait in line all night to grab a unit in person. Getting a console will almost certainly involve spending a good amount of time on the websites of the major retailers… and a good bit of luck.  Follow electronics and gaming shops on Twitter and bookmark the consoles’ pages to check for availability regularly, but expect each shipment to be sold out within a minute or two and for the retailer’s website to crash every single time.

Don’t buy them a Nintendo Switch, either, unless they’ve asked for one of course. The Switch is fantastic, but it’s completely different from the consoles above.

The good news is they won’t be missing out on much right now. Almost every game worth having for the next year will be available on the new and old consoles, and in some cases players may be able to start their game on one and continue it on the next. Good luck figuring out exactly which games will be enhanced, upgraded, or otherwise carried between generations (it’s a patchwork mess), but any of the hot new games is a good bet.

Good luck!

 

Microsoft brings new shopping tools to its Edge browser

By Frederic Lardinois

Microsoft announced a few updates to its Edge browser today that are all about shopping. In addition to expanding the price comparison feature the team announced last month, Edge can now also automatically find coupons for you. In addition, the company is launching a new shopping hub in its Bing search engine. The timing here is undoubtedly driven by the holiday shopping season — though this year, it feels like Black Friday-style deals already started weeks ago.

Image Credits: Microsoft

The potential usefulness of the price comparison tools is pretty obvious. I’ve found this worked reasonably well in Edge Collections — though at times it could also be a frustrating experience because it just wouldn’t pull any data for items saved from some sites. Now, with this price comparison running in the background all the time, you’ll see a new badge pop up in the URL bar that lets you open the price comparison. And when you’ve already found the best price, it’ll tell you that right away, too.

At least in the Edge Canary, where this has been available for a little bit already, this was also hit and miss. It seems to work just fine when you shop on Amazon, for example, as long as there’s only one SKU of an item. If there are different colors, sizes or other options available, it doesn’t really seem to kick in, which is a bit frustrating.

Image Credits: Microsoft

The coupons feature, too, is a bit of a disappointment. It works more consistently and seems to pull data from most of the standard coupon sites (think RetailMeNot and Slickdeals), but all it does is show sitewide coupons. Since most coupons only apply to a limited set of items, clicking on the coupon badge quickly feels like a waste of time. To be fair, the team implemented a nifty feature where, at checkout, Bing will try to apply all of the coupons it found. That could be a real time- and money-saver. Given the close cooperation with the Bing team in other areas, though, this feels like an area that is ripe for improvement. I turned it off.

Microsoft is also using today’s announcement to launch a new URL shortener in Edge. “Now, when you paste a link that you copied from the address bar, it will automatically convert from a long, nonsensical URL address to a short hyperlink with the website title. If you prefer the full URL, you can convert to plain text using the context menu,” Microsoft explains. I guess that makes sense in some scenarios. Most of the time, though, I just want the link (and no third-party in-between), so I hope this can easily be turned off, too.

Yeah, Apple’s M1 MacBook Pro is powerful, but it’s the battery life that will blow you away

By Matthew Panzarino

Survival and strategy games are often played in stages. You have the early game where you’re learning the ropes, understanding systems. Then you have mid-game where you’re executing and gathering resources. The most fun part, for me, has always been the late mid-game where you’re in full control of your powers and skills and you’ve got resources to burn — where you execute on your master plan before the endgame gets hairy.

This is where Apple is in the game of power being played by the chip industry. And it’s about to be endgame for Intel. 

Apple has introduced three machines that use its new M1 system on a chip, built on over a decade’s worth of work designing its own processing units around the Arm instruction set. These machines are capable, assured and powerful, but their greatest advancements come in the performance-per-watt category.

I personally tested the 13” M1 MacBook Pro, and after extensive testing it’s clear that this machine eclipses some of the most powerful Mac portables ever made in performance while simultaneously delivering 2x-3x the battery life at a minimum.

These results are astounding, but they’re the product of that long early game that Apple has played with the A-series processors. Beginning in earnest in 2008 with the acquisition of PA Semi, Apple has been working its way toward decoupling the features and capabilities of its devices from the product roadmaps of processor manufacturers.

The M1 MacBook Pro runs smoothly, launching apps so quickly that they’re often open before your cursor leaves your dock. 

Video editing and rendering is super performant, only falling behind older machines when the work leverages the GPU heavily, and even then only to machines with powerful dedicated cards like the 5500M or Vega II.

Compiling projects like WebKit produces better build times than nearly any machine (hell, the M1 Mac mini beats the Mac Pro by a few seconds). And it does so while using a fraction of the power.

This thing works like an iPad. That’s the best way I can describe it succinctly. One illustration I have been using to describe what this will feel like to a user of current MacBooks is that of chronic pain. If you’ve ever dealt with ongoing pain from a condition or injury, and then had it be alleviated by medication, therapy or surgery, you know how the sudden relief feels. You’ve been carrying the load so long you didn’t know how heavy it was. That’s what moving to this M1 MacBook feels like after using other Macs. 

Every click is more responsive. Every interaction is immediate. It feels like an iOS device in all the best ways. 

At the chip level, it also is an iOS device. Which brings us to…

iOS on M1

The iOS experience on the M1 machines is…present. That’s the kindest thing I can say about it. Apps install from the App Store and run smoothly, without incident. Benchmarks run on iOS apps show that they perform natively with no overhead. I even ran an iOS-based graphics benchmark, which ran just fine.

That, however, is where the compliments end. The current iOS app experience on an M1 machine running Big Sur is almost comical; it’s so silly. There is no default tool-tip that explains how to replicate common iOS interactions like swipe-from-edge — instead a badly formatted cheat sheet is buried in a menu. The apps launch and run in windows only. Yes, that’s right, no full-screen iOS apps at all. It’s super cool for a second to have instant native support for iOS on the Mac, but at the end of the day this is a marketing win, not a consumer experience win. 

Apple gets to say that the Mac now supports millions of iOS apps, but the fact is that the experience of using those apps on the M1 is subpar. It will get better, I have no doubt. But the app experience on the M1 is pretty firmly in this order right now: native M1 app > Rosetta 2 app > Catalyst app > iOS app. Provided that the Catalyst ports can be bothered to build in Mac-centric behaviors and interactions, of course. iOS, though present, is clearly not where it needs to be on M1.

Rosetta 2

There is both a lot to say and not a lot to say about Rosetta 2. I’m sure we’ll get more detailed breakdowns of how Apple achieved what it has with this new emulation layer that makes x86 applications run fine on the M1 architecture. But the real nut of it is that Apple has managed to make a chip so powerful that it can take the approximate 26% hit (see the following charts) in raw power to translate apps and still run them just as fast if not faster than MacBooks with Intel processors.

It’s pretty astounding. Apple would like us to forget the original Rosetta from the PowerPC transition as much as we would all like to forget it. And I’m happy to say that this is pretty easy to do because I was unable to track any real performance hit when comparing it to older, even ‘more powerful on paper’ Macs like the 16” MacBook Pro. 

It’s simply not a factor in most instances. And companies like Adobe and Microsoft are already hard at work bringing native M1 apps to the Mac, so the most-needed productivity and creativity apps will essentially get a free performance bump of around 30% when they go native (that tracks with the translation overhead: removing a ~26% hit works out to a roughly 1/0.74 ≈ 1.35x speedup). But even now they’re just as fast. It’s a win-win situation.

Methodology

My methodology for this testing was pretty straightforward. I ran a battery of tests designed to push these laptops in ways that reflect both real-world performance and tasks as well as synthetic benchmarks. I ran the benchmarks with the machines plugged in and then again on battery power to gauge sustained performance as well as performance per watt. All tests were run multiple times, with cooldown periods in between, in order to establish a solid baseline.

Here are the machines I used for testing:

  • 2020 13” M1 MacBook Pro 8-core 16GB
  • 2019 16” MacBook Pro 8-core 2.4GHz 32GB w/5500M
  • 2019 13” MacBook Pro 4-core 2.8GHz 16GB
  • 2019 Mac Pro 12-Core 3.3GHz 48GB w/AMD Radeon Pro Vega II 32GB

Many of these benchmarks also include numbers from the M1 Mac mini review by Matt Burns and the M1 MacBook Air review by Brian Heater, which you can check out here.
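
To give a flavor of what one of those passes looks like in practice, here is a minimal sketch of a timing harness along those lines. It is my own reconstruction for illustration, not the tooling actually used for these charts; it assumes macOS, where pmset reports the battery charge.

```python
# A minimal benchmark-pass sketch (a reconstruction, not the review's
# actual tooling): time a command, record battery drain via macOS's
# pmset, and cool down before the next run.
import re
import subprocess
import time

def battery_percent() -> int:
    """Parse the current charge out of `pmset -g batt` output."""
    out = subprocess.run(["pmset", "-g", "batt"],
                         capture_output=True, text=True).stdout
    match = re.search(r"(\d+)%", out)
    return int(match.group(1)) if match else -1

def run_pass(cmd: list[str], cooldown_s: int = 300) -> None:
    """Run one benchmark pass and report elapsed time and battery drain."""
    batt_before = battery_percent()
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - start
    print(f"{' '.join(cmd)}: {elapsed/60:.1f} min, "
          f"battery {batt_before}% -> {battery_percent()}%")
    time.sleep(cooldown_s)  # let the machine cool down between passes

# e.g. a WebKit build, repeated plugged in and then on battery:
# run_pass(["./Tools/Scripts/build-webkit", "--release"])
```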

Compiling WebKit

Right up top, I’m going to start off with the real ‘oh shit’ chart of this piece. I checked WebKit out from GitHub and ran a build on all of the machines with no parameters. This is the one deviation from the specs I mentioned above, as my 13” had issues that I couldn’t figure out, so I had some internet friends help me.

As you can see, the M1 performs admirably well across all models, with the MacBook Pro and Mac mini edging out the MacBook Air. This is a pretty straightforward way to visualize the difference in performance that can result in heavy tasks that last over 20 minutes, where the MacBook Air’s lack of active fan cooling throttles back the M1 a bit. Even with that throttling, the MacBook Air still beats everything here except for the very beefy MacBook Pro.

But the big deal here is really this second chart. After a single build of WebKit, the M1 MacBook Pro had a massive 91% of its battery left. I tried multiple tests here, and I could have easily run a full build of WebKit eight or nine times on one charge of the M1 MacBook’s battery. In comparison, I could have gotten through about three builds on the 16”, and the 13” 2020 model had only one go in it.

This insane performance per watt is the M1’s secret weapon. The battery performance is simply off the chart, even with processor-bound tasks. To give you an idea, throughout this build of WebKit the P-cluster (the performance cores) hit peak pretty much every cycle, while the E-cluster (the efficiency cores) maintained a steady 2GHz. These things are going at it, but they’re super power-efficient.

Battery Life

In addition to charting battery performance in some real-world tests, I also ran a couple of dedicated battery tests. In some cases they ran so long that I thought I had left the machine plugged in by mistake; it’s that good.

I ran a mixed web-browsing and web-video-playback script that hit a series of pages, waited for 30 seconds and then moved on, to simulate browsing. The results show a pretty common sight in our tests, with the M1 outperforming the other MacBooks by just over 25%.
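
For the curious, a script like that is simple to approximate. Here is a minimal sketch of the browsing half using Selenium’s Safari driver; the page list is a stand-in I made up, since the actual test script and page set aren’t public.

```python
# A rough approximation of the mixed-browsing battery test: load a page,
# idle for 30 seconds, move on. The page list is an invented stand-in;
# the actual script used for the review isn't public.
import time
from selenium import webdriver  # run `safaridriver --enable` once on macOS first

PAGES = [
    "https://techcrunch.com",
    "https://en.wikipedia.org/wiki/Special:Random",
    "https://news.ycombinator.com",
]

driver = webdriver.Safari()
try:
    while True:  # loop until the battery gives out
        for url in PAGES:
            driver.get(url)
            time.sleep(30)  # simulate a reader dwelling on the page
finally:
    driver.quit()
```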

In fullscreen 4K/60 video playback, the M1 fares even better, clocking an easy 20 hours at a fixed 50% brightness. In an earlier test, I left auto-adjust on and it crossed the 24-hour mark easily. Yeah, a full day. That’s an iOS-like milestone.

The M1 MacBook Air does very well also, but its smaller battery means less playback time, at 16 hours. Both of them absolutely decimated the earlier models.

Xcode Unzip

This was another developer-centric test that was requested. Once again CPU-bound, and the M1 blew away every other system in my test group: faster than the 8-core 16” MacBook Pro, wildly faster than the 13” MacBook Pro and, yes, twice as fast as the 2019 Mac Pro with its 3.3GHz Xeon.

Image Credits: TechCrunch

For a look at the power curve, and to show that there is no throttling of the MacBook Pro over this period (I never found any throttling over longer periods, by the way), here’s the usage curve.

Unified Memory and Disk Speed

Much ado has been made of Apple including only 16GB of memory on these first M1 machines. The fact of it, however, is that I have been unable to push them hard enough yet to feel any effect of this, thanks to Apple’s move to a unified memory architecture. Moving RAM onto the SoC means no upgradeability — you’re stuck with 16GB forever. But it also means massively faster access.

If I were a betting man, I’d say that this is an intermediate step toward eliminating RAM altogether. It’s possible that a future (far future; this is the play for now) version of Apple’s M-series chips could end up supplying memory to each of the various chips from a vast pool that also serves as permanent storage. For now, though, what you’ve got is a finite, but blazing fast, pool of memory shared between the CPU cores, GPU and other SoC denizens like the Secure Enclave and Neural Engine.

While running many applications simultaneously, the M1 performed extremely well. Because this new architecture keeps everything so close, with memory a short hop away next door rather than out over a PCIe bus, swapping between applications was a non-issue. Even while beefy, data-heavy tasks ran in the background, the rest of the system kept flowing.

Even when the memory pressure tab of Activity Monitor showed that macOS was using swap space, as it did from time to time, I noticed no slowdown in performance.
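
If you want to watch that behavior yourself, here is a small sketch that logs RAM and swap usage while you pile on applications. It uses the third-party psutil library, which is an assumption of mine; Activity Monitor shows the same numbers interactively.

```python
# A small sketch for logging memory pressure during heavy multitasking.
# Uses the third-party psutil library (pip install psutil); this is an
# illustration, not part of the review's methodology.
import time
import psutil

while True:
    vm = psutil.virtual_memory()
    swap = psutil.swap_memory()
    print(f"RAM used: {vm.percent:.0f}%  "
          f"swap used: {swap.used / 2**30:.2f} GiB")
    time.sleep(5)  # sample every five seconds while you load up the machine
```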

Though I wasn’t able to trip it up, I would guess that you would have to throw a single, extremely large file at this thing to get it to show any amount of struggle.

The SSD in the M1 MacBook Pro is running on a PCIe 3.0 bus, and its write and read speeds indicate that.

 

Thunderbolt

The M1 MacBook Pro has two Thunderbolt controllers, one for each port. This means that you’re going to get full PCIe 4.0 speeds out of each, and it seems very likely that Apple could include up to four ports in the future without much change in architecture.

This configuration also means that you can easily power an Apple Pro Display XDR and another monitor besides. I was unable to test two Apple Pro Display XDR monitors side-by-side.

Cooling and throttling

No matter how long the tests I ran were, I was never able to detect any throttling of the CPU on the M1 MacBook Pro. From our testing it was evident that in longer operations (20-40 minutes on up) it was possible to see the MacBook Air pulling back a bit over time. Not so with the MacBook Pro.

Apple says that it has designed a new ‘cooling system’ for the M1 MacBook Pro, and that holds up. There is a single fan, but it is noticeably quieter than the fans in the other MacBooks. In fact, I was never able to get the M1 much hotter than ‘warm,’ and the fan ran at speeds much closer to those of a water-cooled rig than the turbo-engine situation in the other MacBooks.

Even a long, intense Cinebench R23 session could not make the M1 MacBook get loud. Over the course of the benchmark run, the high-performance cores regularly hit 3GHz and the efficiency cores 2GHz. Despite that, it continued to run very cool and very quiet in comparison to other MacBooks. It’s the stealth bomber at the Harrier party.

In that Cinebench test you can see that it doubles the multi-core performance of last year’s 13” MacBook Pro and even beats out the single-core performance of the 16” MacBook Pro.

I ran a couple of Final Cut Pro tests with my test suite. First was a 5-minute 4K60 timeline shot on iPhone 12 Pro, using audio, transitions, titles and color grading. The M1 MacBook Pro performed fantastically, slightly beating out the 16” MacBook Pro.

 

 

With an 8K timeline of the same duration, the 16” MacBook Pro with its Radeon 5500M was able to really shine with FCP’s GPU acceleration. The M1 held its own, though, showing 3x faster speeds than the 13” MacBook Pro with its integrated graphics.

 

And, most impressively, the M1 MacBook Pro used extremely little power to do so: just 17% of the battery to output an 81GB 8K render. The 13” MacBook Pro could not even finish this render on one battery charge.

As you can see in these GFXBench charts, while the M1 MacBook Pro isn’t a powerhouse gaming laptop, we still got some very surprising and impressive results when a rack of Metal benchmarks was run on its GPU. The 16″ MBP still has more raw power, but rendering games at Retina resolution is still very possible here.

The M1 is the future of CPU design

All too often over the years we’ve seen Mac releases hamstrung by the capabilities of the chips and chipsets that were being offered by Intel. Even as recently as the 16” MacBook Pro, Apple was stuck a generation or more behind. The writing was basically on the wall once the iPhone became such a massive hit that Apple began producing more chips than the entire rest of the computing industry combined. 

Apple has now shipped over 2 billion chips, a scale that makes Intel’s desktop business look like a luxury manufacturer’s. I think it was politic of Apple not to mention Intel by name during last week’s announcement, but it’s also clear that Intel’s days are numbered on the Mac; the only saving grace for the rest of the industry is that Apple is incredibly unlikely to make chips for anyone else.

Years ago I wrote an article about the iPhone’s biggest flaw being that its performance per watt limited the new experiences that it was capable of delivering. People hated that piece but I was right. Apple has spent the last decade “fixing” its battery problem by continuing to carve out massive performance gains via its A-series chips all while maintaining essentially the same (or slightly better) battery life across the iPhone lineup. No miracle battery technology has appeared, so Apple went in the opposite direction, grinding away at the chip end of the stick.

What we’re seeing today is the result of Apple flipping the switch to bring all of that power efficiency to the Mac, a device with 5x the raw battery to work with. And those results are spectacular.

Microsoft reveals Pluton, a custom security chip built into Intel, AMD, and Qualcomm processors

By Zack Whittaker

For the past two years, some of the world’s biggest chip makers have battled a series of hardware flaws, like Meltdown and Spectre, which made it possible — though not easy — to pluck passwords and other sensitive secrets directly from their processors. The chip makers rolled out patches, but the flaws forced the companies to rethink how they approach chip security.

Now, Microsoft thinks it has the answer with its new security chip, which it calls Pluton. The chip, announced today, is the brainchild of a partnership between Microsoft and chip makers Intel, AMD and Qualcomm.

Pluton acts as a hardware root-of-trust, which in simple terms protects a device’s hardware from tampering, such as from hardware implants or by hackers exploiting flaws in the device’s low-level firmware. By integrating the chip inside future Intel, AMD, and Qualcomm central processor units, or CPUs, it makes it far more difficult for hackers with physical access to a computer to launch hardware attacks and extract sensitive data, the companies said.

“The Microsoft Pluton design will create a much tighter integration between the hardware and the Windows operating system at the CPU that will reduce the available attack surface,” said David Weston, director of enterprise and operating system security at Microsoft.

Microsoft said Pluton made its first appearance in the Xbox One back in 2013 to make it far more difficult to hack the console or allow gamers to run pirated games. The chip later graduated to Microsoft’s cloud service Azure Sphere, used to secure low-cost Internet of Things devices.

The idea now is to bring that same technology, with some improvements, to new Windows 10 devices.

The chip comes with immediate benefits, like making hardware attacks against Windows devices far more difficult to succeed. But the chip also solves a major security headache by keeping the device’s firmware up-to-date.

Whether or not the Pluton chip can stand the test of time is another matter. Most of the chip vulnerability research has been done by third-party researchers through extensive, and often tedious work. Microsoft’s Weston said the Pluton chip has undergone a security stress-test by its own internal red team and by external vendors. But that could come back to haunt the company if it got something wrong. Case in point: just last month, security researchers found an “unfixable” security flaw in Apple’s T2 security chip — a custom-built chip in most modern Macs that’s analogous to Microsoft’s Pluton — that could open up Macs to the very security threats that the chip is supposed to prevent.

Microsoft declined to say if it planned to offer the Pluton chip designs to other chip makers or if it planned to make the designs open source for anyone to use, but said it plans to share more details in the future, leaving the door open to the possibility.

Which emerging technologies are enterprise companies getting serious about in 2020?

By Walter Thompson
Scott Kirsner Contributor
Scott Kirsner is CEO and co-founder of Innovation Leader, a research and events firm that focuses on innovation in Global 1000 companies, and a longtime business columnist for The Boston Globe.

Startups need to live in the future. They create roadmaps, build products and continually upgrade them with an eye on next year — or even a few years out.

Big companies, often the target customers for startups, live in a much more near-term world. They buy technologies that can solve problems they know about today, rather than those they may face a couple of bends down the road. In other words, they’re driving a Dodge, and most tech entrepreneurs are driving a DeLorean equipped with a flux capacitor.

That situation can lead to a huge waste of time for startups that want to sell to enterprise customers: a business development black hole. Startups are talking about technology shifts and customer demands that the executives inside the large company — even if they have “innovation,” “IT,” or “emerging technology” in their titles — just don’t see as an urgent priority yet, or can’t sell to their colleagues.

How do you avoid the aforementioned black hole? Some recent research that my company, Innovation Leader, conducted in collaboration with KPMG LLP, suggests a constructive approach.

Rather than asking large companies about which technologies they were experimenting with, we created four buckets, based on what you might call “commitment level.” (Our survey had 211 respondents, 62% of them in North America and 59% at companies with greater than $1 billion in annual revenue.) We asked survey respondents to assess a list of 16 technologies, from advanced analytics to quantum computing, and put each one into one of these four buckets. We conducted the survey at the tail end of Q3 2020.
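
As a concrete illustration of how the ranked lists below fall out of that design, here is a minimal sketch of the tally: count how many respondents placed each technology in a given bucket, then rank by that count. The field names and sample rows are invented; this is not Innovation Leader’s actual analysis code.

```python
# A minimal sketch of deriving the "ranked order" lists from raw survey
# rows. Field names and sample data are invented for illustration.
from collections import Counter

# One row per (respondent, technology) pair: which bucket they chose.
responses = [
    {"technology": "AI/machine learning", "bucket": "investing or piloting"},
    {"technology": "Blockchain", "bucket": "learning and exploring"},
    {"technology": "Quantum computing", "bucket": "not exploring or investing"},
    # ... 211 respondents x 16 technologies in the real dataset
]

def ranked(bucket: str) -> list[tuple[str, int]]:
    """Technologies ranked by how many respondents put them in `bucket`."""
    counts = Counter(r["technology"] for r in responses if r["bucket"] == bucket)
    return counts.most_common()

for tech, n in ranked("investing or piloting"):
    print(f"{tech}: {n}")
```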

Respondents in the first group were “not exploring or investing” — in other words, “we don’t care about this right now.” The top technology there was quantum computing.

Bucket #2 was the second-lowest commitment level: “learning and exploring.” At this stage, a startup gets to educate its prospective corporate customer about an emerging technology — but nabbing a purchase commitment is still quite a few exits down the highway. It can be constructive to begin building relationships when a company is at this stage, but your sales staff shouldn’t start calculating their commissions just yet.

Here are the top five things that fell into the “learning and exploring” cohort, in ranked order:

  1. Blockchain.
  2. Augmented reality/mixed reality.
  3. Virtual reality.
  4. AI/machine learning.
  5. Wearable devices.

Technologies in the third group, “investing or piloting,” may represent the sweet spot for startups. At this stage, the corporate customer has already discovered some internal problem or use case that the technology might address. They may have shaken loose some early funding. They may have departments internally, or test sites externally, where they know they can conduct pilots. Often, they’re assessing what established tech vendors like Microsoft, Oracle and Cisco can provide — and they may find their solutions wanting.

Here’s what our survey respondents put into the “investing or piloting” bucket, in ranked order:

  1. Advanced analytics.
  2. AI/machine learning.
  3. Collaboration tools and software.
  4. Cloud infrastructure and services.
  5. Internet of things/new sensors.

By the time a technology is placed into the fourth category, which we dubbed “in-market or accelerating investment,” it may be too late for a startup to find a foothold. There’s already a clear understanding of at least some of the use cases or problems that need solving, and return-on-investment metrics have been established. But some providers have already been chosen based on successful pilots, and you may need to dislodge someone the enterprise is already working with. It can happen, but the headwinds are strong.

Here’s what the survey respondents placed into the “in-market or accelerating investment” bucket, in ranked order:

Python creator Guido van Rossum joins Microsoft

By Frederic Lardinois

Guido van Rossum, the creator of the Python programming language, today announced that he has unretired and joined Microsoft’s Developer Division.

Van Rossum, who was last employed by Dropbox, retired last October after six and a half years at the company. Clearly, that retirement wasn’t meant to last. At Microsoft, van Rossum says, he’ll work to “make using Python better for sure (and not just on Windows).”

A Microsoft spokesperson told us that the company doesn’t have any additional details to share but confirmed that van Rossum has indeed joined Microsoft. “We’re excited to have him as part of the Developer Division. Microsoft is committed to contributing to and growing with the Python community, and Guido’s on-boarding is a reflection of that commitment,” the spokesperson said.

The Dutch programmer started working on what would become Python back in 1989. He continued to actively work on the language during his time at the U.S. National Institute of Standards and Technology in the mid-90s and at various companies afterward, including as Director of PythonLabs at BeOpen and Zope, and at Elemental Security. Before going to Dropbox, he worked for Google from 2005 to 2012. There, he developed the internal code review tool Mondrian and worked on App Engine.

I decided that retirement was boring and have joined the Developer Division at Microsoft. To do what? Too many options to say! But it’ll make using Python better for sure (and not just on Windows :-). There’s lots of open source here. Watch this space.

— Guido van Rossum (@gvanrossum) November 12, 2020

Today, Python is among the most popular programming languages and the de facto standard for AI researchers, for example.

Only a few years ago, van Rossum joining Microsoft would’ve been unthinkable, given the company’s infamous approach to open source. That has clearly changed, and today’s Microsoft is one of the most active corporate open-source contributors among its peers — and now the owner of GitHub. It’s not clear what exactly van Rossum will do at Microsoft, but he notes that there are “too many options to say” and that “there’s lots of open source here.”

PUBG announces India return plan with new game and $100 million investment

By Manish Singh

PUBG Mobile plans to return to India in a new avatar, parent company PUBG Corporation said on Thursday. TechCrunch reported last week that the South Korean gaming firm was plotting its return to the world’s second-largest internet market, two months after its marquee title was banned in the country.

The new game, called PUBG Mobile India, has been specially created for users in India, PUBG Corporation said. It did not share when it plans to release the title.

Additionally, the company — and its parent firm, KRAFTON — said they plan to invest $100 million in India, one of PUBG Mobile’s largest markets, to cultivate the local video game, esports, entertainment and IT ecosystems. It also plans to hire more than 100 employees in the country.

“Thanks to overwhelming community enthusiasm for PUBG esports in India, the company also plans to make investments by hosting India-exclusive esports events, which will feature the biggest tournaments, the largest prize pools, and the best tournament productions,” it said in a statement.

New Delhi has banned more than 200 apps with links to China — including PUBG Mobile and TikTok — in recent months because of cybersecurity concerns. The ban was enforced as tensions escalated on the nations’ disputed border.

To allay concerns of the Indian government, PUBG Mobile cut ties with Chinese internet giant Tencent — which is its publisher in many markets — in India days after the order. Last week it inked a global deal with Microsoft to move all PUBG Mobile data — as well as data from its other properties — to Azure. Microsoft operates three cloud regions in India.

In a statement today, PUBG Corporation said, “privacy and security of Indian player data being a top priority for PUBG Corporation, the company will conduct regular audits and verifications on the storage systems holding Indian users’ personally identifiable information to reinforce security and ensure that their data is safely managed.”

Prior to the ban in early September, PUBG Mobile had amassed over 50 million monthly active users in India, more than any other mobile game in the country. It helped establish an entire ecosystem of esports organisations and even a cottage industry of streamers that made the most of its spectator sport-friendly gameplay, said Rishi Alwani, a longtime analyst of the Indian gaming market and publisher of news outlet The Mako Reactor.

PUBG Corporation’s move today could also set a precedent for other impacted apps to chart their returns to the country. One thing that remains unclear for now — and it is perhaps the most crucial element in all of this — is whether the Indian government has approved PUBG Corporation’s move.

Not surprising to see PUBG Corp take this route.

PUBG Mobile was the #1 grossing mobile game in India, and its ban was felt by both the developers and players.

This new custom version will aim to satisfy regulators, but it remains to be seen if this will be enough. https://t.co/uQDc2qK5B5

— Daniel Ahmad (@ZhugeEX) November 12, 2020

Europe urges e-commerce platforms to share data in fight against coronavirus scams

By Natasha Lomas

European lawmakers are pressing major e-commerce and media platforms to share more data with each other as a tool to fight rogue traders who are targeting consumers with coronavirus scams.

After the pandemic spread to the West, internet platforms were flooded with local ads for PPE of unknown and/or dubious quality and other dubious coronavirus offers — even after some of the firms banned such advertising.

The concern here is not only consumers being ripped off, but also the real risk of harm if people buy a product that does not offer the claimed protection against exposure to the virus, or get sold a bogus coronavirus “cure” when none in fact exists.

In a statement today, Didier Reynders, the EU commissioner for justice, said: “We know from our earlier experience that fraudsters see this pandemic as an opportunity to trick European consumers. We also know that working with the major online platforms is vital to protect consumers from their illegal practices. Today I encouraged the platforms to join forces and engage in a peer-to-peer exchange to further strengthen their response. We need to be even more agile during the second wave currently hitting Europe.”

The Commission said Reynders met with 11 online platforms today — including Amazon, Alibaba/AliExpress, eBay, Facebook, Google, Microsoft/Bing, Rakuten and (TechCrunch’s parent entity) Verizon Media/Yahoo — to discuss new trends and business practices linked to the pandemic and push the tech companies to do more to head off a new wave of COVID-19 scams.

In March this year, EU member states’ consumer protection authorities adopted a common position on the issue. The Commission and a pan-EU network of consumer protection enforcers have been in regular contact with the 11 platforms since then to push for a coordinated response to the threat posed by coronavirus scams.

The Commission claims the action has resulted in the platforms reporting the removal of “hundreds of millions” of illegal offers and ads. It also says they have confirmed what it describes as “a steady decline” in new coronavirus-related listings, without offering more detailed data.

In Europe, tighter regulations over what e-commerce platforms sell are coming down the pipe.

Next month, regional lawmakers are set to unveil a package of legislation that will propose updates to existing e-commerce rules and aim to increase platforms’ legal responsibilities, including around illegal content and dangerous products.

In a speech last week, Commission EVP Margrethe Vestager, who heads up the bloc’s digital policy, said the Digital Services Act (DSA) will require platforms to take more responsibility for dealing with illegal content and dangerous products, including by standardizing processes for reporting illegal content and dealing with reports and complaints related to content.

A second legislative package that’s also due next month — the Digital Markets Act — will introduce additional rules for a subset of platforms considered to hold a dominant market position. This could include requirements that they make data available to rivals, with the aim of fostering competition in digital markets.

MEPs have also pushed for a “know your business customer” principle to be included in the DSA.

Simultaneously, the Commission has been pressing for social media platforms to open up about what it described in June as a coronavirus “infodemic” — in a bid to crack down on COVID-19-related disinformation.

Today the Commission gave an update on actions taken in the month of September by Facebook, Google, Microsoft, Twitter and TikTok to combat coronavirus disinformation — publishing its third set of monitoring reports. Thierry Breton, commissioner for the internal market, said more needs to be done there too.

“Viral spreading of disinformation related to the pandemic puts our citizens’ health and safety at risk. We need even stronger collaboration with online platforms in the coming weeks to fight disinformation effectively,” he said in a statement. 

The platforms are signatories of the EU’s (non-legally binding) Code of Practice on disinformation.

Legally binding transparency rules for platforms on tackling content such as illegal hate speech look set to be part of the DSA package. Though it remains to be seen how the fuzzier issue of “harmful content” (such as disinformation attached to a public health crisis) will be tackled.

A European Democracy Action Plan to address the disinformation issue is also slated before the end of the year.

In a pointed remark accompanying the Commission’s latest monitoring reports today, Vera Jourová, VP for values and transparency, said: “Platforms must step up their efforts to become more transparent and accountable. We need a better framework to help them do the right thing.”

Review: Sony’s PlayStation 5 is here, but next-generation gaming is still on its way

By Devin Coldewey

The new generation of consoles is both a hard and an easy sell. With a big bump to specs and broad backwards compatibility, both the PlayStation 5 and Xbox Series X are certainly the consoles anyone should buy going forward. But with nearly no launch content or must-have features, they also fail to make a compelling case for themselves beyond “the same, but better.” What we’re left with is something more like a new iPhone: You’ll have to upgrade eventually, and it’ll be fine. Just don’t believe the hype for the new consoles… yet.

Disclosure: TechCrunch was provided consoles from both Microsoft and Sony ahead of release, as well as a handful of titles from first and third-party publishers.

In accordance with an elaborate (and ongoing!) series of embargoes for different features and games, impressions have been trickling out about the new platforms for a month now. For a launch that’s already lacking impact, this may have further blunted excitement: Few gamers will get excited when all anyone can write about is the exterior of the console itself, or the first level of the pack-in game. Some features wouldn’t even be available before launch, or are prohibited from coverage until long afterwards, leaving reviewers wondering whether day-one changes would make obsolete any impressions they had. (I’ll update this review when new information comes to light, or link to future coverage.)

But whatever the case, the shackles are finally removed and now we can talk about most (but not all) the new consoles have to offer. Unfortunately it’s… not that much. Despite the companies’ attempts to hype the next generation as a huge leap, there’s simply no evidence of that at launch and probably won’t be for many months.

That doesn’t mean the new platforms are a flop — or even that they aren’t great. But the new generation is a lot like the old one, and compatibility with it is actually the biggest thing the PS5 and Series X have going for them for the opening stretch. Here’s what I can tell you honestly about my time with the PS5.

The hardware: Conversation piece

A PS5 console with a PS4 on top

As you can see, the PS5 is CONSIDERABLY larger than the PS4 Slim. Image Credits: Devin Coldewey / TechCrunch

The PS5 is a strange-looking beast, but I’ll give it this: No one is going to mistake it for any other gaming console. Though they may think it’s an air purifier.

The large, curvy device likely won’t fit with anyone’s decor, so it may be best to just bite the bullet and display it prominently (fortunately it sits comfortably vertically or on a stand horizontally). I look forward to getting custom shields for the side to make this thing a little less prominent.

The console is fairly quiet while playing games, but you’ll probably want it at least a few feet away from you, especially if you’re going to play with a disc, which is much louder than normal operation.

As for performance, it’s really impossible to say. The only “next-gen” (really cross-gen) game I got to play much of was Spider-Man: Miles Morales, and while it looked great (more impressions below), it’s incredibly hard to make any substantive comment on the machine’s computing and rendering chops.

Close up of the Sony logo on the PS5 and tiny characters making a pattern.

Image Credits: Devin Coldewey / TechCrunch

The prospect of gaming in 4K and HDR, and of advanced techniques like ray tracing changing how games look, is an exciting one. But in the first place you need a TV setup that’s capable of taking advantage of these features, and in the second — to be perfectly honest, they’re not all they’re cracked up to be. A high-quality 1080p TV from the last couple years will look very nearly as good despite not supporting Dolby Vision or what have you. (I know because I got a new TV during the review period. They both looked great.)

Load times — a function of the much-lauded custom SSD in this thing — are similarly hard to evaluate, though going from menu to game in Miles Morales was certainly fast, fast-traveling was faster, and the previous game loaded faster than on my regular PS4. This benefit will of course vary from game to game, however — some developers are announcing their performance gains publicly, while others with less impressive ones may just let sleeping dogs lie. Without more titles to get a feel for the console’s performance improvements, right now you’ll have to take Sony’s word on things.

The controller: DualSense makes sense

Close up image of a Sony DualSense controller

Image Credits: Devin Coldewey/TechCrunch

One place where Sony is attempting to advance the ball is in the new DualSense controller.

Not in the shape and color and slick, transparent buttons — those are not so hot. It feels like a DualShock that’s let itself go a bit, and I’m definitely not a fan of the “PS” shaped PlayStation button. This thing feels like a grime magnet.

And not in the built-in speaker and microphone, either; I struggle to think of any application for these that wouldn’t be better served by a headset or avoided altogether.

What’s actually a clear and impressive upgrade is the triggers, which feature incredibly precise mechanical resistance that serves all kinds of gameplay functions and sets the imagination running.

A Playstation 5 and 4 controller next to each other.

Image Credits: Devin Coldewey / TechCrunch

The new triggers are connected to a set of gears that impart actual pressure against your fingers, from a very light tap to, presumably (though I haven’t experienced it), actually pushing your fingers back.

The range is wide, and the pressure can be applied anywhere along the trigger’s travel, enabling interesting effects like (the obvious one in violent games) resistance as you squeeze a gun’s trigger, which then clicks and releases when it fires. In Miles Morales, the triggers act as a very sensitive rumble, but also give you tactile feedback when you’re swinging, telling you when you’ve made contact and so on.

Honestly, I love it. I want to play games that use it well. I don’t want to play games that don’t have it! Hopefully developers will embrace the variable-resistance triggers, because they genuinely add something to the experience and, if I’m not mistaken, even have the potential to make games more accessible.

The UI: More is more

The PS4’s interface had the illusion of simplicity, and the PS5 continues that with two steps forward and one step back.

For one thing, separating out the “games” and “media” portions of the machine is a smart move. As OTT apps and streaming services proliferate, they take up more and more space and it makes perfect sense to isolate them.

Screenshot of the PS5 menu.

As for the games side, it’s similar to the PS4 in that it’s a horizontal line that you click through, and when a game is highlighted it “takes over” the screen with a background, the latest news, achievements and so on. As before it works perfectly well.

Previously, when you pressed the PlayStation button, you’d return to the main menu and pause whatever you were playing. If you held down the button, it opened an in-game side menu where you could invite friends, turn off the console and other common tasks.

The PS5 reverses that: The long press now returns you to the home screen, while a short press brings up the in-game menu (now a row of tiny icons on the bottom of the screen — not a fan of this change).

The in-game menu now sports an in-depth “card” system that, while cool in theory, seems like one of those things that will not actually be used to great effect. The giant cards show recent screenshots and achievements, friend activity and, if the developer has enabled it, info about your current mission or game progress.

For instance, in Miles Morales, hitting pause told me I was 22% of the way through a side mission to rescue a bodega cat named Spider-Man, with an image of the bodega where I accepted it. Nice, but it’s redundant with the info presented in-game if I pause in the ordinary way. There’s more to it, though — the cards can also be used as “deep links” to game features like multiplayer, quests in progress, quick travel locations, even hints.

Image Credits: Devin Coldewey / TechCrunch

Sony showed off these advanced possibilities in a video of Sackboy: The Big Adventure, but since that game isn’t yet available I can’t yet speak to how well it works. More importantly, I can’t make any promises on behalf of developers, who may or may not integrate the system well. At the very least it could be nice, but I’m afraid it will be relegated to first-party games (of which Sony promises many), and be optional at that.

It’s hard to call the new UI an improvement over the old one — it’s different, in some ways busier and in some ways streamlined. Where it may improve matters is in reducing friction around organizing voice chat and joining friends’ games. But that capability wasn’t ready for launch.

A couple nice things I want to note: Setting up the PS5 to your own preferences is super easy. I downloaded my cloud saves in a minute or two, and there’s a great new settings page for things people often change in games: difficulty, language, inverting the camera and some other things. There are also accessibility options built-in: a screen reader, chat transcription and other goodies I wasn’t able to test but am glad to see.

The games: Well… the PS5 is the best PS4 you can buy

The chief reason for buying a new console is to play the new games on it. When the Switch came out, half the reason anyone bought it was to play the fabulous new Zelda. Sadly, the selection at this launch is laughably thin for both Sony and Microsoft fanboys.

As I noted above, the only game I was provided in time to get any real impressions (that I’m permitted to write about) was Spider-Man: Miles Morales. Having recently completed its predecessor on PS4, I can say that the new game looks and plays better, with shorter load times, improved lighting, more detailed buildings and so on. But the 2018 Spider-Man still looks and plays very well — this is the difference you’d expect in a sequel, not from one generation to the next. (To be clear, the PS5 version does look considerably better, it’s just not the night and day we’ve been led to expect.)

A screenshot of Spider-Man: Miles Morales on PS5

As far as a review goes, I’ll just say that if you liked the first, you’ll like the second, and if you didn’t play the original, play it first because it’s great. I also want to hand it to the new game for its commitment to diversity.

But that game will also be coming out on the PS4, and almost all of the next year’s big titles will likewise be cross-generation releases, spanning PS4 and PS5, Xbox One and Series X.

They will, of course, play and look better on the PS5 than the PS4. But it’s a hard sell to tell someone to pay $500 so they can play the next Assassin’s Creed or Horizon: Zero Dawn in 4K HDR rather than 1080p.

Meanwhile, the few games you can only play on PS5 are for niche players. Sackboy looks to be a fun platformer but hardly a blockbuster; Demon’s Souls is my most anticipated title of the season, but a remake of a legendary but little-played and controller-bitingly difficult PS3 game isn’t going to break sales records; and Destruction All-Stars, an online-only racing battle royale game, got delayed until February, which suggests it’s not playing well.

Adding them all up, there really isn’t much reason in terms of exclusives to pick the PS5 over the Xbox Series X or, at least for 2021, a PS4 Pro.

The good news is that the PS5 is now without question the best way to play the huge catalog of amazing PS4 games out there. Nearly all of them will look better, play better and load faster. Sony as much as admitted this when they bundled a dozen of the best games from the last generation with the PS5. Honestly, I’m looking forward to finally playing God of War (I know… don’t hassle me!) on this thing more than I am Assassin’s Creed: Valhalla.

[Gallery: Bloodborne load-time comparison shots on PS4 and PS5]

Unfortunately, I can’t yet say whether these PS4 games see much in the way of real improvements. As mentioned above, a lot of that depends on support from the developers. But as a simple test, loading the Central Yharnam area in Bloodborne took about 33 seconds on the PS4 and 16 on the PS5 (and as you can see in the shots above, the game looks identical). Other games anecdotally showed improvements as well, though I didn’t time them.

The verdict: The must-have console for the 2021 holidays

A PS5 console

Image Credits: Devin Coldewey / TechCrunch

No, that isn’t a typo. The PS5 (and I am joined in this opinion by our review of its rival, the Xbox Series X) simply isn’t a console anyone should rush out and purchase for any reason, not least because it will be near-impossible to get one in the next month or so, making the possibility of unwrapping a PS5 a remote one for eager youths.

The power of the next generation is not much on display in any of the titles I have been able to play, and while a handful of upcoming games may show off its advantages, those games will likely play just as well on the other platforms they’re being released on.

Nor are there any compelling new features that make the PS5 feel truly next-gen, with the possible exception of the variable resistance triggers (the Series X has multi-game suspension at least, and I’d be jealous if there were any games to switch between). For the next 6-8 months, the PS5 will merely be the best way to play the same games everyone else is playing, or has been playing for years, but in 4K. That’s it!

The rush by Sony and Microsoft to get these consoles out by the holidays this year simply didn’t have the support of the publishers and developers that make the games that make consoles worth having. That will change late next year as the actual next-gen titles and meaningful exclusives start to appear. And a year from now the PS5 and Series X will truly be must-haves, because there will be things that are only available for them.

I’m not saying buy your kid a PS4 Pro for Christmas. And I’m not saying the PS5 isn’t a great way to play games. I’m just saying that outside some slight differences that many gamers don’t even have the setup to notice, there’s no reason to run out and buy a PS5 right now. Relax and enjoy the latest, greatest games on your old PS4 in confidence, knowing that you’ll save $50 when a Cyberpunk 2077 bundle goes on sale in the summer.

So don’t feel bad if you can’t lay your hands on a PS5 to keep you entertained this winter — a PS4 will do you just fine for the present while the next generation makes its lazy way toward the consoles it will eventually grace.

Amazon to invest $2.8 billion to build its second data center region in India

By Manish Singh

Amazon will invest about $2.8 billion in Telangana to set up a new AWS Cloud region in the southern state of India, a top Indian politician announced on Friday.

The investment will allow Amazon to launch an AWS Cloud region in Hyderabad by mid-2022, said K. T. Rama Rao, Minister for Information Technology, Electronics & Communications, Municipal Administration and Urban Development and Industries & Commerce Departments, Government of Telangana.

The new AWS Asia Pacific region will be Amazon’s second infrastructure region in India, Amazon said in a press release. The company itself did not disclose the size of the investment.

“The new AWS Asia Pacific (Hyderabad) Region will enable even more developers, startups, and enterprises as well as government, education, and non-profit organizations to run their applications and serve end users from data centers located in India,” the e-commerce giant said.

But there is a lot in it for Amazon as well. Jayanth Kolla, chief analyst at consultancy firm Convergence Catalyst, told TechCrunch that by having more cloud regions in India, it will be easier for Amazon to comply with the nation’s data localization policy. This compliance will also help Amazon, which currently leads the cloud market in India, attract more customers.

AWS has courted several high-profile businesses as customers in recent years. Some of these include automobile giant Ashok Leyland, life insurance firm Aditya Birla Capital, edtech giant Byju’s, Axis Bank, Bajaj Capital, ClearTax, Dream11, Edelweiss, Freshworks, HDFC Life, Mahindra Electric, Ola, Oyo, Policybazaar, RBL Bank, redBus, Sharda University, Swiggy, Tata Sky and Zerodha.

Kolla said there is a possibility that in the future several more states in India will introduce their own versions of data localization laws. “This is also a big win for the state government of Telangana, home of the high tech city Hyderabad, for attracting this level of investment,” he added. This is the largest foreign direct investment in Telangana, a state that was formed in 2014, said Rama Rao.

“Businesses in India are embracing cloud computing to reduce costs, increase agility, and enable rapid innovation to meet the needs of billions of customers in India and abroad,” said Peter DeSantis, senior vice president of Global Infrastructure and Customer Support, Amazon Web Services, in a statement. “Together with our AWS Asia Pacific (Mumbai) Region, we’re providing customers with more flexibility and choice, while allowing them to architect their infrastructure for even greater fault tolerance, resiliency, and availability across geographic locations.”

The investment illustrates the opportunities Amazon, which has poured more than $6.5 billion into its India operations to date, sees in the world’s second largest internet market.

Amazon, Google and Microsoft have explored various ways to expand the reach of their cloud services in India. Microsoft inked a long-term deal with telecom giant Jio Platforms last year to offer millions of businesses access to Office 365 and other Microsoft services at a more affordable cost. Earlier this year, Amazon formed a strategic alliance with Airtel, one of the largest telecom operators in India. As part of the deal, Airtel will sell AWS to many of its customers. Microsoft today has three data center regions in India, while Google has two.

At stake is India’s public cloud market, which, according to market research group IDC, is expected to be worth $7 billion by 2024.

Kite adds support for 11 new languages to its AI code completion tool

By Frederic Lardinois

When Kite, the well-funded AI-driven code completion tool, launched in 2019, its technology looked very impressive, but it only supported Python at the time. Earlier this year it also added JavaScript, and today it is launching support for 11 new languages at once.

The new languages are Java, Kotlin, Scala, C/C++, Objective-C, C#, Go, TypeScript, HTML/CSS and Less. Kite works in most popular development environments, including the likes of VS Code, JupyterLab, Vim, Sublime and Atom, as well as all JetBrains IntelliJ-based IDEs, including Android Studio.

This will make Kite a far more attractive solution for a lot of developers. Currently, the company says, it saves its most active developers from writing about 175 “words” of code every day. One thing that has always made Kite stand out is that it ranks its suggestions by relevance — not alphabetically, as some of its non-AI-driven competitors do. To build its models, Kite fed its algorithms code from GitHub.
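To make that ranking distinction concrete, here is a minimal, purely illustrative sketch (not Kite’s actual code; the scores are made-up stand-ins for a trained model’s output) of relevance-ranked completions versus alphabetical ordering:

```python
# Purely illustrative: rank completion candidates by a model score instead of
# alphabetically. The scores below are invented; a real engine like Kite would
# produce them with a trained model conditioned on the surrounding code.
candidates = {
    "append": 0.92,            # most likely continuation in this context
    "add": 0.31,
    "clear": 0.15,
    "as_integer_ratio": 0.02,
}

# How a non-AI completer might order suggestions:
alphabetical = sorted(candidates)

# How a relevance-ranked completer orders the same suggestions:
by_relevance = sorted(candidates, key=candidates.get, reverse=True)

print(alphabetical)   # ['add', 'append', 'as_integer_ratio', 'clear']
print(by_relevance)   # ['append', 'add', 'clear', 'as_integer_ratio']
```

The point is simply that a learned score, not lexical order, decides what surfaces first.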

The service is available as a free download for Windows users and as a server-powered paid enterprise version with a larger deep learning model that consequently offers more AI smarts, as well as the ability to create custom models. The paid version also includes support for multi-line code completion, while the free version only supports line-of-code completions.

Kite notes that in addition to adding new languages, it also spent the last year focusing on the user experience, which should now be less distracting and, of course, offer more relevant completions.

Image Credits: Kite

Microsoft Azure announces its first region in Austria

By Frederic Lardinois

Microsoft today announced its plans to launch a new data center region in Austria, its first in the country. With nearby Azure regions in Switzerland, Germany, France and a planned new region in northern Italy, this part of Europe now has its fair share of Azure coverage. Microsoft also noted that it plans to launch a new ‘Center of Digital Excellence’ in Austria to “modernize Austria’s IT infrastructure, public governmental services and industry innovation.”

In total, Azure now features 65 cloud regions — though that number includes some that aren’t online yet. As its competitors like to point out, not all of them feature multiple availability zones yet, but the company plans to change that. Until then, the fact that there’s usually another nearby region can often make up for that.

Image Credits: Microsoft

Speaking of availability zones: in addition to announcing this new data center region, Microsoft today also announced plans to expand its cloud in Brazil, with new availability zones to enable high-availability workloads launching in the existing Brazil South region in 2021. Currently, this region only supports Azure workloads, but it will add support for Microsoft 365, Dynamics 365 and Power Platform over the course of the next few months.

This announcement is part of a large commitment to building out its presence in Brazil. Microsoft is also partnering with the Ministry of Economy “to help job matching for up to 25 million workers and is offering free digital skilling with the capacity to train up to 5.5 million people” and to use its AI to protect the rainforest. That last part may sound a bit naive, but the specific plan here is to use AI to predict likely deforestation zones based on data from satellite images.

Stotles secures funding for platform which brings transparency to government tenders, contracts

By Mike Butcher

The public sector usually publishes its business opportunities in the form of ‘tenders’ to increase transparency to the public. However, this data is scattered, and larger businesses have access to more information, giving them opportunities to grab contracts before official tenders are released. Witness the controversy around UK government contracts going to private consultants with questionable prior experience in the areas they won contracts for.

Public-to-private sector business makes up 14% of global GDP, and even a 1% improvement could save taxpayers €20B per year, according to the European Commission.

Stotles is a new UK startup whose technology turns fragmented public sector data — such as spending, tenders, contracts, meeting minutes or news releases — into a clearer view of the market, extracting relevant early signals about potential opportunities.
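As a rough illustration of the idea (a hypothetical sketch, not Stotles’ implementation; the sources, buyers, field names and keywords are all invented), here is how scattered records from different public sources might be normalized into one stream and filtered for early signals:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified records. Real sources (tender feeds, spend data,
# meeting minutes) each arrive in their own format and would need parsing;
# the names and fields here are invented for illustration.
@dataclass
class Signal:
    source: str
    buyer: str
    text: str
    published: date

raw_signals = [
    Signal("tender_feed", "NHS Digital", "Cloud hosting framework renewal", date(2020, 11, 2)),
    Signal("meeting_minutes", "Leeds Council", "Discussed an RPA pilot for invoicing", date(2020, 10, 20)),
    Signal("spend_data", "Home Office", "Payment to incumbent IT supplier", date(2020, 10, 5)),
]

def early_signals(signals, keywords):
    """Return signals matching a supplier's keywords, newest first."""
    hits = [s for s in signals if any(k.lower() in s.text.lower() for k in keywords)]
    return sorted(hits, key=lambda s: s.published, reverse=True)

for s in early_signals(raw_signals, ["cloud", "RPA"]):
    print(f"{s.published} [{s.source}] {s.buyer}: {s.text}")
```

The value proposition is in the normalization step: once heterogeneous records share one schema, a supplier can watch a single feed instead of dozens of portals.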

It’s now raised a £1.4m seed round led by Speedinvest, with participation from 7Percent Ventures, FJ Labs, and high-profile angels including Matt Robinson, co-founder of GoCardless and CEO at Nested; Carlos Gonzalez-Cadenas, COO at GoCardless; Charlie Songhurst, former head of corporate strategy at Microsoft; Will Neale, founder of Grabyo; and Akhil Paul. It received a previous investment from Seedcamp last year.

Stotles’ founders say they had “scathing” experiences dealing with public procurement in their previous roles at organizations like Boston Consulting Group and the World Economic Forum.

The private beta has been open for nine months, and is used by companies including UiPath, Freshworks, Rackspace, and Couchbase. With this funding announcement, they’ll be opening up an early access program.

Competitors include: Global Data, Contracts Advance, BIP Solutions, Spend Network/Open Opps, Tussel, TenderLake. However, most of the players out there are focused on tracking cold tenders, or providing contracting data for periodic generic market research.

Microsoft debuts Azure Space to cater to the space industry, partners with SpaceX for Starlink datacenter broadband

By Darrell Etherington

Microsoft is taking its Azure cloud computing platform to the final frontier – space. It now has a dedicated business unit called Azure Space for that purpose, made up of industry heavyweights and engineers who are focused on space-sector services including simulation of space missions, gathering and interpreting satellite data to provide insights, and providing global satellite networking capabilities through new and expanded partnerships.

One of Microsoft’s new partners for Azure Space is SpaceX, the progenitor of and major current player in the so-called ‘New Space’ industry. SpaceX will be providing Microsoft with access to its Starlink low-latency, satellite-based broadband network for Microsoft’s new Azure Modular Datacenter (MDC) — essentially an on-demand, container-based data center unit that can be deployed in remote locations, either to operate on its own or to boost local capabilities.

Image Credits: Microsoft

The MDC is a contained unit and can operate off-grid using its own satellite network connectivity add-on. It’s similar in concept to the company’s work on underwater data centers, but keeping it on the ground obviously opens up more opportunities in terms of locating it where people need it, rather than having to be near an ocean or sea.

The other big part of this announcement focuses on space preparedness via simulation. Microsoft today revealed the Azure Orbital Emulator, which provides a computer-emulated environment for testing satellite constellation operations, using both software and hardware. It’s basically aiming to provide conditions as close to those in space as possible on the ground, in order to get everything ready for coordinating large, interconnected constellations of automated satellites in low Earth orbit, an increasing need as more defense agencies and private companies pursue this approach versus the legacy method of relying on one, two or just a few large geosynchronous spacecraft.

Image Credits: Microsoft

Microsoft says the goal with the Orbital Emulator is to train AI for use on orbital spacecraft before those spacecraft are actually launched – from the early development phase, right up to working with production hardware on the ground before it takes its trip to space. That’s definitely a big potential competitive advantage, because it should help companies spot even more potential problems early on while they’re still relatively easy to fix (not the case on orbit).
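To give a flavor of what “emulating constellation operations” can mean at the very simplest level, here is a toy sketch (entirely illustrative; the real Azure Orbital Emulator models far more, including RF links, hardware in the loop and onboard software) that propagates an evenly spaced ring of LEO satellites and checks when each passes over a fixed ground station:

```python
import math

# Toy illustration only. All numbers are illustrative, Earth's rotation is
# ignored, and "visibility" is reduced to a crude angular-window check.

EARTH_RADIUS_KM = 6371.0
ALTITUDE_KM = 550.0
ORBIT_RADIUS_KM = EARTH_RADIUS_KM + ALTITUDE_KM
MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2
PERIOD_S = 2 * math.pi * math.sqrt(ORBIT_RADIUS_KM**3 / MU)  # ~95 minutes

def sat_angle(i, n_sats, t):
    """Angular position (radians) of satellite i in an evenly spaced ring."""
    return 2 * math.pi * i / n_sats + 2 * math.pi * t / PERIOD_S

def visible(sat_theta, station_theta, half_window=math.radians(25)):
    """Crude visibility test: satellite within an angular window of the station."""
    d = abs((sat_theta - station_theta + math.pi) % (2 * math.pi) - math.pi)
    return d < half_window

STATION_THETA = 0.0
for t in range(0, int(PERIOD_S), 300):  # sample one orbit every 5 minutes
    in_view = [i for i in range(8) if visible(sat_angle(i, 8, t), STATION_THETA)]
    print(f"t={t:5d}s  satellites in view: {in_view}")
```

Even this crude model shows why coordination gets hard fast: which satellites can talk to a given station changes continuously, and an operator has to schedule contacts around those windows.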

This emulated environment for on-orbit mission prep is already in use by Azure Government customers, the company notes. It’s also looking for more partners across government and industry for space-related services, including communication, national security, satellite services such as observation and telemetry, and more.

Adobe Lightroom gets a new color grading tool, auto versions, graphical watermarking and more

By Frederic Lardinois

At its MAX conference, Adobe today announced the launch of the latest version of Lightroom, its popular photo management and editing tool. The highlights of today’s release are the introduction of a new color grading tool that’s more akin to what you’d find in a video editor like Adobe Premiere or DaVinci Resolve, auto versioning that’s saved in the cloud (and hence not available in Lightroom Classic) and graphical watermarks, in addition to a number of other small feature updates across the application.

Adobe had already teased the launch of the new color grading feature last month, which was probably a good idea given how much of a change this is for photographers who have used Lightroom before. Adjusting color is, after all, one of the main features of Lightroom and this is a major change.

Image Credits: Adobe

At their core, the new color wheels replace the existing ‘split toning’ controls in Lightroom.

“Color Grading is an extension of Split Toning — it can do everything Split Toning did, plus much more,” Adobe’s Max Wendt explains in today’s announcement. “Your existing images with Split Toning settings will look exactly the same as they did before, your old Split Toning presets will also still look the same when you apply them, and you can still get the same results if you had a familiar starting point when doing Split Toning manually.”
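To give a rough sense of how three-way grading generalizes split toning, here is a minimal sketch (generic color-grading math, not Adobe’s algorithm; the weighting curves and constants are invented): shadows, midtones and highlights each get a tint, blended per pixel by luminance, and split toning falls out as the special case where the midtone tint stays neutral.

```python
import numpy as np

def three_way_grade(rgb, shadow_tint, midtone_tint, highlight_tint, strength=0.15):
    """Rough sketch of three-way color grading: blend per-pixel tints
    weighted by luminance. Split toning is the special case where the
    midtone tint is neutral (0.5, 0.5, 0.5). Not Adobe's actual algorithm.
    rgb: float array in [0, 1], shape (..., 3); tints: RGB triples in [0, 1]."""
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])      # relative luminance
    w_shadow = np.clip(1.0 - 2.0 * lum, 0.0, 1.0)       # strongest in darks
    w_highlight = np.clip(2.0 * lum - 1.0, 0.0, 1.0)    # strongest in brights
    w_midtone = 1.0 - w_shadow - w_highlight            # whatever remains
    tint = (w_shadow[..., None] * np.asarray(shadow_tint)
            + w_midtone[..., None] * np.asarray(midtone_tint)
            + w_highlight[..., None] * np.asarray(highlight_tint))
    # Shift each pixel toward its blended tint; neutral tints change nothing.
    return np.clip(rgb + strength * (tint - 0.5) * 2.0, 0.0, 1.0)

# Example: teal shadows, neutral midtones, warm highlights on a gray ramp.
ramp = np.linspace(0, 1, 5)[:, None].repeat(3, axis=1)
graded = three_way_grade(ramp, (0.3, 0.5, 0.6), (0.5, 0.5, 0.5), (0.6, 0.55, 0.4))
print(graded.round(3))
```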

My guess is that it’ll take a while for many Lightroom users to get the hang of these new color wheels. Overall, though, I think this new system is more intuitive than the split toning feature, which a lot of users simply ignored.

The new color grading feature will be available across platforms and in Lightroom Classic, as well as Camera Raw.

The other new feature Adobe is highlighting with this release is graphical watermarks (available on Windows, Mac, iOS, iPadOS, Android and Chrome OS), which augment the existing text-based watermarking in Lightroom. The feature does exactly what the name implies, and the watermarks are automatically applied when you share or export an image.

Image Credits: Adobe

The most important overall quality of life feature the team is adding is auto versions (also available on Windows, Mac, iOS, iPadOS, Android and Chrome OS). This makes it far easier to save different versions of an image — and these versions are synced across platforms. That way, you can easily go back and forth between different edits and revert those as necessary, too.

Image Credits: Adobe

With its new ‘best photos’ feature, Adobe is now also using its AI smarts to find the best photos you’ve taken, but only on iOS, iPadOS, Android, Chrome OS and the web. It’ll look at the technical aspects of your photo, as well as whether your subjects have their eyes open and face forward, for example, and the overall framing of the image. Users can decide how many of their images make the cut by adjusting a threshold slider.

Another nifty new feature for Canon shooters who use Lightroom Classic is the addition of a tethered live view for Canon – with support for other cameras coming soon. With this, you get a real-time feed from your camera, making it easier to collaborate with others in real time.

 

Vectary, a design platform for 3D and AR, raises $7.3M from EQT and Blueyard

By Mike Butcher

Vectary, a design platform for 3D and Augmented Reality (AR), has raised a $7.3 million round led by European fund EQT Ventures. Existing investor BlueYard (Berlin) also participated.

Vectary makes high-quality 3D design more accessible for consumers; it has garnered over one million creators worldwide and counts more than a thousand digital agencies and creative studios as users.

With the coronavirus pandemic shifting more people online, Vectary says it has seen a 300% increase in AR views as more businesses start showcasing their products in 3D and AR.

Vectary was founded in 2014 by Michal Koor (CEO) and Pavol Sovis (CTO), who were both from the design and technology worlds.

The complexity of using and sharing content created by traditional 3D design tools has been a barrier to the adoption of 3D, which is what Vectary addresses.

Although Microsoft, Facebook and Apple are making it easier for consumers, the creative tools remain lacking. Vectary believes that seamless 3D/AR content creation and sharing will be key to mainstream adoption.

Designers and creatives can use Vectary to apply 2D design on a 3D object in Figma or Sketch; create 3D customizers in Webflow with Embed API; and add 3D interactivity to decks.
