
Unity launches its Cloud Content Delivery service for game developers

By Frederic Lardinois

Unity, the company behind the popular real-time 3D engine, today officially launched its Cloud Content Delivery service. This new service, which is engine-agnostic, combines a content delivery network and backend-as-a-service platform to help developers distribute and update their games. The idea here is to offer Unity developers — and those using other game engines — a live game service option that helps them get the right content to their players at the right time.

As Unity’s Felix The noted, most game developers currently use a standard CDN provider, but that means they must also build their own last-mile delivery service to make their install and update process more dynamic and configurable. Or, as most gamers can attest, the developers simply opt to ship the game as a large binary, and with every update the user has to download that massive file again.

“That can mean the adoption of your new game content or any content will trail a little bit behind because you are reliant on people doing the updates necessary,” The said.

And while the Cloud Delivery Service can be used across platforms, the team is mostly focusing on mobile for now. “We are big fans of focusing on a certain segment when we start and then we can decide how we want to expand. There is a lot of need in the mobile space right now — more so than the rest,” The said. To account for this, the Cloud Content Delivery service allows developers to specify which binary to send to which device, for example.

Having a CDN is one thing, but that last-mile delivery, as The calls it, is where Unity believes it can solve a real pain point for developers.

“CDNs, you get content. Period,” The said. “But in this case, if you want to, as a game developer, test a build — is this QA ready? Is this something that is still being QAed? The build that you want to assign to be downloaded from our Cloud Content Delivery will be different. You want to soft launch new downloadable content for Canada before you release it in the U.S.? You would use our system to configure that. It’s really purpose-built with video games in mind.”

The team decided to keep pricing simple. Developers pay only for egress, plus a very small fee for storage. There is no regional pricing either: the first 50GB of bandwidth usage is free, Unity charges $0.08 per GB for the next 50TB, and there are additional pricing tiers for those who use more than 50TB ($0.06/GB) and 500TB ($0.03/GB).
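The tiered pricing above can be sketched as a small calculator. The tier boundaries and per-GB rates come from the article, but the function itself is purely illustrative, and treating the tiers as marginal (each rate applying only to usage within its band) is an assumption:

```python
# Illustrative cost calculator for the tiered bandwidth pricing described
# above. Tier boundaries and rates are from the article; treating the
# tiers as marginal bands is an assumption, as is the use of decimal
# (1000 GB = 1 TB) units.

TB = 1000  # GB per TB, assuming decimal units

# (upper bound of band in GB, price per GB within that band)
TIERS = [
    (50, 0.00),            # first 50 GB free
    (50 * TB, 0.08),       # up to 50 TB
    (500 * TB, 0.06),      # up to 500 TB
    (float("inf"), 0.03),  # everything beyond 500 TB
]

def egress_cost(gb: float) -> float:
    """Return the egress cost in USD for `gb` gigabytes of bandwidth."""
    cost, lower = 0.0, 0.0
    for upper, rate in TIERS:
        if gb > lower:
            cost += (min(gb, upper) - lower) * rate
        lower = upper
    return cost
```

Under this reading, a game that pushes 150GB in a month pays nothing for the first 50GB and $0.08/GB for the remaining 100GB, or $8.00 total.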

“Our intention is that people will look at it and don’t worry about ‘what does this mean? I need a pricing calculator. I need to simulate what’s it going to cost me,’ but really just focus on the fact that they need to make great content,” The explained.

It’s worth highlighting that the delivery service is engine-agnostic. Unity, of course, would like you to use it for games written with the help of the Unity engine, but it’s not a requirement. The argues that this is part of the company’s overall philosophy.

“Our mission has always been centered around democratizing development and making sure that people — regardless of their choices — will have access to success,” he said. “And in terms of operating your game, the decision of a gaming engine typically has been made well before operating your game ever comes into the picture. […] Developer success is at the heart of what we want to focus on.”

Watch SpaceX launch its 12th Starlink satellite internet mission live

By Darrell Etherington

SpaceX is about to hit an even dozen for its Starlink launches, which carry the company’s own broadband internet satellites to low Earth orbit. This flight carries a full 60-satellite complement of Starlink spacecraft, whereas the last couple of these missions reserved a little space for client payloads. The launch is set to take off at 8:46 AM EDT (5:46 AM PDT) from Kennedy Space Center in Florida, and there’s a backup opportunity tomorrow morning should it need to be scrubbed for any reason.

This mission will use a Falcon 9 booster that has flown once previously, just a few months ago in June for a mission that delivered a GPS III satellite on behalf of the U.S. Space Force. SpaceX will also try to recover the booster with a landing at sea on its ‘Of Course I Still Love You’ drone landing ship.

Starlink has been by far the most frequent launch focus for SpaceX this year, as the company ramps up the size of its active constellation in preparation for the deployment of its service in the U.S. According to some internet speed-testing websites, the service is already being used by some individuals, and a leak from SpaceX’s dedicated Starlink website indicates a broader public beta test is imminent. The company has said service should be available in parts of the U.S. and Canada later this year, with a planned expansion to follow in 2021.

The webcast above should go live about 15 minutes prior to the liftoff time, so at around 8:31 AM EDT (5:31 AM PDT).

Microsoft launches a deepfake detector tool ahead of US election

By Natasha Lomas

Microsoft has added to the slowly growing pile of technologies aimed at spotting synthetic media (aka deepfakes) with the launch of a tool for analyzing videos and still photos to generate a manipulation score.

The tool, called Video Authenticator, provides what Microsoft calls “a percentage chance, or confidence score” that the media has been artificially manipulated.

“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” it writes in a blog post announcing the tech. “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”

If a piece of online content looks real but ‘smells’ wrong, chances are it’s a high-tech manipulation trying to pass as real — perhaps with a malicious intent to misinform people.

And while plenty of deepfakes are created with a very different intent — to be funny or entertaining — taken out of context such synthetic media can still take on a life of its own as it spreads, meaning it can also end up tricking unsuspecting viewers.

While AI tech is used to generate realistic deepfakes, identifying visual disinformation using technology is still a hard problem — and a critically thinking mind remains the best tool for spotting high-tech BS.

Nonetheless, technologists continue to work on deepfake spotters — including this latest offering from Microsoft.

Its blog post warns, though, that the tech may offer only passing utility in the AI-fuelled disinformation arms race: “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”

This summer a competition kicked off by Facebook to develop a deepfake detector served up results that were better than guessing — but only just, in the case of a dataset the researchers hadn’t had prior access to.

Microsoft, meanwhile, says its Video Authenticator tool was created using a public dataset from FaceForensics++ and tested on the DeepFake Detection Challenge Dataset, which it notes are “both leading models for training and testing deepfake detection technologies”.

It’s partnering with the San Francisco-based AI Foundation to make the tool available to organizations involved in the democratic process this year — including news outlets and political campaigns.

“Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here,” Microsoft adds.

The tool has been developed by its R&D division, Microsoft Research, in coordination with its Responsible AI team and its AI, Ethics and Effects in Engineering and Research (AETHER) Committee, an internal advisory body — as part of a wider program Microsoft is running aimed at defending democracy from threats posed by disinformation.

“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”

On the latter front, Microsoft has also announced a system that will enable content producers to add digital hashes and certificates to media that remain in their metadata as the content travels online — providing a reference point for authenticity.

The second component of the system is a reader tool, which can be deployed as a browser extension, for checking certificates and matching the hashes to offer the viewer what Microsoft calls “a high degree of accuracy” that a particular piece of content is authentic/hasn’t been changed.
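Conceptually, the producer-side hashing and reader-side verification described above can be sketched as follows. Microsoft has not published the format here, so the hash algorithm, the HMAC-based “certificate,” and every field name below are assumptions chosen to keep the sketch self-contained, not the real system:

```python
import hashlib
import hmac
import json

# Hypothetical sketch of the producer/reader flow described above.
# SHA-256 plus an HMAC "certificate" stand in for whatever hash and
# signature scheme the real system uses; a production design would use
# public-key signatures so readers don't need the producer's secret.

PUBLISHER_KEY = b"publisher-secret"  # placeholder for a real signing key

def certify(content: bytes, producer: str) -> dict:
    """Producer side: hash the media and attach a signed certificate."""
    digest = hashlib.sha256(content).hexdigest()
    cert = {"producer": producer, "sha256": digest}
    payload = json.dumps(cert, sort_keys=True).encode()
    cert["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return cert

def verify(content: bytes, cert: dict) -> bool:
    """Reader side: check the certificate, then re-hash the media
    and compare it against the certified digest."""
    unsigned = {k: v for k, v in cert.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["signature"]):
        return False  # certificate itself was tampered with
    return hashlib.sha256(content).hexdigest() == cert["sha256"]
```

The point of the two-step check is that any alteration to the media changes its hash and fails the comparison, while any alteration to the certificate (say, swapping the producer name) invalidates the signature.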

The certification will also provide the viewer with details about who produced the media.

Microsoft is hoping this digital watermarking authenticity system will end up underpinning a Trusted News Initiative announced last year by UK publicly funded broadcaster, the BBC — specifically for a verification component, called Project Origin, which is led by a coalition of the BBC, CBC/Radio-Canada, Microsoft and The New York Times.

It says the digital watermarking tech will be tested by Project Origin with the aim of developing it into a standard that can be adopted broadly.

“The Trusted News Initiative, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies,” Microsoft adds.

While work on technologies to identify deepfakes continues, its blog post also emphasizes the importance of media literacy — flagging a partnership with the University of Washington, Sensity and USA Today aimed at boosting critical thinking ahead of the US election.

This partnership has launched a Spot the Deepfake Quiz for voters in the US to “learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy”, as it puts it.

The interactive quiz will be distributed across web and social media properties owned by USA Today, Microsoft and the University of Washington and through social media advertising, per the blog post.

The tech giant also notes that it’s supporting a public service announcement (PSA) campaign in the US encouraging people to take a “reflective pause” and check to make sure information comes from a reputable news organization before they share or promote it on social media ahead of the upcoming election.

“The PSA campaign will help people better understand the harm misinformation and disinformation have on our democracy and the importance of taking the time to identify, share and consume reliable information. The ads will run across radio stations in the United States in September and October,” it adds.

Technologists: Consider Canada

By Walter Thompson
Tim Bray Contributor
Tim Bray is a software technologist based in Vancouver, B.C. and a former vice president and Distinguished Engineer at Amazon Web Services.
Iain Klugman Contributor
Iain Klugman is CEO of Communitech, an innovation hub in Waterloo, Ontario, and a signatory to the Tech for Good Declaration.

America’s technology industry, radiating brilliance and profitability from its Silicon Valley home base, was until recently a shining beacon of what made America great: Science, progress, entrepreneurship. But public opinion has swung against big tech amazingly fast and far; negative views doubled between 2015 and 2019 from 17% to 34%. The list of concerns is long and includes privacy, treatment of workers, marketplace fairness, the carnage among ad-supported publications and the poisoning of public discourse.

But there’s one big issue behind all of these: An industry ravenous for growth, profit and power, that has failed at treating its employees, its customers and the inhabitants of society at large as human beings. Bear in mind that products, companies and ecosystems are built by people, for people. They reflect the values of the society around them, and right now, America’s values are in a troubled state.

We both have a lot of respect and affection for the United States, birthplace of the microprocessor and the electric guitar. We could have pursued our tech careers there, but we’ve declined repeated invitations and chosen to stay at home here in Canada. If you want to build technology to be harnessed for equity, diversity and social advancement of the many, rather than freedom and inclusion for the few, we think Canada is a good place to do it.

U.S. big tech is correctly seen as having too much money, too much power and too little accountability. Those at the top clearly see the best effects of their innovations, but rarely the social costs. They make great things — but they also disrupt lives, invade privacy and abuse their platforms.

We both came of age at a time when tech aspired to something better, and so did some of today’s tech giants. Four big tech CEOs recently testified in front of Congress. They were grilled about alleged antitrust abuses, although many of us watching were thinking about other ills associated with some of these companies: tax avoidance, privacy breaches, data mining, surveillance, censorship, the spread of false news, toxic byproducts, disregard for employee welfare.

But the industry’s problem isn’t really the products themselves — or the people who build them. Tech workers tend to be dramatically more progressive than the companies they work for, as Facebook staff showed in their recent walkout over President Donald Trump’s posts.

Big tech’s problem is that it amplifies the issues Americans are struggling with more broadly. That includes economic polarization, which is echoed in big-tech financial statements, and the race politics that prevent tech (among other industries) from being more inclusive to minorities and talented immigrants.

We’re particularly struck by the Trump administration’s recent moves to deny opportunities to H-1B visa holders. Coming after several years of family separations, visa bans and anti-immigrant rhetoric, it seems almost calculated to send IT experts, engineers, programmers, researchers, doctors, entrepreneurs and future leaders from around the world — the kind of talented newcomers who built America’s current prosperity — fleeing to more receptive shores.

One of those shores is Canada’s; that’s where we live and work. Our country has long courted immigration, but it’s turned around its longstanding brain-drain problem in recent years with policies designed to scoop up talented people who feel uncomfortable or unwanted in America. We have an immigration program, the Global Talent Stream, that helps innovative companies fast-track foreign workers with specialized skills. Cities like Toronto, Montreal, Waterloo and Vancouver have been leading North America in tech job creation during the Trump years, fuelled by outposts of the big international tech companies but also by scaled-up domestic firms that do things the Canadian way, such as enterprise software developer OpenText (one of us is a co-founder) and e-commerce giant Shopify.

“Canada is awesome. Give it a try,” Shopify CEO Tobi Lütke told disaffected U.S. tech workers on Twitter recently.

But it’s not just about policy; it’s about underlying values. Canada is exceptionally comfortable with diversity, in theory (as expressed in immigration policy) and practice (just walk down a street in Vancouver or Toronto). We’re not perfect, but we have been competently led and reasonably successful in recognizing the issues we need to deal with. And our social contract is more cooperative and inclusive.

Yes, that means public health care with no copays, but it also means more emphasis on sustainability, corporate responsibility and a more collaborative strain of capitalism. Our federal and provincial governments have mostly been applauded for their gusher of stimulative wage subsidies and grants meant to sustain small businesses and tech talent during the pandemic, whereas Washington’s response now appears to have been formulated in part to funnel public money to elites.

American big tech today feels morally adrift, which leads to losing out on talented people who want to live the values Silicon Valley used to stand for — not just wealth, freedom and the few, but inclusivity, diversity and the many. Canada is just one alternative to the U.S. model, but it’s the alternative we know best and the one just across the border, with loads of technology job openings.

It wouldn’t surprise us if more tech refugees find themselves voting with their feet.

US tech needs a pivot to survive

By Walter Thompson
James Stranko Contributor
James Stranko is a writer and independent advisor to American tech companies expanding abroad. He was on the founding team of Fuel, McKinsey’s practice serving VC firms and pre-IPO tech leaders.
Daire Hickey Contributor
Daire Hickey is managing partner of 150Bond, a strategic advisory firm based between New York and Dublin, and co-founder of Web Summit.

Last month, American tech companies were dealt two of the most consequential legal decisions they have ever faced. Both of these decisions came from thousands of miles away, in Europe. While companies are spending time and money scrambling to understand how to comply with a single decision, they shouldn’t miss the broader ramification: Europe has different operating principles from the U.S., and is no longer passively accepting American rules of engagement on tech.

In the first decision, Apple objected to and was spared a $15 billion tax bill the EU said was due to Ireland, while the European Commission’s most vocal anti-tech crusader Margrethe Vestager was dealt a stinging defeat. In the second, and much more far-reaching decision, Europe’s courts struck a blow at a central tenet of American tech’s business model: data storage and flows.

American companies have spent decades bundling stores of user data and convincing investors of its worth as an asset. In Schrems, Europe’s highest court ruled that masses of free-flowing user data are, instead, an enormous liability, sowing doubt about the future of the main method companies use to transfer data across the Atlantic.

On the surface, this decision appears to be about data protection. But there is a choppier undertow of sentiment swirling in legislative and regulatory circles across Europe. Namely that American companies have amassed significant fortunes from Europeans and their data, and governments want their share of the revenue.

What’s more, the fact that European courts handed victory to an individual citizen while also handing defeat to one of the commission’s senior leaders shows European institutions are even more interested in protecting individual rights than they are in propping up commission positions. This particular dynamic bodes poorly for the lobbying and influence strategies that many American companies have pursued in their European expansion.

After the Schrems ruling, companies will scramble to build legal teams and data centers that can comply with the court’s decision. They will spend large sums of money on pre-built solutions or cloud providers that can deliver a quick and seamless transition to the new legal reality. What companies should be doing, however, is building a comprehensive understanding of the political, judicial and social realities of the European countries where they do business — because this is just the tip of the iceberg.

American companies need to show Europeans — regularly and seriously — that they do not take their business for granted.

Europe is an afterthought no more

For many years, American tech companies have treated Europe as a market that required minimal, if any, meaningful adaptations for success. If an early-stage company wanted to gain market share in Germany, it would translate its website, add a notice about cookies and find a convenient way to transact in euros. Larger companies wouldn’t add many more layers of complexity to this strategy; perhaps they would establish a local sales office with a European from HQ, hire a German with experience in U.S. companies or sign a local partnership that could help them distribute or deliver their product. Europe, for many small and medium-sized tech firms, was little more than a bigger Canada in a tougher time zone.

Only the largest companies would go to the effort of setting up public policy offices in Brussels, or meaningfully try to understand the noncommercial issues that could affect their license to operate in Europe. The Schrems ruling shows how this strategy isn’t feasible anymore.

American tech companies must invest in understanding European political realities the same way they do in emerging markets like India, Russia or China, where U.S. tech companies go to great lengths to adapt products to local laws or pull out where they cannot comply. Europe is not just the European Commission, but rather 27 different countries that vote and act on different interests at home and in Brussels.

Governments in Beijing or Moscow refused to accept a reality of U.S. companies setting conditions for them from the outset. After underestimating Europe for years, American companies now need to dedicate headspace to considering how business is materially affected by Europe’s different views on data protection, commerce, taxation and other issues.

This is not to say that American and European values on the internet differ as dramatically as they do with China’s values, for instance. But Europe, from national governments to the EU and to courts, is making it clear that it will not accept a reality where U.S. companies assume that they have license to operate the same way they do at home. Where U.S. companies expect light taxation, European governments expect revenue for economic activity. Where U.S. companies expect a clear line between state and federal legislation, Europe offers a messy patchwork of national and international regulation. Where U.S. companies expect that their popularity alone is proof that consumers consent to looser privacy or data protection, Europe reminds them that (across the pond) the state has the last word on the matter.

Many American tech companies understand their commercial risks inside and out but are not prepared for managing the risks that are out of their control. From reputation risk to regulatory risk, they can no longer treat Europe as a like-for-like market with the U.S., and the winners will be those companies that can navigate the legal and political changes afoot. Having a Brussels strategy isn’t enough. Instead, American companies will need to build deeper influence in the member states where they operate. Specifically, they will need to communicate their side of the argument early and often to a wider range of potential allies, from local and national governments in markets where they operate to civil society activists like Max Schrems.

The world’s offline differences are obvious, and the time when we could pretend that the internet erased them rather than magnified them is quickly ending.

First US apps based on Google and Apple Exposure Notification System expected in ‘coming weeks’

By Darrell Etherington

Google Vice President of Engineering Dave Burke provided an update about the Exposure Notifications System (ENS) that Google developed in partnership with Apple, as a way to help public health authorities supplement contact tracing efforts with a connected solution that preserves privacy while alerting people of potential exposure to confirmed cases of COVID-19. In the update, Burke notes that the company expects “to see the first set of these apps roll out in the coming weeks” in the U.S., which may be a tacit response to some critics who have pointed out that we haven’t seen much in the way of actual products being built on the technology that was launched in May.

Burke writes that 20 states and territories across the U.S. are currently “exploring” apps that make use of the ENS system, and that together those represent nearly half (45%) of the overall American populace. He also shared recent updates and improvements made to both the Exposure Notification API, as well as to its surrounding documentation and information that the companies have shared in order to answer questions state health agencies have had, and hopefully make its use and privacy implications more transparent.

The ENS API now supports exposure notifications between countries, which Burke says is a feature added based on nations that have already launched apps based on the tech (that includes Canada, as of today, as well as some European nations). It’s also now better at using Bluetooth values specific to a wider range of devices to improve nearby device detection accuracy. He also says that they’ve improved the reliability for both apps and debugging tools for those working on development, which should help public health authorities and their developer partners more easily build apps that actually use ENS.

Burke continues that developers have asked for more detail about how ENS works under the covers, so the companies have published public-facing guides that walk health authorities through test verification server creation, code revealing its underlying workings, and information about what data is actually collected (in a de-identified manner), allowing for much more transparent debugging and verification of proper app functioning.

Google also explains why it requires that an Android device’s location setting be turned on to use Exposure Notifications – even though apps built using the API are explicitly forbidden from collecting location data. Basically, it’s a legacy requirement that Google is removing in Android 11, which is set to be released soon. In the meantime, however, Burke says that even with location services turned off, no app that uses the ENS will actually be able to see or receive any location data.
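The privacy-preserving matching that ENS performs can be sketched in simplified form: each phone derives short-lived rolling identifiers from a daily key and broadcasts them over Bluetooth, and matching against published diagnosis keys happens entirely on-device. The real protocol (per the Apple/Google cryptography specification) derives Rolling Proximity Identifiers from a Temporary Exposure Key using HKDF and AES; the HMAC derivation below is a stand-in so the sketch stays stdlib-only, and the interval count and ID length are illustrative:

```python
import hashlib
import hmac

# Simplified sketch of the on-device exposure matching idea behind ENS.
# HMAC-SHA256 truncated to 16 bytes stands in for the spec's HKDF/AES
# derivation; this is NOT the actual Apple/Google construction.

def rolling_ids(exposure_key: bytes, intervals: int = 144) -> set:
    """Derive one short identifier per 10-minute interval of a day."""
    return {
        hmac.new(exposure_key, interval.to_bytes(4, "little"),
                 hashlib.sha256).digest()[:16]
        for interval in range(intervals)
    }

def check_exposure(observed: set, diagnosis_keys: list) -> bool:
    """On-device matching: re-derive identifiers from the published
    diagnosis keys and intersect them with the identifiers this phone
    overheard over Bluetooth. No location or identity is involved."""
    return any(rolling_ids(key) & observed for key in diagnosis_keys)
```

Because only the daily keys of confirmed cases are ever published, and the broadcast identifiers cannot be linked back to a person without those keys, the server never learns who was near whom — which is the privacy property the article describes.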
