Nvidia and VMware team up to make GPU virtualization easier

By Frederic Lardinois

Nvidia today announced that it has been working with VMware to bring its virtual GPU technology (vGPU) to VMware’s vSphere and VMware Cloud on AWS. The company’s core vGPU technology isn’t new, but it now supports server virtualization to enable enterprises to run their hardware-accelerated AI and data science workloads in environments like VMware’s vSphere, using its new vComputeServer technology.

Traditionally (as far as that’s a thing in AI training), GPU-accelerated workloads have tended to run on bare-metal servers, which are typically managed separately from the rest of a company’s servers.

“With vComputeServer, IT admins can better streamline management of GPU accelerated virtualized servers while retaining existing workflows and lowering overall operational costs,” Nvidia explains in today’s announcement. This also means that businesses will reap the cost benefits of GPU sharing and aggregation, thanks to the improved utilization this technology promises.
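To make the utilization point concrete, here is a minimal sketch, in Python, of what GPU sharing and aggregation buy: several lightly loaded virtualized workloads are packed onto shared physical GPUs instead of each reserving a whole card. The VM names, GPU fractions and greedy packing logic are invented for illustration; this is not Nvidia’s actual vComputeServer API.

```python
# Hypothetical sketch of the utilization win from GPU sharing.
# Nothing here is the vComputeServer API; names and numbers are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    gpu_fraction: float  # share of one physical GPU this workload needs

def pack_onto_gpus(vms, gpu_capacity=1.0):
    """Greedy first-fit packing of VM GPU demands onto physical GPUs."""
    free = []        # remaining capacity of each physical GPU
    placement = {}   # VM name -> index of the GPU it landed on
    for vm in sorted(vms, key=lambda v: v.gpu_fraction, reverse=True):
        for i, remaining in enumerate(free):
            if vm.gpu_fraction <= remaining:
                free[i] -= vm.gpu_fraction
                placement[vm.name] = i
                break
        else:  # no existing GPU has room; allocate a new one
            free.append(gpu_capacity - vm.gpu_fraction)
            placement[vm.name] = len(free) - 1
    return placement, len(free)

vms = [VM("inference-a", 0.25), VM("notebook-b", 0.25),
       VM("training-c", 1.0), VM("etl-d", 0.5)]
placement, gpu_count = pack_onto_gpus(vms)
print(placement)                 # which GPU each VM shares
print(gpu_count, "GPUs in use")  # 2 shared GPUs instead of 4 dedicated ones
```

On bare metal, the same four workloads would each pin a full card; sharing halves the hardware in this toy scenario, which is the kind of utilization improvement the announcement is pointing at.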

Note that vComputeServer works with VMware vSphere, vCenter and vMotion, as well as VMware Cloud. Indeed, the two companies are using the same vComputeServer technology to also bring accelerated GPU services to VMware Cloud on AWS. This allows enterprises to move their containerized applications from their own data center to the cloud as needed — and then hook into AWS’s other cloud-based technologies.

“From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Nvidia founder and CEO Jensen Huang. “Together with VMware, we’re designing the most advanced and highest performing GPU-accelerated hybrid cloud infrastructure to foster innovation across the enterprise.”

Minecraft to get big lighting, shadow and color upgrades through Nvidia ray tracing

By Darrell Etherington

Minecraft is getting a free update that brings much-improved lighting and color to the game’s blocky graphics using real-time ray tracing running on Nvidia GeForce RTX graphics hardware. The new look is a dramatic change in the atmospherics of the game, and manages to be eerily realistic while retaining Minecraft’s pixelated charm.

The ray tracing tech will be available via a free update to the game on Windows 10 PCs, but it’ll only be accessible to players using an Nvidia GeForce RTX GPU, since that’s the only graphics hardware on the market that currently supports playing games with real-time ray tracing active.

It sounds like it’ll be an excellent addition to the experience for players who are equipped with the right hardware, however – including lighting effects not only from the sun, but also from in-game materials like glowstone and lava; both hard and soft shadows, depending on the transparency of the material and the angle of light refraction; and accurate reflections in surfaces that are supposed to be reflective (gold blocks, for instance).
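For readers curious what that means mechanically, below is a minimal sketch of two of the effects listed above, a shadow ray test (is the light occluded by geometry?) and a mirror reflection, written in plain Python. The single-sphere scene and vector helpers are illustrative assumptions, not Minecraft’s actual renderer.

```python
# Toy ray-tracing primitives: shadow rays and mirror reflections.
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def normalize(a):
    n = math.sqrt(dot(a, a))
    return scale(a, 1.0 / n)

def hits_sphere(origin, direction, center, radius):
    """True if the ray origin + t*direction hits the sphere (t > 0)."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is normalized, so a == 1
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-6  # small epsilon avoids self-intersection

def in_shadow(point, light_pos, occluder_center, occluder_radius):
    """Shadow ray: cast from the surface point toward the light."""
    to_light = normalize(sub(light_pos, point))
    return hits_sphere(point, to_light, occluder_center, occluder_radius)

def reflect(incoming, normal):
    """Mirror reflection of an incoming direction about a surface normal."""
    return sub(incoming, scale(normal, 2.0 * dot(incoming, normal)))

# A ground point, a light overhead, and a sphere between them:
print(in_shadow((0, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))  # True: hard shadow
print(reflect(normalize((1, -1, 0)), (0, 1, 0)))          # ray bounces upward
```

Soft shadows come from repeating the shadow test toward many points across an area light and averaging the hits; a single point light, as here, gives the hard-edged case.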

This is welcome news after Minecraft developer Mojang announced last week that it had cancelled plans to release its Super Duper Graphics Pack, which was going to add a bunch of improved visuals to the game, because it wouldn’t work well across platforms. At the time, Mojang said it would be sharing news about graphics optimization for some platforms “very soon,” and it looks like this is what it had in mind.

Nvidia, meanwhile, is showing off a range of 2019 games with real-time ray tracing enabled at Gamescom 2019 in Cologne, Germany, including Dying Light 2, Cyberpunk 2077, Call of Duty: Modern Warfare and Watch Dogs: Legion.

The renaissance of silicon will create industry giants

By David Riggs

Navin Chaddha, Contributor
Navin Chaddha leads Mayfield. The firm invests in early-stage consumer and enterprise technology companies and currently has $2.7 billion under management.

Every time we binge on Netflix or install a new internet-connected doorbell in our home, we’re adding to a tidal wave of data. In just 10 years, bandwidth consumption has increased 100-fold, and it will only grow as we layer on the demands of artificial intelligence, virtual reality, robotics and self-driving cars. According to Intel, a single robocar will generate 4 terabytes of data in 90 minutes of driving. That’s more than 3 billion times the amount of data people use chatting, watching videos and engaging in other internet pastimes over a similar period.

Tech companies have responded by building massive data centers full of servers. But growth in data consumption is outpacing even the most ambitious infrastructure build-outs. The bottom line: We’re not going to meet the increasing demand for data processing by relying on the same technology that got us here.

The key to data processing is, of course, semiconductors, the transistor-filled chips that power today’s computing industry. For the last several decades, engineers have been able to squeeze more and more transistors onto smaller and smaller silicon wafers — an Intel chip today squeezes more than 1 billion transistors on a millimeter-sized piece of silicon.

This trend is commonly known as Moore’s Law, for the Intel co-founder Gordon Moore and his famous 1965 observation that the number of transistors on a chip doubles every year (later revised to every two years), thereby doubling the speed and capability of computers.
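As a quick worked example of that doubling rule: starting from the well-known figure of roughly 2,300 transistors on Intel’s 1971 4004 and doubling every two years gives the trajectory below. The later counts are what the rule predicts, not measured chip data.

```python
def transistors(year, base_count=2300, base_year=1971, period=2):
    """Moore's Law projection: double the count every `period` years."""
    return base_count * 2 ** ((year - base_year) / period)

for year in (1971, 1991, 2011):
    print(year, f"{transistors(year):,.0f}")
# 1971          2,300
# 1991      2,355,200   (chips of the 486 era were indeed in the millions)
# 2011  2,411,724,800   (flagship chips did reach the billions)
```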

This exponential growth of power on ever-smaller chips has reliably driven our technology for the past 50 years or so. But Moore’s Law is coming to an end, due to an even more immutable law: material physics. It simply isn’t possible to squeeze more transistors onto the tiny silicon wafers that make up today’s processors.

Compounding matters, the general-purpose chip architecture in wide use today, known as x86, which has brought us to this point, isn’t optimized for computing applications that are now becoming popular.

That means we need a new computing architecture. Or, more likely, multiple new computing architectures. In fact, I predict that over the next few years we will see a flowering of new silicon architectures and designs that are built and optimized for specialized functions, including data intensity, the performance needs of artificial intelligence and machine learning, and the low-power needs of so-called edge computing devices.

The new architects

We’re already seeing the roots of these newly specialized architectures on several fronts. These include Graphics Processing Units from Nvidia, Field Programmable Gate Arrays from Xilinx and Altera (acquired by Intel), smart network interface cards from Mellanox (acquired by Nvidia) and a new category of programmable processor called a Data Processing Unit (DPU) from Fungible, a startup Mayfield invested in. DPUs are purpose-built to run all data-intensive workloads (networking, security, storage), and Fungible combines them with a full-stack platform for cloud data centers that works alongside the old workhorse CPU.

These and other purpose-designed chips will become the engines for one or more workload-specific applications — everything from security to smart doorbells to driverless cars to data centers. And there will be new players in the market to drive these innovations and their adoption. In fact, over the next five years, I believe we’ll see entirely new semiconductor leaders emerge as these services grow and their performance becomes more critical.

Let’s start with the computing powerhouses of our increasingly connected age: data centers.

More and more, storage and computing are being done at the edge; that is, closer to where our devices need them. That includes things like the facial recognition software in our doorbells or in-cloud gaming that’s rendered on our VR goggles. Edge computing allows these and other processes to happen within 10 milliseconds or less, which is what makes them work for end users.

With the current arithmetic-focused x86 CPU architecture, deploying data services at scale can be a challenge. Driverless cars need massive, data-center-level agility and speed. You don’t want a car buffering when a pedestrian is in the crosswalk. As our workload infrastructure — and the needs of things like driverless cars — becomes ever more data-centric (storing, retrieving and moving large data sets across machines), it requires a new kind of microprocessor.

Another area that requires new processing architectures is artificial intelligence, both in training AI and running inference (the process AI uses to infer things about data, like a smart doorbell recognizing the difference between an in-law and an intruder). Graphics Processing Units (GPUs), which were originally developed to handle gaming, have proven faster and more efficient at AI training and inference than traditional CPUs.

But in order to process AI workloads (both training and inference) for image classification, object detection, facial recognition and driverless cars, we will need specialized AI processors. The math needed to run these algorithms requires vector processing and floating-point computations at dramatically higher performance than general-purpose CPUs provide.
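A small sketch of why that math favors specialized vector hardware: the same multiply-accumulate workload written as a scalar loop versus one vectorized call. NumPy stands in here for the wide vector and matrix units that GPUs and AI accelerators provide; the matrix size is arbitrary.

```python
# Scalar loop vs. vectorized multiply-accumulate on the same data.
import time
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512)).astype(np.float32)
b = rng.standard_normal((512, 512)).astype(np.float32)

def matmul_scalar(a, b):
    """One output element at a time, as a general-purpose core might."""
    n, k, m = a.shape[0], a.shape[1], b.shape[1]
    out = np.zeros((n, m), dtype=np.float32)
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                acc += a[i, p] * b[p, j]
            out[i, j] = acc
    return out

t0 = time.perf_counter()
c = a @ b  # dispatched to an optimized vector kernel
print(f"vectorized: {(time.perf_counter() - t0) * 1e3:.1f} ms")

# The scalar version computes the same result but takes minutes in
# pure Python; uncomment to compare on your machine:
# t0 = time.perf_counter()
# matmul_scalar(a, b)
# print(f"scalar loop: {time.perf_counter() - t0:.0f} s")
```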

Several startups are working on AI-specific chips, including SambaNova, Graphcore and Habana Labs. These companies have built new AI-specific chips for machine intelligence. They lower the cost of accelerating AI applications and dramatically increase performance. Conveniently, they also provide a software platform for use with their hardware. Of course, the big AI players like Google (with its custom Tensor Processing Unit chips) and Amazon (which has created an AI chip for its Echo smart speaker) are also creating their own architectures.

Finally, we have the proliferation of connected gadgets, also known as the Internet of Things (IoT). Many of our personal and home tools (such as thermostats, smoke detectors, toothbrushes and toasters) operate on ultra-low power.

ARM processors, a family of CPUs, will be tasked with these roles, because these gadgets don’t require computing complexity or much power. The ARM architecture is perfectly suited to them: it handles a smaller set of computing instructions, can operate at high speeds (churning through many millions of instructions per second) and does so at a fraction of the power required for performing complex instructions. I even predict that ARM-based server microprocessors will finally become a reality in cloud data centers.
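A toy model of that power argument, with loudly invented numbers (they are not measured figures for any real ARM or x86 part): if a gadget’s workload needs only a modest instruction rate, the low-power core wins on battery life even though the big core is far faster.

```python
# All figures below are assumptions for illustration only.
workload_mips = 200                # what a hypothetical IoT gadget needs
cores = {
    "small low-power core": (1_000, 0.5),    # (MIPS, watts), assumed
    "big complex core":     (20_000, 15.0),  # (MIPS, watts), assumed
}

battery_wh = 10  # a small 10 Wh battery
for name, (mips, watts) in cores.items():
    assert mips >= workload_mips  # both cores are fast enough...
    hours = battery_wh / watts
    print(f"{name}: {hours:.1f} h of battery")
# ...but only the low-power core keeps the battery alive for ~a day.
```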

So with all the new work being done in silicon, we seem to be finally getting back to our original roots. I commend the entrepreneurs who are putting the silicon back into Silicon Valley. And I predict they will create new semiconductor giants.

UPS takes minority stake in self-driving truck startup TuSimple

By Kirsten Korosec

UPS said Thursday it has taken a minority stake in self-driving truck startup TuSimple just months after the two companies began testing the use of autonomous trucks in Arizona.

The size of the minority investment, which was made by the company’s venture arm UPS Ventures, was not disclosed. The investment and the testing come as UPS looks for new ways to remain competitive, cut costs and boost its bottom line.

TuSimple, which launched in 2015 and has operations in San Diego and Tucson, Arizona, believes it can deliver. The startup says it can cut average purchased transportation costs by 30%.

TuSimple, which is backed by Nvidia, ZP Capital and Sina Corp., is working on a “full-stack solution,” a wonky industry term that means developing and bringing together all of the technological pieces required for autonomous driving. TuSimple is developing a Level 4 system, a designation by the SAE that means the vehicle takes over all of the driving in certain conditions.

An important piece of TuSimple’s approach is its camera-centric perception solution. TuSimple’s camera-based system has a vision range of 1,000 meters, the company says.
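Some rough arithmetic on why that range matters for heavy trucks; the speed and deceleration figures below are assumptions for illustration, not TuSimple specifications.

```python
# Assumed figures: ~100 km/h highway speed, gentle braking for a
# loaded trailer. Neither number comes from TuSimple.
speed_mps = 28.0     # ~100 km/h
decel_mps2 = 2.0     # comfortable deceleration for a heavy truck

braking_distance = speed_mps ** 2 / (2 * decel_mps2)
lookahead_s = 1000.0 / speed_mps

print(f"braking distance: {braking_distance:.0f} m")                # ~196 m
print(f"1,000 m of vision buys ~{lookahead_s:.0f} s of lookahead")  # ~36 s
```

Seeing a kilometer ahead leaves a large margin over the stopping distance, which is why long-range perception is pitched as a fit for trucking.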

The days when highways will be filled with autonomous trucks are still years away. But UPS believes it’s worth jumping in at an early stage to take advantage of some of the automated driving features, such as advanced braking technology, that TuSimple can offer today.

“UPS is committed to developing and deploying technologies that enable us to operate our global logistics network more efficiently,” Scott Price, chief strategy officer at UPS, said in a statement. “While fully autonomous, driverless vehicles still have development and regulatory work ahead, we are excited by the advances in braking and other technologies that companies like TuSimple are mastering. All of these technologies offer significant safety and other benefits that will be realized long before the full vision of autonomous vehicles is brought to fruition — and UPS will be there, as a leader implementing these new technologies in our fleet.”

UPS initially tapped TuSimple to help it better understand how Level 4 autonomous trucking might function within its network. That relationship expanded in May, when the companies began using self-driving tractor trailers to carry freight on a route between Tucson and Phoenix to test whether service and efficiency in the UPS network could be improved. This testing is ongoing. All of TuSimple’s self-driving trucks operating in the U.S. have a safety driver and an engineer in the cab.

TuSimple and UPS monitor all aspects of these trips, including safety data, transport time and the distance and time the trucks travel autonomously, the companies said Thursday.

UPS isn’t the only company TuSimple is hauling freight for as part of its testing; the startup has said it’s hauling loads for several customers in Arizona. TuSimple has a post-money valuation of $1.095 billion (aka unicorn status).

A guide to Virtual Beings and how they impact our world

By Eric Peckham

Money from big tech companies and top VC firms is flowing into the nascent “virtual beings” space. Mixing the opportunities presented by conversational AI, generative adversarial networks, photorealistic graphics and the creative development of fictional characters, “virtual beings” envisions a near future where characters (with personalities) that look and/or sound exactly like humans are part of our day-to-day interactions.

Last week in San Francisco, entrepreneurs, researchers, and investors convened for the first Virtual Beings Summit, where organizer and Fable Studio CEO Edward Saatchi announced a grant program. Corporates like Amazon, Apple, Google, and Microsoft are pouring resources into conversational AI technology, chip-maker Nvidia and the game engines Unreal and Unity are advancing real-time ray tracing for photorealistic graphics, and in my survey of media VCs, one of the most common interests was “virtual influencers.”

The term “virtual beings” gets used as a catch-all categorization of activities that overlap here. There are really three separate fields getting conflated though:

  1. Virtual Companions
  2. Humanoid Character Creation
  3. Virtual Influencers

These can overlap — there are humanoid virtual influencers for example — but they represent separate challenges, separate business opportunities, and separate societal concerns. Here’s a look at these fields, including examples from the Virtual Beings Summit, and how they collectively comprise this concept of virtual beings:

Virtual companions

Virtual companions are conversational AIs that build a unique 1-to-1 relationship with us, whether to provide friendship or utility. A virtual companion has personality, gauges the personality of the user, retains memory of prior conversations, and uses all of that to converse with humans like a fellow human would. They seem to exist as their own beings even if we rationally understand they are not.
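As a minimal sketch of those ingredients, assuming nothing about any particular product: a persona, per-user memory, and replies conditioned on both. The reply logic is a stub where a real companion would invoke a conversational model.

```python
# Illustrative data structure only; the reply generation is stubbed.
from dataclasses import dataclass, field

@dataclass
class Companion:
    persona: str                                  # the character's personality
    memories: dict = field(default_factory=dict)  # user -> remembered facts

    def remember(self, user: str, fact: str) -> None:
        self.memories.setdefault(user, []).append(fact)

    def reply(self, user: str, message: str) -> str:
        recalled = self.memories.get(user, [])
        context = f"persona={self.persona!r}; recalls={recalled}"
        # Stub: a real system would generate text from `context` and
        # `message` with a conversational model.
        return f"[{context}] responding to: {message!r}"

bot = Companion(persona="upbeat, curious")
bot.remember("ana", "training for a marathon")
print(bot.reply("ana", "I'm exhausted today."))
```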

Virtual companions can exist across 4 formats:

  1. Physical presence (Robotics)
  2. Interactive visual media (social media, gaming, AR/VR)
  3. Text-based messaging
  4. Interactive voice

While pop culture depictions of this include Her and Ex Machina, nascent real-world examples are virtual friend bots like Hugging Face and Replika, as well as voice assistants like Amazon’s Alexa and Apple’s Siri. The products currently on the market aren’t yet sophisticated conversationalists or adept at engaging with us as emotional creatures, but they may not be far off from that.

Tesla reportedly working on its own battery cell manufacturing capability

By Darrell Etherington

Automaker Tesla is looking into how it might own another key part of its supply chain, through research being done at a secret lab near its Fremont, CA headquarters, CNBC reports. The company currently relies on Panasonic to build the battery packs and cells it uses for its vehicles, which is one of the most significant components in its overall bill of materials, if not the most significant.

Tesla is no stranger to owning components of its own supply chain rather than farming them out to vendors, as is more common among automakers – it builds its own seats at a facility down the road from its Fremont car factory, for instance, and it recently started building its own chip for its autonomous features, taking over those duties from Nvidia.

Eliminating links in the chain where possible is a move that emulates Apple, an inspiration for Tesla CEO Elon Musk; under Steve Jobs, Apple adopted an aggressive strategy of taking control of key parts of its supply mix, and it continues to do so where it can eke out improvements to component cost. Musk has repeatedly pointed out that batteries are a primary constraint on Tesla’s ability to produce not only its cars, but also its home power products like the Powerwall consumer domestic battery for solar energy systems.

Per the CNBC report, Tesla is doing its battery research at an experimental lab near its factory in Fremont, at a property it maintains on Kato Road. Tesla would need much more time and effort to turn its battery ambitions into production at the scale it requires, however, so don’t expect it to replace Panasonic anytime soon. In fact, the report says Tesla could add LG as a supplier in addition to Panasonic once its Shanghai factory starts producing Model 3s.
