NVIDIA’S GTC 2017: A.I. – ARTIFICIAL INTELLIGENCE IS HERE AND THE ROBOTS ARE SCORING GOALS…

CEO Jensen Huang of Nvidia addresses the crowd at the GTC 2017 conference at the San Jose Convention Center. Photo by Marcus Siu.

Article and photos by Marcus Siu

SAN JOSE, MAY 10, 2017 – The grandiose trailer opens with a female narrator who introduces herself as a “visionary”, “healer”, “protector”, “helper”, “navigator”, “creator”, “teacher”, “learner” and even the composer of the powerful orchestral soundtrack. She is “A.I.”.

From that intro, the crowd knew from the very beginning that co-founder and CEO Jensen Huang’s keynote at NVIDIA’s GTC 2017 conference was going to be not only very special, but groundbreaking and mind-bending, with a glimpse of the not-too-distant future of Artificial Intelligence.

A.I. was the dominant theme at this year’s conference: of the 600 talks, 300 were about Artificial Intelligence.

This year, the rise of GPU computing is reflected in the number of GTC attendees. Attendance has increased threefold in the last five years, to 7,000. GPU developers have grown 11X over the same period to more than 500,000 today, and downloads of NVIDIA’s CUDA drivers and SDKs exceeded one million in the past year. This year, Nvidia has taken GTC global. All of the top 15 technology companies are here, as well as the top 10 car companies.

Huang opened his keynote with charts of Moore’s Law over the last 30 years, noting that advances in microprocessor design, aided by Dennard scaling (packing in more transistors while reducing voltage), improved microprocessor performance by a factor of a million. But both trends are now at the end of the road: what was once a 50 percent annual gain in performance has slowed to just 10 percent per year. That is why NVIDIA created a domain-specific accelerator that complements the CPU, along with CUDA, which many consider “Moore’s Law squared”: a speedup on top of natural processor performance gains.
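To put those growth rates in perspective, here is a quick back-of-the-envelope calculation (ours, not Huang’s) showing how differently they compound over a decade:

```python
# Compounding 50% annual performance gains versus 10% annual gains.
for rate in (0.50, 0.10):
    print(f"{rate:.0%} per year over 10 years: {(1 + rate) ** 10:.0f}x")
# Output:
# 50% per year over 10 years: 58x
# 10% per year over 10 years: 3x
```

A decade of the old curve buys nearly twenty times more performance than a decade of the new one, which is the gap accelerators like the GPU are meant to close.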

CEO Jensen Huang of Nvidia shows the virtual showroom for the latest hybrid car as four avatar onlookers gaze at the detail. Photo by Marcus Siu.

The first demo featured Project Holodeck, a photorealistic VR environment offering a shared experience for four avatars from around the world, including Huang’s “hyper-car-making friend” Christian from Sweden. Christian was showcasing his company’s new $1.9 million hybrid car, which he had just designed: carbon fiber, three electric motors with direct drive and no gears, a 1,200-horsepower combustion engine plus 680 horsepower of electric drive, and a run from 0 to 250 in 20 seconds. Huang asked Christian to climb into the interior of the car and grab the steering wheel. When he obliged, instead of his hands passing right through it, as we are so accustomed to in today’s VR world, Christian’s hands settled right on top of it, obeying the laws of physics.

All the inventory parts are shown in mid air as CEO Jensen Huang of Nvidia looks on. Photo by Marcus Siu.

When Huang asked to see all the parts of the car, thousands of components scattered and floated in mid-air before reassembling themselves back into the car’s original form.

GPU computing has entered the era of machine learning. Amazon’s “if you like this, you’ll like this” recommendations are a familiar example. Thanks to deep learning, powerful GPUs, and the tremendous amounts of data being generated, software now writes software and algorithms write algorithms. Unsupervised learning fills in the missing parts of the data, letting computers automate programming.
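As a toy illustration of that “if you like this, you’ll like this” idea (a sketch of item-based collaborative filtering in general, not Amazon’s actual system), a recommender can simply compare how similarly items are rated:

```python
import numpy as np

# Rows are items, columns are users; entries are ratings (0 = unrated).
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],   # item 0
    [4.0, 5.0, 1.0, 0.0],   # item 1: rated much like item 0
    [0.0, 1.0, 5.0, 4.0],   # item 2: liked by a different crowd
])

def most_similar(item: int, ratings: np.ndarray) -> int:
    """Return the item whose rating vector is closest (cosine) to `item`."""
    target = ratings[item]
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(target)
    sims = ratings @ target / (norms + 1e-9)
    sims[item] = -1.0            # never recommend the item itself
    return int(np.argmax(sims))

print(most_similar(0, ratings))  # prints 1: "you liked item 0, try item 1"
```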

A demo compared a pair of simulated Mercedes SLK350s in a landscape with trees and clouds, one rendered with deep learning-assisted ray tracing and one without. The difference was night and day, and with deep learning the ray-traced image rendered in just seconds.

The render without Deep Learning shows how much noise remains compared to the one with Deep Learning. Photo by Marcus Siu.
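That kind of denoising is typically a small convolutional network trained to turn noisy, low-sample renders into converged-looking images. Here is a minimal sketch of the idea (a generic residual denoiser on synthetic data, not NVIDIA’s actual ray-tracing denoiser):

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Predicts a residual correction for a noisy RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual learning: output = input + correction

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: in practice `noisy` would be low-sample-count ray-traced
# frames and `clean` their fully converged counterparts.
clean = torch.rand(8, 3, 64, 64)
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)

for step in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    optimizer.step()
```

Once trained on enough noisy/clean pairs, a network like this can clean up a seconds-old render instead of waiting minutes for the ray tracer to converge.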

Nvidia’s Research Group has deeply committed itself to A.I. There are 1,300 deep learning startups that get advice, technology, and marketing help from Nvidia, and sometimes even funding. Later in the afternoon on the day of the keynote, $1.5 million was handed out to six companies at the Inception Awards. Even at Huang’s alma mater, Stanford University, the most popular course is not Home Economics but “Intro to Machine Learning”, which is taken by students from all majors across campus.

Co-founders Jensen Huang and Chris Malachowsky congratulate recipients at the Inception Awards in San Jose, California. Photo by Marcus Siu

Nvidia’s biggest news was the announcement of its next-generation GPU, a roughly $3 billion R&D project called the Tesla V100, built on the new Volta architecture. It was designed from the ground up for deep learning, both training and inferencing.

Huang pointed to the soaring computing demands of models like Microsoft’s ResNet, Baidu’s Deep Speech 2, and Google’s NMT as the motivation. The chip is made on TSMC’s 12nm FFN process and carries 5,120 CUDA cores plus new Tensor Core processors that deliver 120 teraflops for deep learning.

Though it’s aimed mainly at A.I., Volta also produces unbelievable graphics, as shown in a keynote demo based on the Japanese sci-fi fantasy “Kingsglaive: Final Fantasy XV” by Square Enix. Photographic style transfer also uses deep learning, learning the style of one photograph and applying it to another in a matter of seconds.
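The keynote didn’t show code, but the classic deep-learning recipe for style transfer (the Gatys et al. method; Nvidia’s exact pipeline may differ) optimizes an image so its VGG features match the content photo while its feature correlations, the Gram matrices, match the style photo:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = (1, 6, 11, 20)  # relu1_1, relu2_1, relu3_1, relu4_1

def features(x):
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats.append(x)
        if i == max(LAYERS):
            break
    return feats

def gram(f):
    _, c, h, w = f.shape            # batch size 1 assumed
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)  # normalized feature correlations

content = torch.rand(1, 3, 128, 128)  # stand-in for the content photo
style = torch.rand(1, 3, 128, 128)    # stand-in for the style photo
image = content.clone().requires_grad_(True)
opt = torch.optim.Adam([image], lr=0.02)

content_feats = features(content)
style_grams = [gram(f) for f in features(style)]

for step in range(200):
    opt.zero_grad()
    feats = features(image)
    content_loss = F.mse_loss(feats[-1], content_feats[-1])
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
    (content_loss + 1e3 * style_loss).backward()
    opt.step()
```

This optimization approach takes minutes per image; the seconds-fast results Huang described come from GPUs and from networks trained to apply a style in a single forward pass.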

Huang announced that Volta will be found just about everywhere: in the cloud, in the data center with the Volta-powered DGX-1 supercomputer, and on the desktop in the personal DGX Station.

CEO Jensen Huang introduces The Volta. Photo by Marcus Siu

A.I. will soon be revolutionizing transportation. Huang pointed out that we need to find a way to automate so we can keep up with the “Amazon Effect”.

“Everything that moves someday will be augmented by autonomy. We’re enjoying the Amazon Effect. As we buy things, don’t forget we used to go to the store to pick up things but now we expect things to come to us. The number of truck drivers, transportation professionals can’t possibly keep up…”.

“Our environment would be better without parked cars… There are 800 million parking spots in America for only 250 million cars. That’s why we’ve created NVIDIA Drive, a full stack that will deliver full autonomy for cars.”

Nvidia now has more than 200 developers using DRIVE PX, a 50 percent increase in the past quarter alone. NVIDIA Drive can be used for mapping, as a co-pilot, and as your guardian angel, so even when the car isn’t driving you, it should be watching out for you. For mapping and driving, a video showed how a car makes an HD map and localizes itself within it: the car scans the road, detects its features, constructs an HD road map, and then locates itself within that map. In another video, even with the test pilot behind the steering wheel, the guardian angel warned the driver not to proceed because a car was coming across the intersecting road, running through a red light.

In addition to autonomous cars, Huang mentioned that some of their partners are working on autonomous planes and ships.

Nvidia announced that Toyota, one of the largest companies in the world, has selected its Drive PX platform for Toyota’s self-driving vehicles. The two engineering teams are working to create a fully autonomous car and put it on the road within the next few years.

CEO Jensen Huang of Nvidia announces a new partnership with Toyota, which has selected Nvidia’s Drive PX for its autonomous cars. Photo by Marcus Siu

In addition, the Xavier DLA, the deep learning accelerator module that goes inside Drive PX, will be open-sourced, making Nvidia’s best technology available to everyone. “Our goal is proliferation,” Huang stated.

Huang also discussed robots.

“The robot is the ultimate version of artificial intelligence. It’s going to revolutionize a whole slew of new industries from manufacturing to health care. We know that robotic surgery is able to perform surgeries that we simply can’t imagine. In the future, we’re going to have cybernetics. We’re going to have robots that are connected to parts of our body. We’re going to have tiny robots that are going to take care of various tasks.”

“Robots, unfortunately, are incredibly hard to do… It has to sense the world… it has to learn from it and plan and take action, but it has to interact with the world. With a self-driving car, a specific objective is collision avoidance. In the case of a robot, collision is essential – your goal is to connect, your goal is to collide… how you collide is very difficult to do…”

The first robot demo showed Ada, a robot at the Berkeley A.I. Laboratory that learned how to play hockey. Using reinforcement learning, the robot was trained to shoot an orange plastic puck into a small net repeatedly. It is learning in the real world.

“Hockey is not too bad… what if we wanted a robot to open a door, lift a car, or cooperate with a doctor to do surgery? There is no way we can have it learn this way repeatedly in the physical world.”

That is why an alternate universe is needed: one that obeys the laws of physics (when you choose), is visually photorealistic like the real world, and gives a robot the ability to learn inside it. The one gap with the real world is the speed at which it operates – we would need to train in it at warp speed, faster than real time.

Project Isaac, a virtual-environment robot simulator, was created for this reason. It’s named after two Isaacs: physics pioneer Isaac Newton and science-fiction author Isaac Asimov. Isaac takes the virtual environment as input; you can give the robot virtual sensors, actuators, and effectors, and it’s connected to the OpenAI Gym.

“If you’re going to train robots using A.I., they’re going to have to do a lot of work to do their tasks. Robots can learn effectively in this virtual environment and you can take what they’ve learned… and you can take this robot and put it in the real world.” Project Isaac, the first alternate-reality virtual robot simulator, makes it possible for robots to learn inside this virtual world and will hopefully bring the future of artificial intelligence and robotics to the real world.

The Isaac Robot simulator where the virtual robots learn before being put into the physical world. Photo by Marcus Siu.
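Because Isaac plugs into the OpenAI Gym interface, training against it looks like any other Gym program: observe, act, collect a reward, repeat. Here is a minimal sketch of that loop using the 2017-era Gym API, with the standard CartPole toy task and a random policy as stand-ins (Isaac’s own environments weren’t public at the time):

```python
import gym

env = gym.make("CartPole-v1")
for episode in range(5):
    obs = env.reset()              # initial observation from the simulator
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()  # a trained policy would act here
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print(f"episode {episode}: reward = {total_reward}")
env.close()
```

Swap the random action for a learned policy and run thousands of such episodes faster than real time, and you have the warp-speed training Huang described.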

Whether it’s Ada shooting hockey pucks into a net or Isaac sinking a sixty-foot putt on a golf course, for Nvidia’s latest architecture, it’s a score!!!
