With artificial intelligence powering the smartphones in our hands and the drones roaming our skies, NVIDIA CEO Jensen Huang unveiled plans to make all our devices more intelligent.
At the GPU Technology Conference in Tokyo, NVIDIA founder and CEO Jensen Huang announced the new NVIDIA Tesla T4 GPU and TensorRT software to enable intelligent voice, video, image and recommendation services.
The new NVIDIA TensorRT Hyperscale Inference Platform based on NVIDIA’s latest Turing GPU architecture was one of a rapid-fire series of announcements Huang rolled out, targeting the automotive, robotics, and healthcare industries in a two-hour talk that touched on nearly every aspect of rolling out AI on a global scale.
“There is no question that deep learning-powered AI is being deployed around the world, and we’re seeing incredible growth here,” Huang told an audience of more than 4,000 press, partners, academics and technologists gathered on the latest stop in a GTC world tour.
Powering the Next Computing Platform
Inferencing — a market that will grow to $20 billion over the next five years — refers to the task of putting trained neural networks into production.
“The number of applications that are now taking advantage of deep learning is growing exponentially — hyperscale data centers can’t run just one thing — they have to run everything,” Huang said.
NVIDIA’s T4 GPU and TensorRT software promise to process the queries that power such services faster than any competing platform, up to 40x faster than CPUs alone.
To help companies that support deep learning-powered web services with GPUs, Huang also introduced TensorRT Hyperscale, a new TensorRT feature that allows a GPU-powered server to run multiple deep learning models and frameworks concurrently.
“As a result, the usefulness and utilization goes way up,” Huang said. “If each node can run any model at the same time, then the utilization of this server will be maximum.”
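The idea Huang describes, one server node able to run any model at any time so the hardware stays fully utilized, can be sketched in plain Python. The model names and toy inference functions below are hypothetical stand-ins for illustration, not NVIDIA's actual TensorRT API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for trained models; a real inference server
# would load TensorRT engines (or other framework runtimes) here.
MODELS = {
    "speech": lambda x: f"speech:{x}",
    "vision": lambda x: f"vision:{x}",
    "recommend": lambda x: f"recommend:{x}",
}

def serve(requests):
    """Dispatch each (model_name, payload) request to a shared worker pool.

    Because every worker can run any model, the pool stays busy as long
    as any work remains -- the utilization point Huang makes above.
    """
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(MODELS[name], payload)
                   for name, payload in requests]
        return [f.result() for f in futures]

results = serve([("speech", "hello"), ("vision", "img0"), ("recommend", "u42")])
```

The contrast is with dedicating a node to one model: mixed request streams then leave some nodes idle while others queue, which is exactly the underutilization the shared-pool design avoids.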
Huang also announced NVIDIA AGX, a series of embedded AI high-performance computers built around NVIDIA’s new Xavier processors, the world’s first processors built for autonomous machines.
“This is the future brain of autonomous machines,” Huang declared.
Huang announced the availability of several developer kits that will let developers quickly put Xavier to work: the Jetson AGX Xavier devkit for autonomous machines such as robots and drones, and the DRIVE AGX Xavier devkit for autonomous vehicles.
Huang also announced that Yamaha Motor Co. has selected NVIDIA Jetson AGX Xavier as the development system for its upcoming line of autonomous machines.
“There are so many large industries where automation can boost productivity, where automation can make the tasks safer, can make the tasks more effective and more productive,” Huang said. “I’m very excited and very proud that Yamaha is standardizing on NVIDIA’s AGX architecture.”
Huang also announced that commercial vehicle manufacturer Isuzu is collaborating with NVIDIA to build the company’s AI technologies into its autonomous trucks.
Huang gave a shout-out to Fujifilm Corp., the first company in Japan to adopt the NVIDIA DGX-2 AI supercomputer, which it’s using to build a supercomputing cluster to accelerate the development of AI technology for fields such as healthcare and medical imaging systems.
Huang also announced telecom giant NTT Group will adopt NVIDIA’s AI platform for its company-wide “corevo” AI initiative.
Huang unveiled the NVIDIA Clara AGX — a revolutionary computing architecture based on the NVIDIA Xavier AI computing module and NVIDIA Turing GPUs — and the Clara software development kit for developers to create a wide range of AI-powered applications for processing data from medical devices.
All this news comes as top global automakers — such as Toyota — and key automotive components makers continue to wrap their ongoing autonomous vehicle development efforts around NVIDIA technologies.
Huang underscored the announcement of the DRIVE AGX Xavier devkit with a video of a wheeled NVIDIA Isaac-powered robot making use of a suite of advanced capabilities (sensor processing, mapping and localization, path and task planning, and perception) to navigate NVIDIA’s Silicon Valley headquarters, both in simulation and in the real world, and bring Huang his lunch.
It’s a demo that shows how closely simulation will be tied to the future of robotics. “Just imagine this tiny robot is just navigating around this really complex environment. There are no road signs — it just needs to figure out where it is and where it needs to go,” Huang said.
With the help of NVIDIA technologies, Huang is determined to help the world’s robots get to where they need to go, wherever they need to go.
To learn more, join us for GTC 2019 Silicon Valley in March.