Nvidia's Latest Breakthrough: The Future of Nvidia's New AI Is Here

Video Statistics and Information

Captions
In ten years' time we increased the throughput, we increased the scale, the overall throughput across all of that stack, by one million x. One million x in ten years. Just now, in the beginning, I showed you computer graphics: in five years we improved computer graphics by 1,000 times using artificial intelligence and accelerated computing. Using accelerated computing and artificial intelligence, we accelerated computer graphics by one thousand times in five years. Moore's Law is probably currently running at about two times. A thousand times in five years is one million times in ten. We're doing the same thing in artificial intelligence.

Ray tracing, simulating the characteristics of light and materials, is the ultimate accelerated computing challenge. Six years ago we demonstrated, for the very first time, rendering this scene in less than a few hours. After a decade of research we were able to render this scene in seconds, 15 seconds, on our highest-end GPU six years ago. And then we invented NVIDIA RTX and combined three fundamental technologies: hardware-accelerated ray tracing, artificial intelligence processing on NVIDIA Tensor Core GPUs, and brand-new algorithms. Let's take a look at the difference in just five years. Roll it.

This is running on CUDA GPUs six years ago, rendering this beautiful image that would have otherwise taken a couple of hours on a CPU, so this was a giant breakthrough already, an enormous speed-up running on accelerated computing. And then we invented the RTX GPU. Run it, please. [Music]

Thank you. The holy grail of computer graphics, ray tracing, is now possible in real time. This is the technology we have put into RTX, and this, after five years, is a very important time for us, because for the very first time we took our third-generation Ada architecture RTX GPUs and brought them to the mainstream with two new products that are now completely in production. I got that backwards; everything looks different inside out and upside down. Okay, this is our brand-new one: right here you're looking at an Ada GPU running ray tracing and artificial intelligence at 60 frames a second. It's 14 inches, it weighs almost nothing, and it's more powerful than the highest-end PlayStation. And this is the RTX 4060 Ti for our core gamers. Both of these are now in production; our partners here in Taiwan are producing both of these products in very, very large volumes, and I'm really excited about them. Thank you very much. I can almost put this in my pocket.

AI made it possible for us to do that. Everything that you saw would have been utterly impossible without AI: for every single pixel we render, we use AI to predict. Developing and deploying software like that, software that can be deployed in devices all over the world, is incredible. We used AI to render this scene; we're also going to use AI to bring it alive.

Today we're announcing NVIDIA ACE, Avatar Cloud Engine, designed for animating, for bringing a digital avatar to life. It has several capabilities: speech recognition, text-to-speech, natural language understanding (basically a large language model), and, using the sound that you generate with your voice, it animates the face, and using the sound and the expression of what you're saying, it animates your gestures. All of this is completely trained by AI. We have a service that includes pre-trained models that developers can come and modify and enhance for your own application, for your own story, because every game has a different story. And then you can deploy it in the cloud or on your device. It has a great back end: it has TensorRT. TensorRT is NVIDIA's deep-learning optimizing compiler, and you can deploy it on NVIDIA GPUs, as well as output ONNX and industry-standard back ends so that you can run it on any device.
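As an illustration of the avatar pipeline just described (speech recognition, a large language model, text-to-speech, and audio-driven facial animation), here is a minimal Python sketch. Every function and class in it is a hypothetical stand-in for illustration only, not a real NVIDIA ACE API.

# Minimal, illustrative sketch of an avatar dialogue pipeline like the one
# described above: ASR -> LLM -> TTS -> audio-driven face animation.
# All helper functions are hypothetical stand-ins, not a real ACE API.

from dataclasses import dataclass

@dataclass
class AvatarResponse:
    text: str          # what the character says
    audio: bytes       # synthesized speech waveform
    face_frames: list  # per-frame facial animation parameters

def transcribe(player_audio: bytes) -> str:
    """Speech-recognition stage (stand-in for a real ASR model)."""
    return "Can I help?"

def generate_reply(player_text: str, backstory: str) -> str:
    """Large-language-model stage, conditioned on the character's backstory."""
    return f"(reply grounded in: {backstory[:40]}...)"

def synthesize_speech(text: str) -> bytes:
    """Text-to-speech stage (stand-in)."""
    return text.encode("utf-8")

def animate_face(audio: bytes) -> list:
    """Audio-to-face stage: derive facial animation from the speech audio."""
    return [{"frame": i, "jaw_open": 0.1 * (i % 5)} for i in range(10)]

def avatar_turn(player_audio: bytes, backstory: str) -> AvatarResponse:
    """One unscripted conversational turn with an AI-driven character."""
    text_in = transcribe(player_audio)
    text_out = generate_reply(text_in, backstory)
    audio_out = synthesize_speech(text_out)
    frames = animate_face(audio_out)
    return AvatarResponse(text_out, audio_out, frames)

if __name__ == "__main__":
    jin_backstory = "Jin runs a ramen shop caught in the crossfire of rising crime."
    print(avatar_turn(b"\x00\x01", jin_backstory).text)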
Let's take a look at this scene in just a second, but let me first tell you about it. It is completely rendered with ray tracing. Notice the beautiful lights, so many different lights, and all of the different lights are projecting light from their source, so you have all kinds of direct lights, you have global illumination, and you're going to see incredibly beautiful shadows and physics simulation. And notice the character, the beautiful rendering of the character. Everything is done in Unreal Engine 5. We partnered with an avatar framework and avatar tool maker called Convai, and together we developed the demo you're about to see. Okay, run please. Everything is real time.

"Hey Jin, how are you?"
"Unfortunately, not so good."
"How come?"
"I'm worried about the crime around here. It's gotten bad lately. My ramen shop got caught in the crossfire."
"Can I help?"
"If you want to do something about this, I have heard rumors that the powerful crime lord Kumon Aoki is causing all sorts of chaos in the city. He may be the root of this violence."
"I'll talk to him. Where can I find him?"
"I have heard he hangs out in the underground fight clubs on the city's east side. Try there."
"Okay, I'll go."
"Be careful."

None of that conversation was scripted. We gave that generative AI character a backstory, the story about his ramen shop and the story of this game, and all you have to do is go up and talk to this character. Because this character has been infused with artificial intelligence and large language models, it can interact with you, understand your meaning, and respond in a really reasonable way. All of the facial animation is completely done by the AI. We have made it possible for all kinds of characters to be generated; they all have their own domain knowledge, and you can customize it so everybody's game is different. And look how wonderfully beautiful and natural they are. This is the future of video games: not only will AI contribute to the rendering and the synthesis of the environment, AI will also animate the characters. AI will be a very big part of the future of video games.

This is accelerated computing used for large language models, basically the core of generative AI. This example is a ten-million-dollar server, and we costed everything: we costed the processors, we costed all the chips, we costed all the networking, we costed literally everything. Ten million dollars gets you nearly a thousand CPU servers, and to train, to process, this large language model takes 11 gigawatt-hours. 11 gigawatt-hours, okay? And this is what happens when you accelerate this workload with accelerated computing: for that same ten million dollars, you buy 48 GPU servers. It's the reason why people say that GPU servers are so expensive; remember, people say GPU servers are so expensive. However, the GPU server is no longer the computer; the computer is the data center. Your goal is to build the most cost-effective data center, not the most cost-effective server. Back in the old days, when the computer was the server, that would have been a reasonable thing to do, but today the computer is the data center, and so what you want to do is create the most effective data center with the best TCO. So for ten million dollars you buy 48 GPU servers; they consume only 3.2 gigawatt-hours and deliver 44 times the performance. Let me just show it to you one more time: this is before, and this is after.
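To make the before-and-after comparison above concrete, here is the arithmetic using only the figures quoted in the talk; the per-server prices are implied averages derived from those figures, not quoted values.

# Back-of-the-envelope comparison using the figures quoted in the keynote.
budget_usd = 10_000_000

cpu_servers, cpu_energy_gwh = 1_000, 11.0   # "nearly a thousand" CPU servers
gpu_servers, gpu_energy_gwh = 48, 3.2       # GPU servers for the same budget
gpu_speedup = 44                            # quoted relative performance

implied_cpu_server_cost = budget_usd / cpu_servers   # ~$10,000 each (implied)
implied_gpu_server_cost = budget_usd / gpu_servers   # ~$208,000 each (implied)

energy_ratio = cpu_energy_gwh / gpu_energy_gwh       # ~3.4x less energy
print(f"Implied CPU server cost:  ${implied_cpu_server_cost:,.0f}")
print(f"Implied GPU server cost:  ${implied_gpu_server_cost:,.0f}")
print(f"Energy reduction:         {energy_ratio:.1f}x")
print(f"Quoted throughput gain:   {gpu_speedup}x for the same budget")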
Now let's take a look at some of these; let's take a look at what it can do. Here's a prompt, and this prompt says "Hi Computex." We type in the words: "Hi Computex, I'm here to tell you how wonderful stinky tofu is. You can enjoy it right here in Taiwan. It's best from the night market. I was just there the other night." Okay, play it. ["Hi Computex, I'm here to tell you about how wonderful stinky tofu is. You can enjoy it right here in Taiwan. It's best from the night market."] The only input was words; the output was that video.

Okay, here's another prompt. We tell this AI, a Google text-to-music model: "Traditional Taiwanese music, peaceful, like it's warm and raining in a lush forest at daybreak." Please. [Music] We send text in, and the AI gives us this music.

Okay, here's this one: "I am here at Computex. I will make you like me best. Sing it with me. I really like NVIDIA." These are the words, and I say, hey, voice mod, could you write me a song? These are the words. Okay, play it. [Music] [Applause]

We will of course build these systems as products, and so if you would like to have an AI supercomputer, we would of course come and install it in your company. We also share the blueprints of this supercomputer with all of our cloud suppliers, our cloud partners, so that they can integrate it into their networks and into their infrastructure, and we will also build it inside our company for us to do research and development ourselves. So this is the DGX GH200. It is one giant GPU.

Okay, what would happen if we applied generative AI to that? We have now created a computer, I showed you Grace Hopper, that can be deployed broadly all over the world easily, and as a result every data center, every server, will have generative AI capability. What would happen if, instead of compression, streaming, and decompression, the cloud applied generative AI to it? Let's take a look. Wireless and video communications will be 3D, generated by AI. Let's take a look at how NVIDIA Maxine 3D, running on the NVIDIA Grace Hopper Superchip, can enable 3D video conferencing on any device without specialized software or hardware. Starting with a standard 2D camera sensor, the kind in most cell phones, laptops, and webcams, and tapping into the processing power of Grace Hopper, Maxine 3D converts these 2D videos to 3D using cloud services. This brings a new dimension to video conferencing, with Maxine 3D visualization creating an enhanced sense of depth and presence. You can dynamically adjust the camera to see every angle in motion, engage with others more directly with enhanced eye contact, and personalize your experience with animated avatars, stylizing them with simple text prompts. With Maxine's language capabilities, your avatar can speak in other languages, even ones you don't know. NVIDIA Maxine 3D, together with Grace Hopper, brings immersive 3D video conferencing to anyone with a mobile device, revolutionizing the way we connect and communicate.

The world's enterprises have yet to enjoy generative AI. So far we've been working with CSPs, and these CSPs of course are going to be able to bring generative AI to many different regions, many different applications, and industries.
The big journey is still ahead of us. There are so many enterprises in the world, and because of the multimodality capability that I was mentioning before, every industry can now benefit from generative AI. There are several things that we have to do. Number one, we have to help the industries build custom language models. You need a software system that understands the laws of physics. You've actually seen this already with ChatGPT, whereas NVIDIA AI would use NVIDIA Omniverse in a reinforcement learning loop to ground itself. You have seen ChatGPT do this with reinforcement learning from human feedback: using human feedback, ChatGPT was able to be developed by grounding it in human sensibility and aligning it with our principles. So reinforcement learning with human feedback is really important, and reinforcement learning with physics feedback is very important. Let me show you. Everything that you're about to see is a simulation. Let's roll it, please. [Music]

That was a simulation. Nothing was art; everything was simulation. Isn't that amazing? And notice, just now it was humans interacting with Omniverse. In the future, Sean will even have a generative AI interact with him in Omniverse. We could of course imagine that in the very beginning there was Jin: that could be a character, one of the users of Omniverse, interacting with you, answering questions, helping you.

We can also use generative AI to help us create virtual worlds. For example, this is a plastic bottle rendered in Omniverse. It could be placed in a whole bunch of different types of environments and rendered beautifully and physically, and you can place it just by giving it a prompt, by saying, "I would like to put these bottles in a lifestyle-photograph-style backdrop of a modern, warm farmhouse bathroom." Change the background, and everything is integrated and rendered again. So generative AI will come together with Omniverse to assist the creation of virtual worlds.

Today we're announcing that WPP, the world's largest advertising agency and advertising services company, is partnering with NVIDIA to build a content generation engine based on Omniverse and generative AI. It integrates tools from so many different partners, Adobe Firefly, for example, Getty, Shutterstock, and it integrates into this entire environment, making it possible for them to generate unique content for different users, for ad applications for example. So in the future, whenever you engage a particular ad, it could be generated just for you, and yet the product is precisely rendered, because of course product integrity is very important. Today, in the current computing model, when you engage information it is retrieved; in the future, when you engage information, much of it will be generated. Notice that the computing model has changed. WPP generates 25 percent of the ads that the world sees, and sixty percent of the world's largest companies are already clients. And so they made a video of how they would use this technology.
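As a minimal sketch of the shift described above, from content that is retrieved as-is to content that is generated per viewer while the product itself stays precisely specified, here is a toy contrast in Python; every name, function, and data value is hypothetical, purely for illustration.

# Illustrative contrast between the two computing models described above:
# content retrieved as-is versus content generated per user while the
# product description stays fixed. All names here are hypothetical.

PRODUCT = {"name": "Glass Bottle", "sku": "GB-001", "render": "bottle_master.usd"}

AD_LIBRARY = {
    "ad-42": "Glass Bottle: now available at your local store.",
}

def retrieve_ad(ad_id: str) -> str:
    """Retrieval model: the same pre-made asset is fetched for every viewer."""
    return AD_LIBRARY[ad_id]

def generate_ad(user_profile: dict, product: dict) -> str:
    """Generative model: copy and backdrop are composed per user, but the
    product (name, SKU, master render) stays precisely specified."""
    backdrop = f"a {user_profile['style']} scene"  # would come from a generative model
    return (f"{product['name']} ({product['sku']}), shown in {backdrop}, "
            f"rendered from {product['render']}")

print(retrieve_ad("ad-42"))
print(generate_ad({"style": "warm farmhouse bathroom"}, PRODUCT))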
The world's industries are racing to realize the benefits of AI. NVIDIA and WPP are building a groundbreaking, generative-AI-enabled content engine to enable the next evolution of the 700-billion-dollar digital advertising industry. Built on NVIDIA AI and Omniverse, this engine gives brands the ability to build and deploy highly personalized and compelling visual content faster and more efficiently than ever before. The process starts by building a physically accurate digital twin of a product using Omniverse Cloud, which connects product design data from industry-standard tools. Then WPP artists create customized and diverse virtual sets using a combination of digitized environments and generative AI tools from organizations such as Getty Images and Adobe, trained on fully licensed data using NVIDIA Picasso. This unique combination of technologies allows WPP to build accurate, photorealistic visual content and e-commerce experiences that bring new levels of realism and scale to the industry.

I'm going to show you now how we're going to use Omniverse and AI here in Taiwan, and we're going to use it for manufacturing. Manufacturing, as you know, is one of the largest industries in the world. We're going to use Omniverse to teach an AI, and then we're going to use Metropolis, our AI edge deployment system, to deploy the AI. Okay, run it.

The 45-trillion-dollar global manufacturing industry is comprised of 10 million factories operating 24/7. Enterprises are racing to become software-defined to ensure they can produce high-quality products as quickly and cost-efficiently as possible. Here is how electronics manufacturer Pegatron uses NVIDIA AI and Omniverse to digitalize their factories. In Omniverse they start by building a digital twin of their factory, unifying disparate 3D and CAD data sets to provide a real-time view of their complex factory data to their planners and suppliers. In the cloud-native digital twin, planners can then optimize the layout virtually before deploying changes to the real factory. The digital twin is also used as a training ground and data factory for Pegatron's perception AIs. They use NVIDIA Isaac Sim, built on Omniverse, to simulate and optimize their fleet of mobile robots, which help move materials throughout the facility, as well as the pick-and-place robotic arms that assist on production lines. In the fully operational factory, Pegatron deploys automated optical inspection, or AOI, points along their production lines, which reduces cost and increases line throughput. NVIDIA Metropolis enables Pegatron to quickly develop and deploy cloud-native, highly accurate AOI workflows across their production lines. Omniverse Replicator generates synthetic data sets of PCBA defects that are too complex and costly to capture in the real world, like scratches and missing or misaligned components. Pegatron then combines the synthetic data with NVIDIA pre-trained models, NVIDIA TAO for training, adaptation, and optimization, and NVIDIA DeepStream for real-time inference, resulting in AOI performance that is 99.8 percent accurate with a four-times improvement in throughput. With software-defined factories built on NVIDIA AI and Omniverse, manufacturers can accelerate factory bring-up, minimize change orders, continuously optimize operations, and maximize production-line throughput, all while reducing costs. [Music]
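The synthetic-data workflow above (Replicator-generated defect images, training and adaptation, then real-time inference) can be pictured with a toy example. The sketch below uses plain PyTorch on randomly generated stand-in images; it is not the TAO or DeepStream toolchain, only an illustration of training a defect classifier on synthetic data.

# Toy illustration of training an optical-inspection classifier on synthetic
# data. Stand-in for the workflow described above; it uses plain PyTorch and
# randomly generated images, not NVIDIA tools.
import torch
import torch.nn as nn

def make_synthetic_batch(batch_size: int = 32):
    """Generate fake 'board images': class 1 gets a bright scratch-like streak."""
    images = torch.rand(batch_size, 3, 64, 64)
    labels = torch.randint(0, 2, (batch_size,))
    for i in range(batch_size):
        if labels[i] == 1:                      # synthetic "defect"
            row = torch.randint(0, 64, (1,)).item()
            images[i, :, row, :] = 1.0          # bright horizontal streak
    return images, labels

# Small convolutional classifier: defect vs. no defect.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images, labels = make_synthetic_batch()
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Quick check on a fresh synthetic batch.
images, labels = make_synthetic_batch(256)
accuracy = (model(images).argmax(dim=1) == labels).float().mean().item()
print(f"accuracy on held-out synthetic data: {accuracy:.2%}")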
The stack starts with the chip, called Orin. It goes into a computer, and it goes into the NVIDIA Isaac Nova Orin, which is a reference system, a blueprint, for AMRs. This is the most advanced AMR in the world today, and that entire stack has been built. Let's take a look at it.

To improve productivity and increase worker safety, factories and warehouses are migrating away from manual forklifts and guided vehicles to full autonomy. NVIDIA Isaac AMR provides an integrated, end-to-end solution to deploy fully autonomous mobile robots. The core of the solution is Nova Orin, a sensor suite and computing hardware that enables mapping, autonomy, and simulation. Nova's collection of advanced sensors speeds the mapping process, leveraging our cloud-based service to generate an accurate and detailed 3D voxel map. This 3D map can then be sliced across a plane to generate 2D maps tailored for the different autonomous robots that might operate in a facility. With these maps in place, on-robot lidar or cost-effective cameras provide autonomous navigation that works reliably in the most complex and dynamic environments. Isaac Mission Control optimizes route planning using the cuOpt library to improve operations. Developers can use Isaac Sim and NVIDIA Omniverse to create realistic digital twins of the operating environment. This allows fully autonomous robots to be trained on complex tasks entirely in simulation, and all operations can be fully validated using Isaac Sim before deployment to the real world. Isaac AMR accelerates your migration to full autonomy, reducing costs and speeding deployment of the next generation of AMRs.

We put NVIDIA into the cloud so that every enterprise in the world can engage us to create generative AI models and deploy them in a secure, enterprise-grade way, in every single cloud. And lastly, we would like to extend AI to the world's heavy industries, the largest industries in the world. So far, our industry, the industry that all of us are part of, has been a small part of the world's total industry. For the very first time, the work that we're doing can engage every single industry, and we do that by automating factories and automating robots. Today we even announced our first full robotics reference stack, the Nova Orin. I want to thank all of you for your partnership over the years. Thank you. [Applause]
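The mapping step described above slices a 3D voxel map across a plane to produce 2D maps for different robots. A minimal, generic illustration of that slicing operation (NumPy only, no Isaac APIs) might look like this.

# Minimal illustration of slicing a 3D voxel occupancy map into a 2D map.
# Generic NumPy sketch, not an Isaac AMR API.
import numpy as np

def slice_voxel_map(voxels: np.ndarray, z_min: int, z_max: int) -> np.ndarray:
    """Collapse occupied voxels between two height indices into a 2D grid.

    voxels: boolean array of shape (X, Y, Z), True where space is occupied.
    Returns a boolean (X, Y) occupancy grid for robots operating in that band.
    """
    return voxels[:, :, z_min:z_max].any(axis=2)

# Toy 3D map: a low obstacle (pallet) and a tall obstacle (shelf).
voxels = np.zeros((20, 20, 10), dtype=bool)
voxels[5:8, 5:8, 0:2] = True     # low pallet, occupies z = 0..1
voxels[12:14, 3:15, 0:9] = True  # tall shelf, occupies z = 0..8

ground_robot_map = slice_voxel_map(voxels, 0, 3)   # sees pallet and shelf
forklift_mast_map = slice_voxel_map(voxels, 4, 9)  # sees only the tall shelf
print(ground_robot_map.sum(), forklift_mast_map.sum())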
Info
Channel: SlantTrends
Views: 12,291
Keywords: Nvidia, AI breakthrough, Machine learning, Artificial intelligence, Innovation, Technology, Future of AI, Nvidia hardware, AI applications, Deep learning, Neural networks, Data processing, GPU technology, Computer vision, Robotics, SlantTrends, nvidia's new ai, nvidia's new ai revealed, jensen huang
Id: 69h9qzluZ2w
Length: 28min 53sec (1733 seconds)
Published: Sat Jun 03 2023