I'm Investing in Google's HUGE Robotics Breakthrough

Video Statistics and Information

Captions
Innovations in artificial intelligence are happening at an almost alarming rate. AI is making its way into every part of our digital lives, from ChatGPT and other text-based AI copilots to content creation tools like Adobe, Midjourney, Runway ML, and countless others. But the one place I didn't expect AI to reach so fast was the physical world. Well, that's all about to change, because the same technology that lets ChatGPT learn more and more is now also letting physical robots learn more and more as well. So in this episode I'll show you a big breakthrough in the way that robots learn, just how close we might be to artificial general intelligence, and which companies could be the biggest winners as a result. Your time is valuable, so let's get right into it.

Elon Musk has often said that Tesla's self-driving AI might play a role in artificial general intelligence, or AGI. In fact, just this past week, Elon said that Tesla may have figured out some aspects of AGI already, saying, quote, "the car has a mind, not an enormous mind, but a mind nonetheless," end quote. I didn't really agree with this at first, because there's a big difference between being able to navigate through an environment and being able to actually understand it. But then I rewatched Tesla's last two AI Days, and I'm starting to understand the bigger picture here. Let me show you.

Labeling is the step where the AI tries to figure out what every object it can see is. Is it a road that the car can drive on? Label it purple. Is it a sidewalk or some other flat surface that the car probably shouldn't drive on? Label it red. Is it another car? Label it blue. It's important to get these labels right, because a computer can't really decide what to do if it doesn't know what it's looking at. The thing is, the labeling step actually isn't too different from how people learn to recognize different objects in their environments, and this process is useful for much more than just navigation.

Another example is how Tesla's AI divides its surroundings into tiny cubes and then decides which ones to pay attention to and which ones it can ignore. The full self-driving software assigns each cube a color based on the potential movement of whatever is in the cube. So the tan cubes might contain buildings or mailboxes or lamp posts, but they're all tan because those volumes are occupied by things that will not move. Red volumes are occupied by things that could move, like a pedestrian or a parked bus, and once the bus in the scene starts to move, it turns blue, which indicates that the Tesla FSD software is predicting that it will continue to move. Breaking the world up into cubes like this lets Tesla spend more compute on the volumes that matter, like the ones colored red or blue, while not wasting resources trying to predict the motion of things that won't move at all, which are the things in the tan cubes. Then Tesla's AI predicts the paths of everything in the red and blue volumes and adjusts its own route in real time accordingly.

So, for example, if somebody is running a red light, which happens all the time here in Florida, Tesla's AI recognizes that they could cross through our own lane. As soon as that car's nose starts to turn into our lane, the Tesla applies some brakes and then avoids the car. Or what about when somebody is parked in the left lane of a busy road, another Florida classic? Because Tesla's AI understands that the car on the left is parked and not just momentarily stopped, it knows to switch lanes instead of staying behind the parked car forever, even though there's a person slowing down in the right lane as well.
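To make the colored-cube idea above a little more concrete, here's a minimal Python sketch of the concept: split the space around the car into voxels, tag each one as static, movable, or moving, and only run motion prediction on the ones that aren't static. This is not Tesla's code; every name, class label, and number in it is an assumption made up purely for illustration.

# Toy illustration of the "colored cubes" idea: classify voxels and only spend
# prediction effort on the ones whose contents can actually move.
import numpy as np

STATIC, MOVABLE, MOVING = 0, 1, 2  # roughly the "tan", "red", "blue" cubes

rng = np.random.default_rng(0)

# A 20 x 20 x 4 voxel grid around the car, each cell holding a class label.
occupancy = rng.choice([STATIC, MOVABLE, MOVING], size=(20, 20, 4), p=[0.9, 0.07, 0.03])

def voxels_needing_prediction(grid: np.ndarray) -> np.ndarray:
    """Return the (x, y, z) indices of voxels whose contents might move."""
    return np.argwhere(grid != STATIC)

def predict_future_position(index: np.ndarray, velocity: np.ndarray, dt: float) -> np.ndarray:
    """Dead-simple constant-velocity forecast for one occupied voxel."""
    return index + velocity * dt

dynamic = voxels_needing_prediction(occupancy)
print(f"{len(dynamic)} of {occupancy.size} voxels need motion prediction")

# Example: forecast one dynamic voxel half a second ahead at 2 cells/s in +x.
if len(dynamic):
    print(predict_future_position(dynamic[0].astype(float), np.array([2.0, 0.0, 0.0]), dt=0.5))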
Deciding what we need to pay attention to and what we can ignore, predicting the motion of others, and adjusting our own motion accordingly are more examples of the kind of thinking that we do as humans. These kinds of perception and navigation tasks can be generalized to other form factors, like Tesla's Optimus bot, which we were also shown during the last AI Day. For example, Tesla showed the bot labeling different surfaces based on whether or not it could navigate across them. But what isn't clear is how these same full self-driving tasks would generalize to other kinds of tasks altogether, like manipulating physical objects, gathering and using information from the internet, and so on. These are completely separate sets of skills from navigation, so being good at one doesn't mean you can even attempt the others.

And that's where Google's most recent breakthrough in robotic learning comes in. A couple of weeks ago, Google DeepMind showed off RT-2, which stands for Robotics Transformer 2. The Transformer is also the kind of neural network powering ChatGPT; GPT stands for Generative Pre-trained Transformer. The special thing about RT-2 is that it allows robots to understand and execute commands that they've never been given before by doing some actual basic reasoning, just like ChatGPT does when you give it a prompt, except for a robot this might mean choosing the best tool for a specific task or making judgments about specific objects. In my opinion, what we're seeing here is another step closer to artificial general intelligence. In DeepMind's recent blog post on RT-2, they said that, quote, "RT-2 shows improved generalization capabilities and semantic and visual understanding beyond the robotic data it was exposed to. We also show that incorporating chain-of-thought reasoning into RT-2 allows it to perform multi-stage semantic reasoning, like deciding which object could be used as an improvised hammer, or which type of drink is best for a tired person." You'd expect these kinds of crazy innovations to come from startups, but they're coming from some of the biggest companies in the world: Tesla, Google, Nvidia, and so on.

And speaking of Tesla, Nvidia, and Google, moomoo is a trading app built in Silicon Valley to help investors execute winning strategies. For example, check out how easy it is to track top Wall Street analysts for every stock. If I go to Nvidia's page and click on analyst ratings, I can see every institution and individual analyst covering the stock. Here's a hot tip for you: Mark Lipacis has a crazy high success rate on his price targets for Nvidia stock, and his average return is nearing one hundred percent. So now, with just the push of a button, I can follow Mark and get notifications whenever he issues new price targets, and I can do this for every single stock I follow. But that's not even the best part. Right now, moomoo is giving away up to 16 free stocks, each valued at up to two thousand dollars, plus a fifty dollar cash reward, and if you're from Canada you could win an Amazon gift card worth up to a hundred dollars. All you need to do is download the app using my link, make a deposit, and keep your funds at that level for at least 60 days to enjoy up to 16 free stocks. This offer ends soon, so make sure to get started today.

Alright, just like Tesla's self-driving AI can come up with and execute a strategy to navigate from point A to point B, RT-2 enables robots to do the same, but for hundreds of other tasks, with up to a 97% success rate. That's almost on par with most humans.
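One of the capabilities DeepMind calls out above is chain-of-thought reasoning, where the model reasons through a request in words before committing to an action, like picking an improvised hammer. Here's a rough, self-contained Python sketch of what that kind of prompting could look like; the prompt layout, the hard-coded model response, and the parsing are all illustrative assumptions, not DeepMind's actual format.

# Sketch of chain-of-thought prompting for a robot task: ask the model to reason
# out loud, then extract only the final action line for execution.
SCENE_OBJECTS = ["rock", "sponge", "banana", "empty cup"]
INSTRUCTION = "I need to hammer a nail. What object from the scene might be useful?"

def build_cot_prompt(objects: list[str], instruction: str) -> str:
    # The "Plan:" line invites multi-stage reasoning before the action is chosen.
    return (
        f"Objects visible: {', '.join(objects)}.\n"
        f"Instruction: {instruction}\n"
        "Plan: describe your reasoning, then output exactly one line starting with "
        "'Action:' that names the object to pick up."
    )

def parse_action(model_output: str) -> str:
    # Keep only the final action line; the reasoning itself never reaches the motors.
    for line in model_output.splitlines():
        if line.startswith("Action:"):
            return line.removeprefix("Action:").strip()
    raise ValueError("model produced no action line")

prompt = build_cot_prompt(SCENE_OBJECTS, INSTRUCTION)
# In a real system the prompt would go to a vision-language model; here the
# response is faked so the example runs on its own.
fake_response = (
    "Plan: A hammer needs to be hard and heavy. The sponge and banana are soft "
    "and the cup is fragile, so the rock is the best improvised hammer.\n"
    "Action: pick up the rock"
)
print(parse_action(fake_response))  # -> "pick up the rock"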
Here's an example. Let's say we want to ask a robot to throw away a piece of trash. You and I know what that task entails, but for most AI models you'd have to specifically teach the robot what counts as trash, then train it to pick up the trash, then go through the process of navigating to the trash from its current location, and then actually throwing the piece of trash away. RT-2, on the other hand, can use data from the web to learn what trash typically means and then identify it, without being specifically trained on hundreds of thousands of different examples. It even has an idea of what it means to throw trash away, even though it wasn't explicitly trained on those words or what they mean in this context.

Just like we call ChatGPT a large language model, or LLM, there are also vision-language models, or VLMs. VLMs are trained on web data, which makes them good at recognizing visual and language patterns that can show up in stories, images, and videos. RT-2 takes that one step further: RT-2 is a vision-language-action model, or VLA. It combines knowledge from the web and robotics data to create generalized instructions for robotic controls based on natural language prompts. So, for example, it can take the same kinds of inputs as ChatGPT, but instead of outputting text for humans to read, it outputs instructions for a specific robot to follow. And again, the magic here is that those natural language prompts don't need to be exact. You can ask the robot to put a strawberry in the correct bowl, and the robot will figure out what the word "correct" means in this context. In this case, it's the bowl that already has strawberries in it, but we didn't have to tell that to the robot; the robot reasoned it out for itself. And just like Tesla's full self-driving software can drive on roads it's never been on, with cars and backgrounds it's never seen, RT-2 can complete a wide variety of tasks with objects the model has never seen before, with backgrounds it's never seen before, in environments it's never seen before. And that's starting to feel a lot like artificial general intelligence to me.
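The core trick that makes a vision-language-action model like RT-2 possible is representing robot actions as plain text tokens, so the same network that can output words can also output motions. Here's a toy Python de-tokenizer for that idea; the token layout and scaling below are invented for illustration and are not RT-2's real action space.

# Toy "action tokens to robot command" decoder for the VLA idea described above.
import numpy as np

def decode_action_tokens(token_string: str) -> dict:
    """Turn a string like '0 140 128 96 128 128 128' into a continuous command.

    Assumed layout: [terminate_flag, dx, dy, dz, droll, dpitch, dyaw], with each
    of the last six values discretized into 256 bins over the range [-1, 1].
    """
    tokens = np.array([int(t) for t in token_string.split()])
    terminate = bool(tokens[0])
    deltas = tokens[1:] / 255.0 * 2.0 - 1.0  # map bin index back to [-1, 1]
    return {
        "terminate_episode": terminate,
        "translation": deltas[:3],  # normalized end-effector translation
        "rotation": deltas[3:],     # normalized end-effector rotation
    }

# The VLA model would emit this token string in response to an image plus the
# prompt "put the strawberry into the correct bowl"; here it's just hard-coded.
print(decode_action_tokens("0 140 128 96 128 128 128"))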
Just imagine a robot with the mobility of Atlas from Boston Dynamics, the navigation capabilities of Tesla's full self-driving software, and the reasoning capabilities of something like ChatGPT. I honestly don't think we're too far away from that. Let me know down in the comments whether you think that's scary, exciting, or both.

Speaking of which, bringing all these ideas together is exactly what OpenAI is trying to do with a robotics company called 1X Technologies and their NEO bot. Earlier this year, the OpenAI Startup Fund led a $23.5 million Series A funding round for 1X, which they'll use to build a bipedal android called NEO and a wheeled android called EVE. These robots appear to be direct competitors to the Tesla Bot, but they are general-purpose androids that will learn through perception, just like we do, instead of explicit training. Now, I did a little digging and found out that 1X Technologies used to be called Halodi Robotics, and Halodi Robotics was a member of the Nvidia Inception program, which is Nvidia's startup accelerator. One of the benefits of Halodi Robotics being in this program is that they got access to Nvidia's robotics platforms, like Omniverse, Jetson, Isaac AMR, and Isaac Sim. These are the same platforms that some of the biggest companies on Earth use for their fleets of robots. Isaac AMR is Nvidia's hardware, sensor, and software platform for autonomous mobile robots. It comes with a wide variety of cloud-to-edge services for mapping, autonomy, and even simulation. For example, the sensor data from the platform can be used to create 3D maps of the robot's environment, which can then be sliced at different heights depending on the height of the robot that needs the map. And Isaac Sim is the part of Omniverse that focuses on photorealistic and physically accurate virtual environments for training AI models before embedding them into a real robot. Amazon Robotics has the biggest fleet of mobile and industrial robots in the world, and they use Isaac Sim to train their robots in simulation before deploying them in real fulfillment centers. So the EVE and NEO bots from 1X Technologies are not only built on Nvidia's robotics platforms, but they probably also have access to the relevant parts of OpenAI's massive Transformer-based models as well. Don't forget, GPT-4 is a multimodal model that can understand images and text data, just like the robot models we've been discussing in this episode. And just in case you think this is all still very far away, 1X is taking pre-orders for the NEO bot later this year.

That leads me to my last and most important point. I think we're going to start seeing these cutting-edge frontier AI models improve each other faster and faster. When there's a big improvement to GPT-4, say in how it interprets images, that research can also help improve RT-2. And when AI researchers find better prompt structures or better ways to handle chain-of-thought reasoning, that will improve these robot models, not just ChatGPT. And that's already on top of the massive leaps in performance that we're seeing in hardware every year, both in terms of server GPU performance and accelerators at the edge, which means bigger AI models or faster training of same-sized models in the near future. The thing that surprises me the most is seeing all these massive innovations in hardware, software, and AI models come out of huge companies like Google and Nvidia, and I'm sure we'll see even more amazing things at Tesla's next AI Day as well. Even though ChatGPT and generative AI copilots are taking the online world by storm today, I think we'll see a big rise in physical robots sooner than most people expect, not just for heavy industry applications but in areas like healthcare, hospitality, retail, food services, and so on. I don't think any part of this upcoming robot revolution is priced into these companies at all yet, but to me, this is a future worth investing in.

If you feel I've earned it, consider hitting the like button and subscribing to the channel; that lets me know to put out more research like this. And if you want even more science behind the stocks, I'm putting together a newsletter with everything that I'm researching and the stocks that I'm following as a result. I'll leave a link to that in the description below if you're interested. It'll be completely free, and I'll never spam you; I'm way too lazy for that. Either way, thanks for watching, and until next time, this is Ticker Symbol: YOU. My name is Alex, reminding you that the best investment you can make is in you.
Info
Channel: Ticker Symbol: YOU
Views: 84,927
Keywords: nvidia, nvda, nvidia stock, nvda stock, nvidia gtc 2023, jensen huang, gtc keynote, nvidia keynote, openai, chatgpt, gpt4, msft, microsoft stock, msft stock, goog, googl, goog stock, google stock, artificial intelligence stocks, semiconductor stocks, gpt-4, stable diffusion, nvidia news, nvidia 2023, ai copilot, gpt5, nvda stock news, tsla, tesla stock, tsla stock, runway ml, runway ai, google deepmind, google rt-2, robotics stocks, robot stocks, ai stocks, moomoo, moomoo trading
Id: s-pInYFNytY
Length: 13min 34sec (814 seconds)
Published: Sun Aug 13 2023