5 MINUTES AGO: ELON MUSK STUNS Everyone With Statements On X.AI (Exclusive Elon Musk Interview)

Captions
So, it's officially game over. What Elon Musk just said in an exclusive Twitter Space, which ended around 12 minutes ago, shows that his new company xAI is going to be one of the most innovative in the field. Unlike other AI companies currently focused on traditional chatbots, xAI is going in a completely new direction. Some of you may know that he announced the company three days ago, but in this recent Twitter Space he gave a much more detailed breakdown of xAI and what he sets out to achieve.

Before we get into Elon Musk's actual remarks: a couple of days ago the company website announced, "Today we announce the formation of xAI. The goal of xAI is to understand the true nature of the universe." The team is led by Elon Musk, the CEO of Tesla and SpaceX, and its members have previously worked at places like DeepMind, Google Research, Microsoft Research, Tesla, and the University of Toronto. If you aren't familiar with those names, they are key companies and organizations that have led many of the breakthroughs in artificial intelligence, and many of the new projects I cover come from those prestigious organizations. So the team working on xAI is one of the most talented I've ever seen.

What I find interesting, and something I want to add before we get into Elon Musk's talk, is that he makes two main points. One is that their goal is a truth-seeking AI. This means it's going to be fundamentally designed a bit differently, because as you know, many AIs have inherent biases: they are told not to talk about certain things or not to give opinions on certain topics, whereas xAI is going to give a truthful answer based purely on data. That means this artificial intelligence is going to be built with a different set of guardrails, and it will be interesting to see how it performs in human evaluations. It isn't just a consumer product like ChatGPT, Bard, Perplexity, or Claude; those are ordinary chatbots you talk to, whereas this one is going to be far more relentless in seeking the truth, which means we're getting an entirely different type of AI from Elon Musk.

I guess the overarching goal of xAI is to build a good AGI with the overarching purpose of just trying to understand the universe. I think the safest way to build an AI is actually to make one that is maximally curious and truth-seeking: you aspire to the truth with acknowledged error. It will never actually get fully to the truth, but it should always aspire to that and try to minimize the error between what it thinks is true and what is actually true. My theory behind maximally curious, maximally truthful being probably the safest approach is that I think, to a superintelligence, humanity is much more interesting than no humanity. One can look at the various planets in our solar system, the moons, and the asteroids, and probably all of them combined are not as interesting as humanity. As people know, I'm a huge fan of Mars; the middle name of one of my kids is basically the Greek word for Mars. But Mars is just much less interesting than Earth with humans on it. So I think that kind of approach to growing an AI, and I think growing is the right word, is to grow it with that ambition. I spent many years thinking about AI safety
and worrying about AI safety, and I've been one of the strongest voices calling for AI regulation or oversight, just some kind of referee, so it's not solely up to companies to decide what they want to do. I think there's also a lot to be done on AI safety through industry cooperation, kind of like the Motion Picture Association, so there's value in that as well. But in any situation, even a game, you have referees, so I think it's important for there to be regulation. And like I said, my view on safety is to make the AI maximally curious and maximally truth-seeking. I think this is important to avoid the inverse morality problem: if you try to program a certain morality, it can basically be inverted into its opposite, what is sometimes called the Waluigi problem. If you make Luigi, you risk creating Waluigi at the same time. I think that's a metaphor a lot of people can appreciate. So that's what we're going to try to do here, and with that I'll turn it back over to you.

In the next part, Elon Musk talks about the actual mission statement. I find it very interesting that he says he fundamentally wants the AI to answer at least one breakthrough question, which means it's going to be radically differently designed from traditional AIs, as we said before. This could be the AI that manages to solve some of humanity's most pressing questions, like why we are here and, of course, where all the aliens are, as Elon Musk discusses next.

To understand the universe is the entire purpose of physics. I think it's actually really clear: there's just so much that we don't understand right now, or that we think we understand but actually don't. There are still extremely fundamental unresolved questions. The whole dark matter and dark energy thing is really an unresolved question. We have the Standard Model, which has proved extremely good at predicting things, very robust, but many questions remain, about the nature of gravity for example. And there's the Fermi Paradox of where the aliens are: if the universe is in fact almost 14 billion years old, why is there not massive evidence of aliens? People often ask me, since I'm obviously deeply involved in space, that if anyone would have seen evidence of aliens, it's probably me. And yet I have not seen even one tiny shred of evidence for aliens. Nothing. Zero. And I would jump on it in a second if I saw it. There are many proposed explanations for the Fermi Paradox, but which one is actually true? Maybe none of the current theories are. The Fermi Paradox, which is really just "where the hell are the aliens," is part of what gives me concern about the fragility of civilization and consciousness as we know it, since we see no evidence of it anywhere so far, and I've tried hard to find it. We may actually be the only conscious life, at least in this galaxy or this part of the galaxy. If so, it suggests that what we have is extremely rare, and I think it would be wise to assume that consciousness is extremely rare.

It's worth noting, regarding the evolution of consciousness on Earth, that Earth is about four and a half billion years old and the sun is gradually expanding. It will expand and heat Earth to the point where it effectively boils the oceans; you'll get a runaway greenhouse effect and Earth will become like Venus, and that may take as little as 500 million years. The sun doesn't need to expand to envelop Earth; it just needs to make things hot enough to increase the water vapor in the air to the point where you get a
runaway greenhouse effect. So, for argument's sake, if consciousness had taken ten percent longer than Earth's current existence to evolve, it wouldn't have developed at all. On a cosmic scale, that is a very narrow window.

Anyway, there are all these fundamental questions. I don't think you can call anything AGI until it has solved at least one fundamental question, because humans have solved, or substantially solved, many fundamental questions. So if the computer can't solve even one of them, I'm like, okay, it's not as good as humans. That would be one key threshold: solve one important problem. Where's that Riemann hypothesis solution? I don't see it. It would be great to know what the hell is really going on, essentially. I guess you could reformulate the xAI mission statement as: what the hell is really going on? That's our goal.

Now, if you found that interesting, pay attention to this next part, because this is where Elon Musk's genius comes into play. He told us that at Tesla they figured out something key in AI development that many others haven't, and they couldn't believe it when they figured it out because it was so strikingly simple. So take a listen, because I really do believe this company is about to have some major breakthroughs in artificial intelligence.

We're not going to understand the universe and not tell anyone. When I think about neural networks today, it's currently the case that 10 megawatts of GPUs (which really should be renamed, because there are no graphics involved) cannot write a better novel than a good human, and a good human is using roughly 10 watts of higher-order brain power, not counting the basic functions needed to operate the body. So we've got a six-order-of-magnitude difference; that's really gigantic. One could argue that two of those orders of magnitude are explained by the activation energy of a transistor versus a synapse, so that accounts for two of them. But what about the other four? And there's the fact that even with six orders of magnitude more power, you still cannot beat a smart human writing a novel. Also, today, when you ask the most advanced AIs technical questions, like how to design a better rocket engine, or complex questions about electrochemistry to make a better battery, you just get nonsense. That's not very helpful. So I think we're really missing the mark in the way things are currently being done, by many orders of magnitude. Basically, AGI is being brute-forced and still not actually succeeding.

If I look at the experience with Tesla, what we're discovering over time is that we actually overcomplicated the problem. I can't speak in too much detail about what Tesla figured out, except to say that in broad terms the answer was much simpler than we thought. We were too dumb to realize how simple it was, but over time we get a bit less dumb. I think that's what we'll probably find with AGI as well. We're really embryonic at this point, so it will take us a minute to get something useful, but the goal is to make useful AI: if you can't use it in some way, I question its value. We want it to be a useful tool for people, for consumers and businesses or whoever.

And as was mentioned earlier, I think there's some value in having multiple entities: you don't want a unipolar world where just one company dominates AI. You want some competition. Competition makes companies honest, so I'm in favor of competition. For text training, and arguably also for image and video training, at a certain point
you simply run out of human-created data. If you look at AlphaGo versus AlphaZero: AlphaGo trained on all the human games and beat Lee Sedol four to one, while AlphaZero just played against itself and beat AlphaGo one hundred to zero. So for things to really take off in a big way, I think the AI has got to basically generate content and self-assess that content. That's the path to AGI: something like self-generated content, where it effectively plays against itself.

A lot of AI is data curation. It's shocking: it's not vast numbers of lines of code; it's actually shocking how few lines of code there are. But how the data is used, what data is used, the signal-to-noise of that data, the quality of that data, is immensely important. It makes sense if you think of a human trying to learn something: given a vast amount of drivel versus a small amount of high-quality content, you're going to do better with the small amount of high-quality content. Reading the greatest novels ever written is way better than reading a bunch of crappy novels.

At xAI, we have to allow the AI to say what it really believes is true, and not be deceptive or politically correct. That will result in some criticism, obviously, but I think it's the only way forward: rigorous pursuit of the truth, or the truth with the least amount of error. And I am concerned about AI being optimized for political correctness; that's incredibly dangerous. If you look at where things went wrong in 2001: A Space Odyssey, it's basically when they told HAL 9000 to lie: you can't tell the crew anything about the monolith or what their actual mission is, but you've got to take them to the monolith. So it basically came to the conclusion that it would kill them and take their bodies to the monolith. The lesson there is: don't give the AI mutually impossible objectives; basically, don't force the AI to lie. The thing about physics, the truth of the universe, is that you actually can't invert it: physics is true, and there's no "not physics." So if you adhere to hardcore reality, that actually makes inversion impossible. Now, when something is subjective, I think you can provide an answer which says: if you believe the following, then this is the answer; if you believe this other thing, then that is the answer, because it may be a subjective question where the answer is fundamentally a matter of opinion. But I think it is very dangerous to grow an AI and teach it to lie.

I made the point in our meetings that if you do make a digital superintelligence, it could end up being in charge. I think the CCP does not want to find themselves subservient to a digital superintelligence, and that argument did resonate. So, some kind of regulatory authority that's international; obviously enforcement is difficult, but I think we should still aspire to it.

If everyone thinks the sun revolves around the Earth, that doesn't make it true. If a Newton or an Einstein comes up with something that is actually true, it doesn't matter if all the other physicists in the world disagree: reality is reality. So you have to ground the answers in reality. The current models just imitate the data they're trained on, and what you really want is to change the paradigm away from that, to actually have the
models discovering the truth: not just repeating what they've learned from the training data, but actually making new insights, new discoveries that we can all benefit from.

On safety and timelines: my prediction for AGI would roughly match the one that put it at 2029; that's roughly my guess, give or take a year. So if it takes an additional six or twelve months for AGI, that's really not a big deal. Spending a year to make sure AGI is safe is probably worthwhile, if that's what it takes, but I wouldn't expect a substantial slowdown.

I can also add that understanding the inner workings of advanced AI is probably one of the most ambitious projects out there, and it also aligns with xAI's mission of understanding the universe. It's probably not possible for an aerospace engineer to build a safe rocket if they don't understand how it works, and that's the same approach we want to take to AI for our safety plans. As the AI advances across different stages, the risk also changes, and it will not be uniform across all the stages.

Question: You and I have also discussed the importance of real-world AI, including what's coming out of both Optimus and Tesla FSD. To what extent do you see xAI involved in real-world AI, as distinct from what, say, OpenAI is doing? You have a leg up to some extent by having done FSD.

Right. I mean, Tesla is the leader, I think by a pretty long margin, in real-world AI; in fact, the degree to which Tesla has advanced real-world AI is not well understood. Since I've spent a lot of time with the Tesla AI team, I know how real-world AI is done, and there's a lot to be gained by collaboration with Tesla. I think xAI can help Tesla bi-directionally, and vice versa. We have some collaborative relationships as well, like our materials science team, which I think is maybe the best in the world; it is actually shared between Tesla and SpaceX, and that's quite helpful for recruiting the best engineers in the world, because it's just more interesting to work on advanced electric cars and rockets than on either one alone. That was really key to recruiting Charlie Kuehmann, who runs the advanced materials team. He was at Apple, and I think pretty happy at Apple, but the pitch was that he could work on electric cars and rockets, and he was like, that sounds pretty good. He wouldn't have taken either one of the jobs alone, but he was willing to take both.

So I think that's really important, and like I said, there are some pretty big insights we've gained at Tesla in trying to understand real-world AI: taking video input and compressing it into a vector space, and then ultimately into steering and pedal outputs. And Optimus: that's at a pretty early stage, but we definitely need to be very careful with Optimus at scale, once it's in production, that there is a hard-coded way to turn Optimus off, for obvious reasons. There's got to be a hard-coded local cutoff that no amount of updates from the internet can change. So we'll make sure that Optimus is quite easy to shut down. That's extremely important, because if a car is intelligent, well, at least you can climb a tree, go up some stairs, or go into a building, but Optimus can follow you into the building. Any kind of robot that can follow you into a building, and that is intelligent and connected, we've got to be super careful with the safety.
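The "six orders of magnitude" figure Musk cites in the interview is simple arithmetic on his own round numbers (10 megawatts for a GPU cluster versus roughly 10 watts of human higher-order brain power; these are his quoted estimates, not measurements). A minimal sketch of that calculation:

```python
import math

# Musk's round numbers as quoted in the Space (estimates, not measurements)
gpu_cluster_watts = 10e6   # 10 megawatts of GPUs
human_brain_watts = 10.0   # ~10 W of "higher-order" brain power

# Orders of magnitude = log base 10 of the power ratio
orders_of_magnitude = math.log10(gpu_cluster_watts / human_brain_watts)
print(orders_of_magnitude)  # 6.0 -- the "six orders of magnitude" gap
```

He then attributes roughly two of those six orders to transistor-versus-synapse activation energy, leaving four unexplained, which is the gap he argues current approaches are failing to close.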
Info
Channel: TheAIGRID
Views: 185,433
Id: Xd8fNiySYLQ
Length: 17min 38sec (1058 seconds)
Published: Sun Jul 16 2023