What Is Q*? The Leaked AGI BREAKTHROUGH That Almost Killed OpenAI

Video Statistics and Information

Captions
Q* is the AI breakthrough that almost killed OpenAI. It was frightening enough for leading OpenAI researchers to write a letter of concern to the board, which likely precipitated the firing of Sam Altman. But here's the thing: only a handful of people within OpenAI know exactly what Q* is. Since bits and pieces of information about Q* have leaked online, an army of internet sleuths — AI researchers, practitioners, and hobbyists like myself — have been trying to figure out what exactly Q* is. I've been scouring the internet for every bit of information I can find about it, and that's what we're going to talk about today.

First, let's look at what led up to the leak of Q*. Check out these clips from a few weeks before Sam Altman was fired, where he discusses being in the room when a major AI breakthrough occurred: "On a personal note, like four times now in the history of OpenAI — the most recent time was just in the last couple of weeks — I've gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime." And: "I think there's a real moment of fear, which is: is this a tool we have built or a creature we have built?"

Those are pretty interesting clips from just a couple of weeks earlier. In one, he talks about being in the room when a massive discovery happened that gives us a giant leap forward in artificial intelligence; in the other, he asks whether one of these creations is a tool or a creature. Shortly after these talks, Sam Altman was unceremoniously fired from OpenAI, before a crazy weekend of back-and-forth infighting between Altman, the board, and the employee base. To this day we still don't know exactly why he was fired, but it does seem to be related to the discovery of Q* and his desire to commercialize it versus the board's desire to slow down and figure out the safety questions first. I'm not going to go too deep into the news around the firing because I've already created multiple videos about it — I'll link them all in the description below. But one thing we do know is that the OpenAI board and at least some of its top researchers were so scared of this discovery that they were willing to shut down the company, because they believed that was still in line with their mission to create AGI safely. They were willing to literally squander billions of dollars of value because they thought that was a better outcome than releasing this technology. If that doesn't scare you just a little bit, I don't know what will.

Then suddenly we started hearing rumblings about Q*, which seemed to confirm the rumor that this is the AI breakthrough that might be the prelude to AGI. Let's take a look at this Reuters article — it's a really important one: "OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say." A few excerpts: ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters. The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences.
Now, Reuters was unable to see a copy of the letter, so this is pretty much secondhand information at this point. The article continues: after being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, an OpenAI spokesperson said. The message, sent by longtime executive Mira Murati — the CTO, who was tapped to be interim CEO when Sam Altman was fired — alerted staff to certain media stories without commenting on their accuracy. Some at OpenAI believe Q* could be a breakthrough in the startup's search for what's known as artificial general intelligence. Then the article starts to hint at what the actual breakthrough could be — and I've done a ton of research, so I'll talk about this in more depth in a bit: given vast computing resources, the new model was able to solve certain mathematical problems. That might not sound like a big deal, but when I explain later why it's so important, you're going to understand how big this could truly be. The article goes on: currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. That is a really important piece of information; remember it for later in the video. And here again is the quote from the clip we just watched: "Four times now in the history of OpenAI — the most recent time was just in the last couple of weeks — I've gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime."

Of course, OpenAI is denying a lot of this. Alex Heath, an editor at The Verge, wrote: "Am hearing from multiple sources, including an OpenAI spokesperson, that the Reuters report — that Sam Altman's ouster at OpenAI was precipitated by the letter to the board about an AI breakthrough — is not true."

So once again: what exactly is Q*? I found speculation online that it could be an architectural breakthrough on the level of the Transformer. If you don't remember, the Transformer, published by Google researchers in 2017, is the underlying technology that powers all of today's large language models, including OpenAI's GPT-4. But Transformers are limited in a lot of ways, which I'll discuss in a moment. Some say, as mentioned, the Q* breakthrough could be the ability of an AI to create and comprehend mathematical proofs, not just predict the next token in a series of tokens, which is really all a Transformer does today. I've also seen reports that it's the ability for an AI to create its own synthetic dataset to further train itself on, and I've even seen reports that it might be OpenAI integrating AlphaGo-like self-learning techniques into a large language model. Let's take a look at each of these and break them down.

First, let's talk about reasoning, logic, and truly being able to create mathematical proofs. What that requires is an understanding of the proof itself, not just predicting the next word in a sentence — and LLMs today still don't reason very well. If you've watched any of my LLM test videos, you already know that many of them fail logic and reasoning problems, and the ones that do pass don't actually understand why the reasoning is valid.
All they're doing is parroting back what's in their training set. It's like being able to see "2 + 2" and knowing that the next characters are "= 4" without actually knowing why two and two equals four. If you've seen any of my LLM test videos, you know I have a question: "Jane runs faster than Joe. Joe runs faster than Sam. Does Sam run faster than Jane?" The answer is no, because of the transitive property: if A is greater than B and B is greater than C, then C cannot be greater than A. There are mathematical proofs of why this is true, but the models don't understand why; they just read the text and predict that the most likely next characters happen to be the right answer. But what if models were actually able to understand this?

Here's Peter Liu, a research scientist on the Google DeepMind team. What he says is: "Sounds like OpenAI got some good numbers on GSM8K, possibly MATH" — basically solving math problems and actually understanding why the answers are true or false — "speculating, but there is a STaR in Q*: a technique that fine-tunes a model on its own better outputs, which some people see as self-improvement." I'm going to show you that research paper in a second, because it's why he thinks this could be related to Q*. The actual name of the paper is "STaR: Bootstrapping Reasoning With Reasoning." It was published by Google and Stanford University back in May 2022, which might as well be ten lifetimes ago in the world of AI, but it's still an extremely relevant paper today. It talks about how generating step-by-step chain-of-thought rationales improves language model performance on complex reasoning tasks like mathematics or commonsense question answering. (There's already another paper about chain of thought; if you're not familiar with it, chain of thought just means prompting the large language model to reason through intermediate steps instead of jumping straight to the final, more complex solution. As an example, if you give it a difficult math problem, rather than just saying "solve this," you can tell it to use PEMDAS and work through each part of the problem before giving you the final result. What I've found from my LLM tests is that this produces much more accurate answers.)

The abstract continues: inducing language model rationale generation currently requires either constructing massive rationale datasets or sacrificing accuracy by using only few-shot inference. We propose a technique to iteratively leverage a small number of rationale examples and a large dataset without rationales to bootstrap the ability to perform successively more complex reasoning. Basically, they're talking about fine-tuning a model so it knows how to do this step-by-step reasoning. The technique, the Self-Taught Reasoner (STaR), relies on a simple loop: generate rationales to answer many questions, prompted with a few rationale examples; if the generated answers are wrong, try again to generate a rationale given the correct answer; fine-tune on all the rationales that ultimately yielded correct answers; repeat. They show that STaR significantly improves performance on multiple datasets compared to a model fine-tuned to directly predict final answers, and performs comparably to fine-tuning a 30× larger state-of-the-art language model on CommonsenseQA.
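To make that loop concrete, here's a minimal sketch of what a STaR-style iteration might look like in code. To be clear, `generate_rationale`, `is_correct`, and `finetune` are placeholder names I made up for whatever model API and answer checker you have — this is just the shape of the loop from the abstract, not the paper's (or OpenAI's) actual implementation:

```python
# Minimal sketch of a STaR-style self-improvement loop.
# generate_rationale(), is_correct(), and finetune() are hypothetical stand-ins
# for a real model API and answer checker -- this is the shape of the loop
# described in the abstract, not the paper's or OpenAI's actual code.

def star_iteration(model, dataset, few_shot, generate_rationale, is_correct, finetune):
    """One outer iteration of the Self-Taught Reasoner loop."""
    keep = []
    for question, answer in dataset:
        # 1. Ask the model to reason step by step and produce an answer.
        rationale, predicted = generate_rationale(model, question, few_shot)

        if not is_correct(predicted, answer):
            # 2. "Rationalization": retry with the correct answer given as a hint,
            #    so the model explains why that answer is right.
            rationale, predicted = generate_rationale(
                model, question, few_shot, hint=answer
            )

        if is_correct(predicted, answer):
            # 3. Keep only rationales that lead to the correct answer.
            keep.append((question, rationale, answer))

    # 4. Fine-tune on the kept rationales, then repeat the whole loop.
    return finetune(model, keep)
```

The key detail is step 3: the model only ever fine-tunes on reasoning chains that led to a verifiably correct answer, which is what makes the loop self-reinforcing.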
And this makes a lot of sense — it's how the human brain works. When we hear a problem, especially a complex one, we don't automatically jump to the end solution; we break it down into smaller chunks, solve those, and build them up into the bigger solution. There's no better example of this than coding. When someone gives you a project to build, you don't just code the entire thing in one go: you write small pieces, create methods that solve small parts of the bigger problem, and all of those individual pieces build up into the final deliverable.

Check out this figure from the paper. The example question is: "What can be used to carry a small dog?" The answer choices are a swimming pool, a basket, a dog show, a backyard, and own home. The rationale — the step-by-step path to the answer — is: "The answer must be something that can be used to carry a small dog. Baskets are designed to hold things. Therefore, the answer is basket." So we have the language model, the rationale generation, and a correct answer. If the answer is correct, the question, rationale, and answer go into the fine-tuning set for the language model. If the answer is wrong, we give the model a hint (the correct answer), it generates the rationale again, and that question-rationale-answer triple feeds back into fine-tuning the language model. And this paper is called STaR — so it's very telling as to what might be part of the Q* breakthrough.

That wasn't the only paper to talk about intermediate steps of reasoning before reaching a final answer. Here's one from OpenAI, "Let's Verify Step by Step": in recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning; however, even state-of-the-art models still regularly produce logical mistakes. To train more reliable models, we can turn either to outcome supervision, which provides feedback on the final result, or process supervision, which provides feedback on each intermediate reasoning step. This sounds a lot like the STaR paper we just read. Given the importance of training reliable models, and given the high cost of human feedback — and this is an important part to remember: human feedback is extremely inefficient — it is important to carefully compare both methods. (I'll sketch what that outcome-versus-process distinction looks like in code in a moment.) The paper reports that their process-supervised model solves 78% of problems from a representative subset of the MATH test set, and additionally shows that active learning significantly improves the efficacy of process supervision. This paper is only from a few months ago, May 31st, 2023, and it really does sound like part of the Q* breakthrough.

Now, here's an article by Nathan Lambert, a machine learning scientist and researcher with a PhD from Berkeley, so he probably knows just a little bit about this stuff. He wrote a blog post about what Q* could possibly be, and right away we can see the themes: tree-of-thoughts reasoning, process reward models, and supercharging synthetic data. So this is a mixture of a few different possibilities for what Q* could be. So far we've only talked about tree-of-thoughts-style reasoning and process reward models, but supercharging synthetic data is our next topic, and it's incredibly important. He also notes how many people on the internet are scrambling to figure out what this thing is — "such extensive speculation has never unfolded from only the name of a method" — and I think it's so cool to see this huge army of people trying to work it out. Why I'm especially excited is that as we piece together what this thing could be, we can potentially recreate it and implement it in the open-source community.
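Here's the sketch of that outcome-versus-process distinction I promised, as my own hedged illustration — `score_step` stands in for a learned process reward model, which is not something I have access to. The point is simply that an outcome reward only looks at the final answer, while a process reward scores every intermediate step, so a solution that stumbles into the right answer through bad reasoning still gets penalized:

```python
# Toy illustration of outcome supervision vs. process supervision.
# score_step() stands in for a learned process reward model (PRM); this is
# my own sketch of the idea, not code from "Let's Verify Step by Step".

from typing import Callable, List


def outcome_reward(final_answer: str, correct_answer: str) -> float:
    # Feedback only on the end result: 1 if right, 0 if wrong.
    return 1.0 if final_answer.strip() == correct_answer.strip() else 0.0


def process_reward(steps: List[str], score_step: Callable[[str], float]) -> float:
    # Feedback on every intermediate reasoning step; a single bad step
    # drags the whole solution down even if the final answer is correct.
    if not steps:
        return 0.0
    step_scores = [score_step(step) for step in steps]
    return min(step_scores)  # one simple aggregation: the weakest step bounds the reward
```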
The name itself, he writes, is pretty simple in this case: if real, it clearly links two core themes from the RL literature — Q-values and A*, a classic graph search algorithm. Yes, there's an argument that Q could just refer to the value function of the optimal policy, "but this would need to be a fabricated leak for it to be so silly, and OpenAI has pretty much had everything leaked, so fabricating it seems unlikely. My initial hypothesis, which I clearly labeled a tinfoil-hat theory, was a vague merging of Q-learning and A* search" — two existing techniques merged into something potentially innovative. "What I didn't answer is: what is being searched over? My initial guess of searching over dialogue turns is almost certainly wrong for infrastructure reasons I'll touch on later. As I've dug into this in more detail, I've become convinced that they are doing something powerful by searching over language reasoning steps via tree-of-thoughts reasoning" — all the stuff we've already talked about — "but it is a much smaller leap than people believe. The reason for the hyperbole is the goal of linking large language model training and usage to the core components of deep RL that enabled successes like AlphaGo: self-play and look-ahead planning" — two things that, to this day, really haven't been part of large language model technology. (We'll talk about AlphaGo in a moment.)

So now let's talk about those things. Self-play is something AlphaGo showed to be extremely powerful. For those who aren't aware, AlphaGo is machine learning software from the Google DeepMind team that was able to not only beat the best Go players in the world but far outstrip them. Self-play is the idea that an agent can improve its gameplay by playing against slightly different variations of itself, because it will progressively encounter more challenging situations. In the space of LLMs, it is almost certain that the largest portion of self-play will look like AI feedback rather than competitive processes — basically, one AI giving another AI feedback on what it's doing well or poorly. As I mentioned earlier, human feedback is extremely inefficient and expensive; it's slow and limited by human capacity. If one AI could give another AI feedback, the limit on how quickly we can give feedback to a model is blown wide open.

Then there's look-ahead planning: the idea of using a model of the world to reason into the future and produce better actions or outputs. The two variants are based on model predictive control, which is often used with continuous states, and Monte Carlo tree search, which works with discrete actions and states. Right now, large language models cannot look ahead very well — and again I'll reference my LLM test videos. One of the questions I ask is "How many words are in your next response?" and really no model gets this right. It's so simple: just say "one," and the answer is one word, or write a longer answer and count it out. But it never does, because it doesn't have the ability to look forward and actually plan — it's just responding with what it thinks is the next most likely token in a sequence of tokens. Here Lambert references modular reasoning with LLMs and tree-of-thoughts prompting.
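For what it's worth, here's a rough sketch of what "searching over language reasoning steps" could look like: a beam-style tree-of-thoughts expansion where a proposal function suggests candidate next steps and a value function (in the Q* speculation, a process reward model) scores the partial chains. Every function argument here is a placeholder I invented for illustration — this is the speculation made concrete, not anyone's actual implementation:

```python
# Rough sketch of tree-of-thoughts style search over reasoning steps.
# propose_steps() and score_state() are placeholders for an LLM proposal
# prompt and a value / process-reward model -- speculation made concrete,
# not anyone's actual Q* implementation.

import heapq
from typing import Callable, List, Tuple


def tree_of_thoughts_search(
    question: str,
    propose_steps: Callable[[str, List[str]], List[str]],  # suggest candidate next steps
    score_state: Callable[[str, List[str]], float],        # higher = more promising chain
    is_solution: Callable[[List[str]], bool],
    beam_width: int = 3,
    max_depth: int = 5,
) -> List[str]:
    # Each beam entry is (negated score, reasoning steps so far); lower sorts first.
    beam: List[Tuple[float, List[str]]] = [(0.0, [])]
    for _ in range(max_depth):
        candidates: List[Tuple[float, List[str]]] = []
        for _, steps in beam:
            for step in propose_steps(question, steps):
                new_steps = steps + [step]
                if is_solution(new_steps):
                    return new_steps
                candidates.append((-score_state(question, new_steps), new_steps))
        # Keep only the most promising partial reasoning chains.
        beam = heapq.nsmallest(beam_width, candidates)
        if not beam:
            break
    return beam[0][1] if beam else []
```

The design choice worth noticing is that the expensive part isn't generation — it's scoring every partial chain, which is exactly where the rumored "vast computing resources" (and an automated scorer instead of human raters) would matter.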
Now, I don't personally think tree of thoughts is enough to get us to AGI — and of course I'm just an amateur, this is only my guess — because while it lets a model respond to logic and reasoning problems much more effectively, it doesn't actually give the large language model the underlying ability to understand why certain things are true or false. Then Lambert talks about process reward models, which we already saw with OpenAI's "Let's Verify Step by Step" paper, and he references a few papers that are all circling the same topic: "Let's Verify Step by Step" (which we looked at), "Solving math word problems with process- and outcome-based feedback," "Scaling relationship on learning mathematical reasoning with large language models," and "Let's reward step by step." All of these papers are talking about very similar things.

At the end he puts it together — what Q* could be: Q* seems to be using PRMs (process reward models) to score tree-of-thoughts reasoning data, which is then optimized with offline RL (reinforcement learning). This wouldn't look too different from existing RLHF tooling that uses offline algorithms like DPO or ILQL, which don't need to generate from the LLM during training. This last step is where the rumored vast computing resources come in: use AI to label every step with a score instead of humans. Here's where it becomes super interesting: whereas before, each step of the reasoning process might have had to be scored by humans, now we can use AI, which means we can do this at scales that were previously impossible. And now the most important part of the article, something I'll keep coming back to: "As I've written before, AI feedback and constitutional AI are underrepresented in public awareness. Synthetic data represents the shortest path to expanding datasets." Synthetic data — remember that term, we'll talk about it in a moment.

Now let's take a step back. You might be thinking: great, AI can solve basic math problems — why is that so important? Well, it's not just that it can solve basic math problems today; it's what that makes possible in the future. If AI is able to understand mathematical reasoning and actually work through mathematical proofs, then a lot of the world gets upended, because math runs the entire world. Think about a few examples. First, encryption. Encryption is used throughout the internet: the websites you visit, payment and checkout information, banking information, the cryptography that powers cryptocurrency, nuclear secrets, your Telegram messages, your email — everything is encrypted using math. So if artificial intelligence got really, really good at math, it could attack all of these things. Bitcoin could go to zero because its cryptography no longer holds, and nuclear secrets could be out in the open because an AI could break that encryption. But it's not only that: math is the basis for everything. It is the language of the universe — physics, chemistry, encryption, even language is ultimately built on mathematics.

Now here's an X post I want to go over: "Word on the street is that Q* proved P = NP, and the board drama was only a decoy to divert everyone's eyes from 750 OpenAI employees cashing out to buy a 7-year supply of ammo and groceries." He's saying all of this tongue-in-cheek, obviously, but I think it's quite funny. Another user, Mind Monkey, responded with a reply from ChatGPT to the question "Why might a proof that P = NP signal the end of the world?" The response starts: the idea that a proof that P = NP could signal the end of the world is not a widely accepted or scientifically supported notion, but let's look at the reasoning anyway.
P refers to problems that can be solved in polynomial time, while NP refers to problems for which a solution can be verified in polynomial time. Whether or not P equals NP has profound implications for computational complexity and the efficiency of algorithms. If P equals NP, the response continues, there's a breakdown in cryptography, and it could unleash unintended power: some hypothetical scenarios suggest that a proof of P = NP might come with unexpected consequences, like the creation of superintelligent entities or the ability to solve complex problems much faster than anticipated, along with ethical dilemmas, especially if it involves simulating or manipulating complex systems. Now we're starting to get into simulation theory, which I'm actually a big believer in — if all of a sudden we had a way to solve the most complex mathematical proofs, it would start to look like we're in a simulation. The reply finishes: in a practical sense, a proof of P = NP may be unlikely, but cracking a major encryption algorithm could mean that an AI algorithm has bootstrapped itself to a level of mathematical understanding far beyond the best human mathematicians. So cool.

Now, over on Reddit there's a redacted "leaked letter" supposedly from inside OpenAI. We have no proof that it's real, but I'm going to read it anyway because it's so interesting. It's about something called QUALIA, which supposedly has to do with Q*: "QUALIA has demonstrated an ability to statistically significantly improve the way in which it selects its optimal actions (selected policies) in different deep Q-networks, exhibiting metacognition." Metacognition — what does that mean? It means the system understands itself: it understands why it's making certain decisions, and it's not just outputting what it predicts to be the most likely next token, it actually understands why it is doing so. That would be a huge advancement in AI. Right now, large language models are so complex that they're pretty much a black box; how a model gets from prompt A to answer B isn't always known. The letter goes on: via a ciphertext-only attack, it provided a plaintext from a given AES-192 ciphertext (an encrypted piece of text) by using Tau analysis "in a way we do not yet fully understand," and it claimed a full preimage vulnerability for the MD5 cryptographic hash function with a theoretical computational complexity of 2^42. Basically, what all of this technical language is claiming is that after training a model on an encryption algorithm, you could just hand it an encrypted piece of text and it would decrypt it without ever knowing the decryption key. And yes, that would spell disaster for the world.

Now let's look at this post by Yann LeCun, a leading AI researcher who works at Meta and is a big proponent of open source — I'm a big fan of his: "Please ignore the deluge of complete nonsense about Q*. One of the main challenges to improve LLM reliability is to replace autoregressive token prediction with planning." Autoregressive token prediction is what I've been describing: the model looks at a series of tokens and just predicts what the next token might be; it doesn't have a true understanding and it doesn't plan ahead. What he's saying is that for us to really reach AGI, we're going to need large language models that can actually plan. "Pretty much every top lab is working on that, and some have already published ideas and results. It is likely that Q* is OpenAI's attempt at planning; they pretty much hired Noam Brown to work on that."
So he's saying this isn't new — a lot of different research teams have been working on it — but OpenAI may have just achieved it. Here's another post by Yann LeCun: "LLMs produce their answers with a fixed amount of computation per token. There is no way for them to devote more (potentially unlimited) time and effort to solving difficult problems." This is a really interesting point I had never thought about: LLMs predict the next token in a sequence, but they do so with a fixed amount of computation. Imagine if they just had more time to think — if they could sit there for minutes, days, weeks, even years and work out the best possible response. That doesn't happen today, but maybe it could. "This is very much akin to the fast, subconscious human System 1 decision process. True reasoning and planning would allow the system to search for a solution using potentially unlimited time. This iterative inference process is more akin to the deliberate, conscious human System 2" — meaning that when you sit and think through a problem and all the different permutations of outcomes, that is System 2 thinking. "This is what allows humans and many animals to find new solutions to new problems in new situations." Again: everything in large language models today is really a derivative of the training set; they're not coming up with new and novel ideas. "Some AI systems have planning abilities, namely those that play games or control robots. Game-playing AI systems such as AlphaGo, AlphaZero, Libratus (poker), and Cicero (Diplomacy) have planning abilities, but these systems are still fairly limited compared to animals and humans." Here's where he inserts his own beliefs: "To have more general planning abilities, an AI system would need to possess a world model — a subsystem that can predict the consequences of an action sequence: given the state of the world at time t and an imagined action I could take, what would be the set of plausible states of the world at time t+1?" Basically: I have an understanding of the world, and if I perform this action, what are all the possible outcomes of that action? (And again, I start thinking about simulation theory at that point.) "How to build and train such world models is still a largely unsolved problem." So he doesn't think it's out there yet — and this sure does sound like AGI.
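Building and training such a world model is, as he says, unsolved, but the interface he's describing is easy to sketch. Here's a hedged illustration — the `WorldModel` protocol and the scoring function are stand-ins I made up, not anything from his post: imagine each candidate action, ask the model for plausible next states at t+1, and pick the action whose predicted outcomes look best.

```python
# Hedged sketch of the "world model" interface described above: given the state
# at time t and an imagined action, predict plausible states at time t+1, and
# use those predictions to choose an action. WorldModel and score_state are
# illustrative stand-ins, not anyone's actual proposal.

from typing import Callable, List, Protocol


class WorldModel(Protocol):
    def predict(self, state: str, action: str) -> List[str]:
        """Return plausible next states of the world at time t+1."""
        ...


def plan_one_step(
    model: WorldModel,
    state: str,
    candidate_actions: List[str],
    score_state: Callable[[str], float],  # how desirable a predicted state is
) -> str:
    # Imagine each action, look at the outcomes the world model predicts,
    # and pick the action whose worst-case outcome is best.
    def worst_case(action: str) -> float:
        outcomes = model.predict(state, action)
        return min(score_state(s) for s in outcomes) if outcomes else float("-inf")

    return max(candidate_actions, key=worst_case)
```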
We've touched on AlphaGo a bit already, but now let's talk more about self-improvement, which is another prediction of what Q* could be. Self-improvement, as I mentioned before, is the idea that in the world of gameplay a system can play a game against itself over and over, an unlimited number of times, and get better each time — far better than if its training were limited to a human dataset. For example, AlphaGo had a huge number of human games to base its initial training on, but that alone would only ever bring it roughly to the level of the best players in the world. To go beyond that and actually beat them, it needed to train against itself and iterate through the possibilities of Go. Andrej Karpathy released an incredible video — about an hour long — if you want to learn all about large language models; it's a really great breakdown if you're new to the topic. There's an important part about 38 minutes in where he talks about self-improvement. Let's give it a listen.

[Karpathy:] "Is there a way to give it this idea of self-improvement? I think a lot of people are broadly inspired by what happened with AlphaGo. AlphaGo was a Go-playing program developed by DeepMind, and it had two major stages. In the first stage, you learn by imitating human expert players: you take lots of games that were played by humans, filter to the games played by really good humans, and train the neural network to imitate them. This works, and it gives you a pretty good Go-playing program, but it can't surpass humans — it's only as good as the best human that gives you the training data. So DeepMind figured out a way to actually surpass humans, and the way this was done is by self-improvement. In the case of Go, this is a simple, closed, sandboxed environment: you have a game, you can play lots of games in the sandbox, and you have a very simple reward function, which is just winning the game. You can query this reward function, and it tells you whether whatever you did was good or bad — did you win, yes or no. That's something that is very cheap to evaluate and automatic, so you can play millions and millions of games and perfect the system just based on the probability of winning. There's no need to imitate — you can go beyond human — and that's in fact what the system ended up doing. Here on the right we have the Elo rating, and AlphaGo took 40 days, in this case, to overcome some of the best human players by self-improvement.

So I think a lot of people are interested in what the equivalent of this step two is for large language models, because today we're only doing step one: we are imitating humans. There are human labelers writing out these answers, and we're imitating their responses. We can have very good human labelers, but fundamentally it would be hard to go above human response accuracy if we only train on humans. That's the big question: what is the step-two equivalent in the domain of open language modeling? The main challenge is the lack of a reward criterion in the general case. Because we are in the space of language, everything is a lot more open and there are all these different types of tasks, and fundamentally there's no simple reward function you can access that tells you whether whatever you sampled was good or bad — no easy-to-evaluate, fast criterion or reward function. But it is the case that in narrow domains such a reward function could be achievable, so I think it is possible that in narrow domains we'll be able to self-improve language models. It's an open question in the field, and a lot of people are thinking through how you could actually get some kind of self-improvement in the general case."

Okay — so you can see Andrej is talking about all the same things we've been hearing that Q* might already be.
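Here's a toy version of the loop Karpathy describes, just to make the mechanics concrete. The `play_game` and `update_policy` callables are placeholders I invented; the point is only that the reward (win or lose) is cheap and automatic, so no human labels are needed:

```python
# Toy sketch of AlphaGo-style self-play: the only feedback is a cheap,
# automatic win/loss signal, so the agent can play far more games than any
# human dataset contains. play_game() and update_policy() are placeholders.

import copy
from typing import Any, Callable


def self_play_training(
    policy: Any,
    play_game: Callable[[Any, Any], int],      # returns +1 if the policy wins, else -1
    update_policy: Callable[[Any, int], Any],  # nudge the policy toward winning behaviour
    num_games: int = 1_000_000,
    snapshot_every: int = 10_000,
) -> Any:
    opponent = copy.deepcopy(policy)
    for game in range(num_games):
        result = play_game(policy, opponent)   # the reward function: win or lose
        policy = update_policy(policy, result)
        if game % snapshot_every == 0:
            # Periodically refresh the opponent with a copy of ourselves,
            # so the curriculum keeps getting harder.
            opponent = copy.deepcopy(policy)
    return policy
```

The hard part for language, as Karpathy says, is that there's no equivalent of `play_game` returning a clean win/loss signal in the general case — which is exactly why AI feedback and process reward models keep coming up in the Q* speculation.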
This type of self-learning is also mentioned in an article quoting Demis Hassabis, Google DeepMind's CEO, who says its next algorithm will eclipse ChatGPT. Of course he's going to say this — they're the competitor — but it's important to keep in mind that this is the person who essentially invented AlphaGo, so he would know how to integrate those techniques into large language models. Here's a quote from him: "At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models. We also have some new innovations that are going to be pretty interesting." So he's really talking about using some of those AlphaGo techniques to make large language models much better than they already are. And if AI can self-play and self-teach, then the limitations of large language models are bounded only by how much compute you can throw at them.

This whole concept makes a lot of sense if you think about agents. When you query a single large language model — say you ask it for some piece of code — it gives you one answer and one piece of code. But if you have a second agent checking the work, which is what AutoGen (and really any agent framework) does, you suddenly get much better results, because one agent is checking the other agent's work and giving it feedback in real time. We've already seen how much better the results are from this, and it's the same self-improving idea.

Now let's talk about the last possibility for what Q* is. We've already touched on a lot of this, but a large language model's quality and performance is almost entirely determined by its base dataset, and datasets are becoming more and more difficult to come by. I made a video about how Reddit restricted its API for this reason; X shut down its API too. Basically, if you as a company have an incredibly valuable dataset — unique, differentiated, clean — that is insanely valuable, and not many companies have one. OpenAI doesn't: they don't have their own dataset; they go out and purchase datasets from other companies and use open-source datasets. But what if artificial intelligence could create its own synthetic dataset? Then suddenly no one is reliant on the handful of companies that control these enormous unique datasets, like Meta, Google, Reddit, and X.

It goes beyond that. Take a look at this post: "It's pretty obvious that synthetic data will provide the next trillion high-quality training tokens. I bet most serious LLM groups know this. The key question is how to sustain the quality and avoid plateauing too soon. 'The Bitter Lesson' by Richard Sutton continues to guide AI development: there are only two paradigms that scale indefinitely with compute — learning and search. It was true in 2019 at the time of writing, it's true today, and I bet it will hold true to the day we solve AGI." Elon Musk actually replied to this, which I find incredibly interesting: "Yeah, it's a little sad that you can fit the text of every book ever written by humans on one hard drive. Sigh. Synthetic data will exceed that by a zillion." So basically, besides the fact that only a handful of companies control unique, differentiated datasets, even if you combined all of those datasets together it isn't that much data. It might seem like a lot to a human, but to train AGI we're going to need orders of magnitude more data than that, and the way to get there is synthetic data. A lot of people are saying this is what Q* is actually doing: creating its own synthetic data.
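Mechanically, a synthetic-data loop might look something like the sketch below — the model drafts new problems and solutions, a cheap verifier filters them, and the survivors become new training tokens. `propose_example` and `verify` are invented stand-ins, and the open question from the quoted post is exactly whether such a filter can keep quality high enough to avoid plateauing:

```python
# Hedged sketch of a synthetic-data loop: the model writes its own training
# examples and a verifier filters them. propose_example() and verify() are
# invented stand-ins; sustaining quality through the filter is the open question.

from typing import Callable, List, Tuple


def generate_synthetic_dataset(
    propose_example: Callable[[], Tuple[str, str]],  # model drafts a (problem, solution) pair
    verify: Callable[[str, str], bool],              # cheap automatic check, e.g. run the math
    target_size: int,
    max_attempts: int = 1_000_000,
) -> List[Tuple[str, str]]:
    dataset: List[Tuple[str, str]] = []
    attempts = 0
    while len(dataset) < target_size and attempts < max_attempts:
        attempts += 1
        problem, solution = propose_example()
        if verify(problem, solution):        # keep only examples that pass the check
            dataset.append((problem, solution))
    return dataset
```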
Now, I'm not a huge believer in this last one — and of course I'm just an amateur and could be very wrong — but it's hard for me to believe that a model trained on a static dataset is going to come up with genuinely new ideas and new data; I'm just not seeing it. Because of the way Transformers work, every token in the response is really just a derivative of the tokens that were in the training set, so how could it come up with new ideas? And if it were coming up with new ideas, wouldn't that be AGI in itself?

So what does all this mean? We've covered a lot today, and I hope you've enjoyed it, because I find it fascinating. Q* might actually be a combination of all of these things: some new method of logic and reasoning, with the ability to truly understand that logic and reasoning, plus self-training, which removes the need for humans in the loop, plus the creation of synthetic data — and together that might be the prelude to AGI. Whatever it is, it has really scared a lot of people, and the debate between AI doomers and AI accelerationists continues. So what do you think it is? Let me know in the comments. If you liked this video, please consider giving it a like and subscribing, and I'll see you in the next one.
Info
Channel: Matthew Berman
Views: 388,241
Keywords: agi, ai, q*, q *, q star, qstar, what is qstar, openai, sam altman, artificial intelligence, ai breakthrough, chatgpt, altman fired
Id: Z6E41eXStsU
Length: 35min 16sec (2116 seconds)
Published: Tue Nov 28 2023