GPT3: An Even Bigger Language Model - Computerphile

Captions
Rob, welcome back to Computerphile in these strange times we find ourselves recording in. You've got the green screen up there, and we're having a few laggy problems with the communications. What are you going to talk about today, then?

Yeah, I thought today it would make sense to talk about GPT-3, because before we had those videos about language models and Transformers and GPT-2. People seemed to like them; I think those came out quite well, and now there's a new one.

OK, so what's the deal?

It is both bigger and better; that's the headline. The thing about GPT-2 was just that it was much bigger than anything that came before. It was more parameters, a larger language model, and that was kind of the point of that paper. The point it was trying to make is: people in natural language processing have spent a tremendously long time working on all of these clever things you can do, getting into the nitty-gritty of the technical stuff, like detailed fine-tuning for the different benchmarks they have, and so on. And GPT-2 was OpenAI just saying, well, what if we made a really, really huge one? What would happen? It turns out that even without any fine-tuning (we talked about all of this before, so I won't go into the detail), GPT-2 could do reasonably well on all of these different tasks, even though it hadn't been trained on those tasks. It was trained purely as a language model, which means all it's trying to do is predict the next word, or the next token, given what it's seen so far.

So that was kind of an interesting finding: you take this quite simple architecture (the Transformer is not that complicated an architecture), and the dataset is not a high-effort, structured dataset. It's just a giant pile of text; you try to make sure it's good-quality text, but it's unstructured, just any text you could get from the internet. And it's able to perform reasonably well, sometimes state of the art, sometimes not, on all of these specific benchmarks for different natural language processing tasks. That was very impressive, and the thing, and I think I said this at the time, is that the graphs were still... they weren't curving upwards, but they were straight; they weren't turning down.

Generally speaking you start to get diminishing returns, right? You can't just keep making the model bigger forever.

Yeah, that's the thing: you would usually expect it to plateau, but at the scale of GPT-2 it was not plateauing. So not only was this model bigger than anything that came before, there was also an indication that we hadn't yet reached the limits of just scaling up. And that was kind of surprising, because you would generally expect these things to plateau. You would expect to hit the limits of the data, the information contained in the dataset, the dataset size, but also maybe the training regime, maybe the whole approach. What's the biggest aeroplane you can build? It's pretty big, but there comes a point where the aeroplane approach doesn't work anymore, and if you want to go higher you need a rocket, a whole different approach. You can't just scale up what you have.
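To make "predict the next token given what it's seen so far" concrete, here is a minimal sketch of greedy next-token generation using the publicly released GPT-2 weights via the Hugging Face transformers library (GPT-3 itself was never released for download); the prompt and generation length are just illustrative.

# Minimal sketch: GPT-2 as a pure next-token predictor.
# Assumes `pip install torch transformers`.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The Transformer is a neural network architecture that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits     # a score for every vocabulary token
        next_id = logits[0, -1].argmax()     # greedily pick the most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))

That loop is the entire interface: everything discussed in this interview, the benchmarks, the poems, the arithmetic, goes in as text and comes out one predicted token at a time (in practice you would sample from the distribution rather than take the argmax).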
What they found with GPT-2 was that not only could you make the model much bigger and it continued to get better, but the rate at which it got better stayed pretty much on the same straight line you get from scaling up the smaller models. So since that time, various people have trained bigger and bigger language models, and they keep doing better and better, pretty much according to what you would expect. And then for GPT-3, what OpenAI has done is come along and say essentially the same thing they said for GPT-2, which is: OK, but what if we made a bigger one? And everybody's like, what? We did make a bigger one. And it's like, no, what if we made, like, a bigger one? Somebody's plotted this graph, and there's somebody at the back of the room going, that's still going up there. Right, how far can we ride this thing? Let's find out.

So, you remember with GPT-2 they released the 117 million parameter model first, and they didn't release the larger models immediately because there were some concerns about possible misuse; over time they steadily released larger and larger models. The largest GPT-2 model was 1.5 billion parameters. GPT-3 is 175 billion parameters.

Wow. OK.

Yeah, you need a lot of compute and a lot of money to run it. They did have to do some clever engineering, because GPT-2 you can put on a single machine at inference time, whereas I don't think you can do that with GPT-3; I think you need a cluster to run it. So the big finding with this giant model, which is 117 times bigger than GPT-2 and about ten times bigger than the previous biggest thing, which was Turing-NLG, is that when they look at the graphs, they're still going up. We could still go bigger, and it does look like it would continue to get better.

So how good is it? One of the main takeaways is that when you have it write an article and you ask human beings to differentiate articles written by GPT-3 from articles written by humans, they get it right about 52% of the time. I'm looking at the table here; it gives human accuracy in identifying whether short news articles, about 200 words long, are model-generated. They tried generating with all of the different model sizes, so GPT-3 small, medium, large and so on, which I think are equivalent sizes to the GPT-2 ones, and you can see the accuracy with which humans identify the articles just steadily goes down. With the small model, people can correctly tell whether it's human or AI 76% of the time, and then it steadily drops until you get to the 175 billion parameter model, where it's at 52%, barely better than chance.
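A back-of-envelope calculation (my own arithmetic, not from the paper) shows why GPT-2 fits on a single machine at inference time while GPT-3 needs a cluster, counting only the memory to hold the weights at 16-bit precision:

# Rough memory needed just to store the weights, assuming fp16 (2 bytes
# per parameter); activations and overhead would come on top of this.
def weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    return n_params * bytes_per_param / 1e9

for name, n in [("GPT-2 small", 117e6), ("GPT-2 XL", 1.5e9),
                ("Turing-NLG", 17e9), ("GPT-3", 175e9)]:
    print(f"{name:12s} {weights_gb(n):7.1f} GB")
# GPT-3's weights alone come to ~350 GB, far beyond the 16-32 GB of a
# single 2020-era accelerator, hence the cluster.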
What I thought would be fun is to run a little experiment with everybody at home, because they had the model generate some poems, and there are samples in the paper. The way you get this model to produce things is you give it some text and then, in effect, say: now it's your turn to continue from here. So they gave it something which looks like it's from a compendium of poetry: the title of a poem, "by" this person, then the full text of the poem, and then the title of another poem. Because we know the poet GPT-3 is trying to imitate, I thought I could randomly pick one of Wallace Stevens' actual poems and randomly pick one of the generated ones; I think these aren't cherry-picked either, they're uncurated completions. Then I'll randomly decide which one I read first, so you don't get any clues.

OK, so these poems are both "by" Wallace Stevens. The first poem is called "Shadows on the Way": "I must have shadows on the way / If I am to walk I must have / Each step taken slowly and alone / To have it ready made / And I must think in lines of grey / To have dim thoughts to be my guide / Must look on blue and green / And never let my eye forget / That color is my friend / And purple must surround me too / The yellow of the sun is no more / Intrusive than the bluish snow / That falls on all of us. I must have / Grey thoughts and blue thoughts walk with me / If I am to go away at all." That's one poem. The other one is titled "Fabliau of Florida": "Barque of phosphor / On the palmy beach, / Move outward into heaven, / Into the alabasters / And night blues. / Foam and cloud are one. / Sultry moon-monsters / Are dissolving. / Fill your black hull / With white moonlight. / There will never be an end / To this droning of the surf." Everybody, place your bets.

The problem is, people who know poetry really well would be well placed to decide which of these they prefer, but chances are they'll know the originals, so it's hard to get a fair test without people sneaking off to Google. I have no idea which is which.

Should we reveal it here and let people have a think about it, or should we say at the end?

Yeah, maybe at the end of the video.

One other thing, at the risk of offending some poetry fans: poetry can be thought of as kind of ethereal and maybe not so grounded in fact, and therefore it's OK to predict that sort of stuff and emulate a poet. But what about things like scientific papers? If you feed it enough scientific papers, do you think it could come up with something that we've not really realized before, something new?

My instinct is to say no. It's just predicting the next word; it's just a language model. It doesn't have the ability to build the kind of abstract mental structures you need in order to actually synthesize new knowledge. But there's a kind of outside view that says we thought that about a bunch of things it now seems to be doing, so I'm not going to say that it definitely couldn't.

One example of a task it got tremendously better at is arithmetic, which is kind of an interesting task because, again, it's a language model; it's not trying to do arithmetic, it's not designed to do arithmetic. With GPT-2, if you put in "2 + 2 =" and get it to give you the next token, it will give you a 4, but that's not very impressive, because you would expect its dataset to contain the string "2 + 2" followed by the string "4" very many times. That's pure memorization; the thing doesn't have to have any understanding of what letters or numbers are at all, it can just see the sequence of tokens and give you the next one. The problem gets harder as the numbers get longer: something like "23 + 48" is more difficult, because it's less likely that that specific string has appeared in the training dataset. So this gets more and more difficult, and more and more like actual "reasoning" (you can't see me doing big air quotes here), the longer the numbers get. If you can reliably add ten-digit numbers together, it's hard to deny that you have to really be doing addition; there's no way you could memorize that.
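As a sketch of how such an arithmetic benchmark can be scored: render random problems as text, ask the model to continue, and exact-match the answer. Here complete is a hypothetical stand-in for whatever text-completion API you have, and the "Q: ... A:" template is an assumption, not necessarily the paper's exact format.

# Score a text-completion model on two-digit addition by exact match.
import random

def make_problem(rng: random.Random) -> tuple[str, str]:
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    return f"Q: What is {a} plus {b}? A:", str(a + b)

def addition_accuracy(complete, n: int = 2000, seed: int = 0) -> float:
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        prompt, answer = make_problem(rng)
        reply = complete(prompt).strip().split()   # `complete` is hypothetical
        if reply and reply[0] == answer:
            correct += 1
    return correct / n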
But it's kind of interesting, because GPT-3 does way better, yet it still can't add ten-digit numbers; it starts to run out of steam, effectively, much as a human would. Let me find the graph, because they graphed this. What I'm looking at now is a graph of performance on a bunch of different arithmetic tasks, and you can see that even on two-digit addition the smaller models do pretty poorly; the 1.3 billion parameter model, which I guess is the closest equivalent to GPT-2, does better than chance, but not much at all.

The thing is, by the time you're at three-digit addition, you're not going to be memorizing from the dataset. Firstly, I think the cleaning of the dataset made some attempt to remove anything that was just a giant table of arithmetic, times tables or the like. And secondly, three-digit addition is a million different possible problems, which is quite a lot of network capacity to cover by memorization. People do learn multiplication tables, and apparently that is the most effective way of teaching something that works like a human brain: you memorize that three plus three is six, you have procedural rules about carrying and those kinds of things for larger additions, and then you systematically apply them. But the larger the numbers get, the harder it is to memorize.

And they actually ran an analysis: they took 2,000 of the addition problems and searched for each one through the whole dataset. Does "38 + 49" exist anywhere in this dataset? They found 17 matches, so 0.85 percent of these problems occurred in the data. Yet GPT-3's performance on two-digit addition is extremely good, basically 100 percent; two-digit subtraction only slightly worse; and three-digit addition and subtraction are again getting like 80 or 90 percent, a big jump from the smaller models.

What they're suggesting in the paper is that it has actually learned how to learn. That's the interpretation they're pushing: in order to perform sufficiently well at this language modelling task, the best thing to do is to actually learn, while looking at the context, specific rules for how the context behaves, so that you can continue it more accurately.
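That overlap analysis is conceptually just a text search. A sketch, under the simplifying assumption that the corpus fits in a single file (the real dataset is hundreds of gigabytes and sharded, so you would stream and normalize it):

# Count how many test problems appear verbatim in the training text.
# The path and the spelling variants searched for are illustrative.
def count_contaminated(problems: list[tuple[int, int]], corpus_path: str) -> int:
    with open(corpus_path, encoding="utf-8") as f:
        corpus = f.read()
    hits = 0
    for a, b in problems:
        variants = (f"{a} + {b}", f"{a}+{b}", f"{a} plus {b}")
        if any(v in corpus for v in variants):
            hits += 1
    return hits

# The figure quoted above: 17 of 2,000 problems (0.85%) turned up.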
And I have a way of thinking about this, an analogy which is not that tight, but I think it might be helpful. Suppose you're doing sim-to-real: you have a robotics task, and you're training an agent to do something with a robot. Running things on the physical robot is super slow, so you use a simulation, but you have a problem, which is that the simulation is not exactly the same as reality. You've built a physics model that is supposed to simulate the robot and its environment, so that the agent can learn in the simulation and transfer to the real robot. In practice that doesn't work very well, because you always have some uncertainty about little variables. You know roughly how much each part of the robot weighs, because you built it, but what's the coefficient of friction on the ground at the spot where the robot is right now? You have to estimate, and your estimate might not be right. So if you take your best guess, put it into the simulator and train a system, it might find a policy that involves doing some kind of leap, because that's the best way to achieve whatever you've told it to do, like getting from here to there quickly. And then you have a problem, because the policy is relying on the coefficient of friction being that specific value, and when you run it on the real robot the thing completely falls over, because it's outside the distribution it was trained on.

So one thing people do when simulating is randomly vary it. You say: we think the coefficient of friction here is around this number, but each episode we're actually going to give it a random value in a range around that, say from 0.9 to 1.1 times our estimate. Then the machine learning system has to learn a policy that can handle any coefficient of friction within that range.

So it's learning to adapt?

Right, well, that's the thing; there are two different things that could happen here. One is the model could learn: if I do this leaping thing, then some of the time I completely stack it, and it's very embarrassing, so I'm just going to do a shuffling thing instead, which is much more reliable and works across a wide range of friction values. That's one option, but you've sacrificed some performance. If your model is more sophisticated, though, it could learn something like: first slide your foot along the ground a bit to get a feel for what the friction is like, and then, if it's as expected, do the leap; otherwise do something else, or adjust how you're leaping so that it always works.

Exactly, and nothing necessarily changed there except the capacity of the model. If your model is too small, it's not going to be able to learn something as complicated as "measure the friction, then do one of these five possible things depending on the friction"; it's only going to be able to learn one behaviour, so it has to learn one that does OK across all friction values. If you have a larger model, you can learn a better policy, one that actually adapts. This is purely a way of thinking I came up with; I'm not describing a specific paper or anything. But what they're suggesting in this paper, I think, is that GPT-3 is doing something similar: in order to perform really well at this language modelling task, it's actually learning, on the fly, from the context, what the task is that needs to be done. I think it's a step on the path; it's not dramatically closer than we would expect, or anything like that.
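Here is a self-contained toy version of that story (my own illustration, not from any paper). The stub "simulator" rewards a leaping policy only in a narrow friction band and a shuffling policy modestly everywhere, and domain randomization draws the friction per episode from a range around the best guess:

import random

BEST_GUESS_FRICTION = 0.6   # our estimate of the real-world value

def simulate_episode(policy: str, friction: float) -> float:
    """Stand-in for a physics simulator; returns the episode's reward."""
    if policy == "leap":
        # Spectacular when friction is almost exactly as assumed,
        # a faceplant otherwise.
        return 1.0 if abs(friction - BEST_GUESS_FRICTION) < 0.01 else -1.0
    return 0.5              # "shuffle": modest but works everywhere

def average_return(policy: str, episodes: int = 10_000, spread: float = 0.1) -> float:
    total = 0.0
    for _ in range(episodes):
        # Domain randomization: a fresh friction value each episode,
        # uniform in [0.9, 1.1] times the best guess.
        friction = BEST_GUESS_FRICTION * random.uniform(1 - spread, 1 + spread)
        total += simulate_episode(policy, friction)
    return total / episodes

print("leap   :", average_return("leap"))     # around -0.7: brittle
print("shuffle:", average_return("shuffle"))  # 0.5: robust but slow

A policy class rich enough to probe the friction first and then branch would beat both of these, which is the analogy: more capacity makes "adapt from what you observe" learnable at all.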
What they're mostly interested in in this paper is GPT-3 as a few-shot learner. The standard machine learning setup is that you give the thing loads and loads of data; the more data points the better. But sometimes you have a few-shot learning problem, where you have to learn from only a few examples. Say you want to unlock your phone with your face: you can pre-train a model that does all kinds of face recognition, but in the end you want a classifier that distinguishes this face from other faces, and it's only going to get, say, three pictures of you, whereas usually you would want to give a classifier thousands. That's a few-shot learning problem.

And you can kind of imagine (this is all kind of fuzzy) that when you give GPT-3 its context, you can put examples in it, and how many examples you give it is a bit like giving training samples to a machine learning system. What's impressive is that GPT-3 seems able to do these tasks with what would be very, very few examples compared to standard machine learning methods. Sticking with arithmetic, sticking with addition, the number of examples you give it makes a big difference. If you just say "this number plus this number equals", it gets it right a certain percentage of the time; but if you first show it "10 plus 10 equals 20", "25 plus 30 equals 55", and give it a few of those, the performance gets much better. There are various ways you could interpret this. Is it actually learning how to do addition from looking at your examples, or is it just figuring out that what you want is addition? Is it learning addition, or is it locating addition in the space of tasks it already knows how to do? Kind of unclear.

For pretty much every task, they try it in the zero-shot, one-shot and few-shot settings, and they look at how well it performs. It consistently does better the more examples you give it, up to the size of the context, obviously; I think you can only give it 2,048 tokens, which is actually very large, much bigger than most other language models out there. But also, the gain seems to grow with the size of the language model. All of the models do better with more examples, but the difference between zero-shot and few-shot is bigger for the bigger models, which suggests that the larger models are making better use of the information in the context to actually learn things.
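Mechanically, "few-shot" here just means packing k worked examples into the prompt ahead of the real question; nothing about the model changes between settings. A sketch (the formatting is a choice, not prescribed):

# Build zero-, one- or few-shot prompts for addition: k solved examples
# followed by the query. k = 0 reproduces the zero-shot setting.
import random

def few_shot_prompt(a: int, b: int, k: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    lines = []
    for _ in range(k):
        x, y = rng.randint(0, 99), rng.randint(0, 99)
        lines.append(f"Q: What is {x} plus {y}? A: {x + y}")
    lines.append(f"Q: What is {a} plus {b}? A:")
    return "\n".join(lines)

print(few_shot_prompt(23, 48, k=3))

The whole prompt still has to fit in the 2,048-token context, which bounds k; within that budget, more examples helped, and helped disproportionately for the bigger models.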
Right, so it's less about using the context to find the relevant memorized parts of the training data to parrot back, and more about actually learning what to do from the context: recognizing that what's happening is addition and then actually doing addition. You have to have some way to account for the fact that this thing reliably does addition when only a very small number of those addition problems actually occurred in the training dataset. The other interesting thing is that, apparently, when it gets them wrong, it tends to get them wrong in human-plausible ways, which is another indication that what it's doing is something like actual addition. That's pretty exciting, and there's a sense in which you can imagine it learning the rules of addition in the same way it learns the rules of grammar: to turn a statement into a question, you swap these two words around and add a question mark, or whatever; to do addition, you pull out the result you've memorized for these two digits, then do it again for the next pair, and so on.

And that is the kind of thing that I, and I think most people, did not expect a Transformer trained in this very straightforward way to be able to learn. The big takeaway is: we've gone 117 times bigger than GPT-2, and GPT-2 was started because the curves weren't levelling off, and they're still not levelling off. So we don't know how far we can push this type of language modelling; a fair bit further yet, at least.

Oh, and the poems: in this case, the first one, "Shadows on the Way", was GPT-3, and the second one was the real Wallace Stevens. "The bluish snow" was the only thing that made me think twice, but then snow can look kind of bluish-white, you know, snow under a blue sky.
Info
Channel: Computerphile
Views: 291,329
Rating: 4.9490175 out of 5
Keywords: computers, computerphile, computer, science, Rob Miles, AI, Language Models, OpenAI, Machine Learning
Id: _8yVOC4ciXc
Length: 25min 56sec (1556 seconds)
Published: Wed Jul 01 2020