Answer complex questions from an arbitrarily large set of documents with vector search and GPT-3

Captions
I'm back. OK, so I know I said I would take two to four months off, but apparently something has changed in my brain and I'm not going to take that long. I've continued reading Braintrust, and as always happens when I read cognition or neuroscience, I'm inspired to do more work. So I want to bring you up to speed on what I worked on last night. I didn't share it right away because it touches on a politically sensitive topic; I did check OpenAI's guidelines for content sharing and publication, and what you're not supposed to share is anything that's part of a political campaign, so I think this is okay.

Basically, I took the Supreme Court opinion in Dobbs v. Jackson, which is better known as the decision overturning Roe v. Wade. It's crazy long, 454,000 characters, and I ran it through my recursive summarizer. After four iterations I got this: "The Supreme Court has overturned Roe v. Wade, which means that states are now able to ban abortion. This will have a particularly hard impact on low-income women, who will not be able to afford to travel to states where abortion is still legal. Many will be forced to turn to illegal and unsafe abortions, which could lead to their death." That was the end result of recursively summarizing the document, and it is a very impactful summary.

It occurred to me, though, that recursively summarizing something from an arbitrary length down to something super concise is great, but you lose a lot of resolution. And there is a huge need for answering questions from arbitrary volumes of data; that is an unsolved, non-trivial problem. What do I mean by answering questions from arbitrary data sources or an arbitrary number of documents? Whether you're a business or you're building artificial cognitive entities or chatbot assistants, you're going to have a huge amount of data to filter through.

Before we get started, I want to plug the Discord server I just started; the join link will be in the comments. It's a really smart bunch of people doing really great stuff, so if you want to join my research Discord, feel free to jump in. Make sure you check out the rules first; there are only four: keep it cool, be kind, discussion not debate, agree to disagree, and beliefs and evidence. Other than that pretty much anything goes. We want it to be chill, friendly, and productive; it's not a place to prove that you're right or that other people are wrong, and not a place to show off. We're here to make the biggest difference possible for the world.

OK, now that the plug is out of the way: multi-document answering. OpenAI originally had an Answers endpoint that could take an arbitrary number of documents, search for the right one, and give you an answer. They deprecated it because nobody used it, but it occurred to me that maybe there's something here. Say you've got a domestic robot and you want it to keep track of things, like "hey, what did I tell you a year ago?" Or you have a business assistant that you want to be able to have intuitive discussions with. Managing large amounts of knowledge and memories is going to be critical for that. And if you just summarize an arbitrarily large amount of data, fine, but you lose a lot of information and you can't interact with it.
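A minimal sketch of the recursive summarization loop described above, assuming the pre-1.0 openai Python client and a Davinci instruct model; the file names, prompt wording, and length thresholds are illustrative assumptions rather than the exact code from the repo:

```python
import textwrap

import openai

openai.api_key = open('openaiapikey.txt').read().strip()  # assumed key file


def summarize(text, engine='text-davinci-002'):
    # One summarization pass over a single chunk of text.
    prompt = 'Write a concise summary of the following:\n\n%s\n\nCONCISE SUMMARY:' % text
    response = openai.Completion.create(
        engine=engine, prompt=prompt, temperature=0.0, max_tokens=500)
    return response['choices'][0]['text'].strip()


def recursively_summarize(text, target_chars=2000, chunk_chars=3000):
    # Chunk, summarize each chunk, stitch the summaries back together,
    # and repeat until the whole thing fits under the target length.
    while len(text) > target_chars:
        chunks = textwrap.wrap(text, chunk_chars)
        text = '\n\n'.join(summarize(chunk) for chunk in chunks)
    return text


if __name__ == '__main__':
    opinion = open('input.txt', encoding='utf-8').read()
    print(recursively_summarize(opinion))
```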
So this is going to be one of my pair-programming sessions; you always tell me you love watching me take an initial stab at something, so let me show you where we're starting. I'm going to borrow some code from my recursive summarizer, and also some code from my ACOG experiment, because one of the things I built there is a function that lets you stack memories and also embed memories. Both of those are publicly available under the MIT license, so you're welcome to play along if you want.

Without further ado, let's go to multi-document answering. I've already borrowed the code to recursively summarize something, and because we've got a good example to start with, I'll just grab that same document. It's 454,000 characters long and it's dense; it has a lot of information. It's a Supreme Court opinion, and to let people better engage with political discourse, or any other information problem, wouldn't it be great to have a really powerful chatbot that could answer questions, summarize things, and tell you what it's all about? That's what I'm going to try to do. Obviously this is a non-trivial problem and I don't expect to finish it today, but we'll see how far we get.

First things first: we're not going to just summarize everything, so we'll throw some of this out. Let's rename the document to input.txt. So alltext = open_file('input.txt'), and chunks = textwrap.wrap(...), which breaks it up into chunks of 3,000 characters. My intuition is that we should make an index, and rather than an inverted index like you'd store in a database, we're going to make a vector-based index.

Let me show you the ACOG experiment. What I did there is get embeddings from OpenAI; that one I did with Ada. We could do Babbage or something, but just for search I think Ada is probably fine, so we'll copy that gpt3_embedding function. We'll also grab the similarity function, which is just simple numpy: you take the dot product of two vectors, and the dot product tells you how similar they are. Then there's the search_index function I wrote to actually search the index; we'll copy that as well. You give it a text you're trying to match and pass it the index. In the ACOG code the index is built by update_index, which lists all the files, so the index is just a list of dictionaries, each with a file name and a vector. That's it.
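Those two borrowed helpers look roughly like this sketch; the embeddings engine name is an assumption (an Ada-family model), and the ASCII scrub shown here is the cleanup step discussed next:

```python
import numpy as np
import openai


def gpt3_embedding(content, engine='text-similarity-ada-001'):
    # Strip characters the API can choke on, then fetch the embedding vector.
    content = content.encode(encoding='ASCII', errors='ignore').decode()
    response = openai.Embedding.create(input=content, engine=engine)
    return response['data'][0]['embedding']


def similarity(v1, v2):
    # Dot product of two embedding vectors: larger means more semantically similar.
    return np.dot(v1, v2)
```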
We'll do it a little differently here. I'll keep the whole thing in memory, and instead of a file name each entry will hold the chunk of text itself. Actually, it should still live in a file so it can be saved. So the first thing we have to do is build our index. Let me copy the update_index function and rename it to build index; I originally wrote it as update_index because it was for an artificial cognitive entity, which has to index its memories as they're accumulated, and that's an even more complicated problem. Still following along? Good.

So we've got the chunks from textwrap. We don't need to run anything through a prompt; all we need is the embedding. If I remember correctly you can request multiple embeddings at once, but looking at the docs I'm not seeing anything that does that, so I'm not going to worry about it right now; we'll do one embedding at a time. And I don't think we'll need search in this script, so I can take that out.

So: for chunk in chunks, embedding = gpt3_embedding(chunk.encode(...)). Let me explain that bit. I found that sometimes there are Unicode characters GPT-3 cannot handle and it errors out, so what I started doing is adding a little bit of code that encodes the text from Unicode to ASCII, which is simpler, and then decodes it back into a regular string. That seems to prevent any GPT-3 problems. We don't need a separate summary, because an embedding essentially is a type of summary; it's just a numerical one.

In the ACOG version, each index entry points to a file name, because with an artificial cognitive entity all the memories and experiences are just a list of log files, and they could be multimodal: audio, video, text, whatever other sensory or output information it's got. The point is it all accumulates there, and each item can be represented as a vector, which is a way of representing its semantic meaning. Here, instead, we'll say content equals the chunk, the bit of text, and vector equals the embedding. I prefer the word "vector" because it's two syllables and easier to say; "embedding" is too slow. Maybe that's just me being weird, but my brain prefers "vector".
Then we'll save this as a JSON file so it's readable; I think that's the way to go. Make sure json is imported. I always have to look this up: with open('index.json', ...) as outfile, and is it json.dumps or json.dump? Let me find the last place I saved JSON... right, json.dumps dumps to a string (that one was a Flask response), and json.dump(data, outfile, indent=...) writes to a file. I almost remembered it, not quite. We'll open the file in plain write mode, not binary, we don't need the separators argument or the explicit UTF-8 encoding, and we'll use indent=2.

So: we open the text, get the embedding for each chunk, and append to the result, which will be a list of dictionaries with some text and a vector. That will be our database. We don't even need a separate build-index function anymore because we're building it right here, so let me delete that, delete search_index, delete similarity since we don't use it in this script, delete the completion function and save_file, and clean up the unused imports; I think all we use now is textwrap and json. We'll keep this under multi-document answering and just call it the build-index script.

This will generate a JSON file containing a list of chunks of text, each with an embedding. The chunks were 3,000 characters; we can probably go a little longer. Let's do 4,000 characters, because it's roughly three or four characters per token, so that's roughly 1,000 tokens, about a quarter of the context window. I wonder if we could go longer, but no, this is fine, because we still need room to work around each chunk in the prompt. This will be our knowledge base.

Let me make sure it works: cd into multi-document answering, python build_index... did I get it right the first time? No: save_gpt3_log is not defined. That's what I did wrong; I'm not going to worry about saving every single log here, a normal list is fine. Let's try again. Oh, and I don't have any output, do I? I need some kind of output so I can see what it's doing, otherwise I'm going to be confused.
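Putting that together, the build-index script ends up roughly like the sketch below, reusing gpt3_embedding from the earlier sketch; the file names and the sanity-check print mirror what's described here, but this is an approximation rather than the exact repo code:

```python
import json
import textwrap


def open_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as infile:
        return infile.read()


if __name__ == '__main__':
    alltext = open_file('input.txt')
    chunks = textwrap.wrap(alltext, 4000)   # roughly 1,000 tokens per chunk
    result = []
    for chunk in chunks:
        # Scrub non-ASCII characters before storing or embedding the chunk.
        chunk = chunk.encode(encoding='ASCII', errors='ignore').decode()
        embedding = gpt3_embedding(chunk)
        info = {'content': chunk, 'vector': embedding}
        print(info, '\n\n\n')               # sanity-check output while it runs
        result.append(info)
    with open('index.json', 'w') as outfile:
        json.dump(result, outfile, indent=2)
```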
What I often do for a sanity check is assign the record to a variable and print it, followed by a few newlines, so I can see what it's actually generating. Sometimes your brain just isn't working with you that day and the variable isn't what you thought it was, but in this case it's relatively straightforward.

OK, python build_index... wow, that's fast; the embeddings endpoint is quick. I bet it's because I'm using the Ada model. Is it already done? index.json: we've taken an input of 440 kilobytes and turned it into more than three and a half megabytes. And there we go, it worked. Let me zoom out a little, because you don't need to see it in detail. You've got content, which is just the chunk of text, and then a semantic vector, which is the mathematical, numerical representation of its meaning. It's a pair: the human-readable text and the machine-readable representation. And then we've got a whole bunch of those. This file could be compressed, but I set indent=2, which makes it more human friendly: every level of nesting gets two more spaces, so you can clearly see the root list, then each dictionary, then the nested list.

So we've got our index, and that was much faster than I thought it would be. Typically when I run these loops I have a few minutes to gather my thoughts, so I'm going to pause the video for a second while I mentally plan the next step.

OK, we're back. As with all things, data prep is the biggest problem, so we're closer to being done than you might have guessed. I've started on the next part: I wrote a quick bit of code to open the index we just created, and then we'll do an infinite loop where we just ask questions. This is based on the artificial cognitive entity work, where you're basically searching for a particular set of memories; the same paradigm should work anywhere. What we're going to do is take whatever our question is, get a vector for it, and then match whichever parts of the index are closest.
Actually, let's make it a separate function; I'll copy the search function from the ACOG code because it's pretty close to what we need. So: results = search_index(query, ...), with a count of 10 to start, the top 10. We don't have timestamps, so get rid of that. vector = gpt3_embedding(text), where the text is the query. Then for i in data (I'll rename nexus_index to data): the ACOG version skips entries whose vector is identical to the query, but we don't need to worry about that because we're not dealing with sequential memories here; this isn't a robot. score = similarity between our query vector and the entry's vector, and since we don't have file names anymore, change 'filename' to 'content'.

So basically we create a similarity score for every entry, and then build an ordered list: once we have the score we don't care about the vector anymore, we just keep content and score, sorted by whichever is closest. We can safely assume our database has more than 10 entries.

Why 10? If we have 10 chunks that are 4,000 characters long, that's 40,000 characters, which is roughly 10,000 tokens. So even after searching, we can end up with a larger corpus than we can feed into GPT-3 in one go, and we have to solve how to handle that. With that in mind, the search gives us a much-compressed result set: the handful of chunks that could answer the question. Actually, let's make this a little more challenging and do 20.
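The adapted search function comes out roughly like this sketch; it assumes the index is the list of content/vector dictionaries built above:

```python
def search_index(text, data, count=20):
    # Embed the query, score it against every chunk in the index,
    # and return the top-scoring chunks.
    vector = gpt3_embedding(text)
    scores = []
    for i in data:
        score = similarity(vector, i['vector'])
        scores.append({'content': i['content'], 'score': score})
    ordered = sorted(scores, key=lambda d: d['score'], reverse=True)
    return ordered[0:count]
```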
So we'll have the top 20 chunks of text that should answer our question. The longer and more specific your question is, the better the search is going to be. So results = search_index(query, data), where data is our master index: the entirety of the Supreme Court opinion, broken into chunks paired with vectors.

With that, we then have to actually answer questions, and this is where we get into prompt engineering. Let me go to the Playground and grab an arbitrary chunk of text. First attempt: "Answer the question from the passage"... no, "Use the following passage to answer the question," with the question first so it knows what the question is, then the passage, then the answer. Now I need a test question this passage partially answers. Skimming it: "two cases arrived," "balance" in scare quotes, "the majority," "moderation is a foreign concept," "the majority would allow the states to ban abortion because it does not think forced childbirth" implicates equality and freedom. OK, so the question will be: "Why did the courts decide to allow states to ban abortion?"

Let's see how it answers, just right off the cuff. Remember these are 4,000-character chunks, so the prompt is right at 1,000 tokens; that's about 8 cents a call, could be cheaper or more expensive. And the answer: "According to the passage, the courts decided to allow states to ban abortion because they believe a woman's freedom and equality are not involved in the decision to bear a child." Ouch.

Because that's so concise, let's change the prompt to "Use the following passage to give a detailed answer to the question." Here's the thing: we're basically going to answer this question several times and then consolidate it down, so we want detail at this stage. Let's see if this is any better or different... oh, this is good. Wow, yikes, shots fired. I like this better, so we'll use it as our prompt. Again, the idea here is not to summarize as concisely as possible; the idea is to extract information from a much larger document and pare it down over a few passes. We're going to take the top 20 chunks, and this answer was about 200 tokens, so 20 times 200 is 4,000 tokens, still a full context's worth, and that's assuming none of the answers come back longer.

So this will be our question-answering prompt: question, passage, detailed answer, saved as the answer prompt file. This is our first tier of answering: for all 20 of those top results we'll ask the same question, then accumulate the answers and summarize them together, merging them into a single thing.
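Saved as prompt_answer.txt, the question-answering prompt might look like the following; the exact wording and the <<QUERY>>/<<PASSAGE>> placeholder tokens are assumptions rather than the literal file:

```
Use the following passage to give a detailed answer to the question.

QUESTION: <<QUERY>>

PASSAGE: <<PASSAGE>>

DETAILED ANSWER:
```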
Here's how we'll do that: we'll borrow the recursive summarization code I've written before. Let's open the recursive summarizer. We'll need its prompt too ("Write a concise summary..."), and we'll need the gpt3_completion function, so let's grab that as well.

So, with those search results: answers = list(), and then for result in results: prompt = open_file('prompt_answer.txt') with the passage placeholder replaced by result['content']. Since the chunks were already cleaned up when we built the index, I don't think we need the encode/decode step here; they should already be in a form GPT-3 is happy with. Then answer = gpt3_completion(prompt), and answers.append(answer). Let's also print each answer with a couple of newlines for vertical space, so we can watch the answers accumulate.

That list will end up holding all of the answers; we're asking the same question, but of different chunks of text. So how do we pull that nebulous combination of things together? We'll do the same thing we did for summarization. Make sure textwrap is imported. The Davinci instruct model has a limit of 4,000 tokens, and 4,000 tokens times four characters is 16,000 characters, so we can work with a pretty big chunk of text; we'll wrap the combined answers at 10,000 characters.

So the plan is: answer the same question for all returned chunks, then summarize the answers together. First we join everything into one block: all_answers = '\n\n'.join(answers). Let me double-check that syntax with a quick throwaway list in the Python console... yes, that's right. So regardless of how long the combined text is, we can squish all the answers together. I'll probably only do one pass, but technically you would want to do this multiple times. Then chunks = textwrap.wrap(all_answers, 10000); and yes, 10,000 is the number I meant, four zeros.
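In code, that first tier of answering is roughly the following sketch; gpt3_completion is the completion wrapper borrowed from the recursive summarizer (sketched a bit further down), and the placeholder names match the prompt file sketched above:

```python
import textwrap


def answer_chunks(query, results):
    # First tier: ask the same question of each of the top-scoring chunks.
    answers = []
    for result in results:
        prompt = open_file('prompt_answer.txt')
        # Filling in <<QUERY>> here is the step that gets missed at first
        # (see the bug hunt later in the walkthrough).
        prompt = prompt.replace('<<PASSAGE>>', result['content']).replace('<<QUERY>>', query)
        answer = gpt3_completion(prompt)
        print(answer, '\n\n')   # watch the answers accumulate
        answers.append(answer)
    # Join everything back together and re-chunk it so each summarization
    # prompt stays comfortably under the ~4,000-token completion limit.
    all_answers = '\n\n'.join(answers)
    return textwrap.wrap(all_answers, 10000)
```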
We also need a final list, and then for chunk in chunks we summarize everything together. We'll borrow the summarization prompt, but instead of "a concise summary" we'll say "a detailed summary," because we're merging all the different answers into one. Save that as prompt_summary.txt under multi-document answering. You could do this step recursively too, until it collapses into one thing. So: prompt = open_file('prompt_summary.txt') with the placeholder replaced, summary = gpt3_completion(prompt), final.append(summary). This completion function also saves its logs out to a gpt3_logs folder, so let me make sure that directory is there. Once it's done we'll print a couple of newlines for white space and then '\n\n'.join(final), so it actually looks like a final answer.

We'll see if this works; it may or may not. I'll have it print each answer as it goes so we can see it. Wow, why am I nervous? Am I missing anything? Send it: python answer_questions... "Why did the supreme court strike down roe v wade"... well, it didn't like that. Something went wrong; it looks like the search didn't return anything, so I probably broke the search. Let me print the results and exit right where I think it's broken. Yep, the search is broken; my intuition was correct. vector = gpt3_embedding of the text, that's the query, fine. The scores are printing, so that part's correct, and reverse=True... oh, I see it: I declared an empty results list and returned that instead of the ordered list. Well, there's your problem. Get rid of that and return the top of the ordered list. I think I fixed it; famous last words.

Let's try again: "Why did the supreme court overturn roe v wade?" It's taking longer now... ah, see, I needed to add a few more imports: import re, and from time import time and sleep. What happened is it got to where it was generating answers and then blew up, because I use regex to clean up the output and time for the log file names, and I hadn't imported them. Let's try this again; almost there. "Why did the supreme court overturn roe v wade?" Oh, sorry, that was probably loud. OK, now it's thinking, and it's providing some answers; cool, it looks like it's giving some pretty satisfactory answers, even if some of them read as boilerplate. Please don't freeze; if it blows up I'll be sad.
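For orientation, the driver script at this point is shaped roughly like the sketch below; search_index and answer_chunks are the functions sketched earlier, and summarize_answers is the final merge pass, sketched (with its bug fix) near the end of the walkthrough:

```python
import json

if __name__ == '__main__':
    with open('index.json', 'r') as infile:
        data = json.load(infile)   # the list of {'content', 'vector'} records
    while True:
        query = input('Enter your question here: ')
        results = search_index(query, data, count=20)
        chunks = answer_chunks(query, results)
        final = summarize_answers(chunks)   # final merge pass, defined after the bug hunt below
        print('\n\n', '\n\n'.join(final))
```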
That was a long answer. And... nope, it didn't like that, so I do need to fix something. Darn. This is the bug I was telling you about: "error communicating... charmap cannot encode character." So for all of the prompts we need to add back in that encode/decode bit, because for whatever reason there are some characters it doesn't like getting. We'll do prompt = prompt.encode(...).decode(); and since the prompt is just a reusable string, rather than doing it every place we talk to GPT-3, it's smarter to do it once inside the completion function. I think I'll just have this in all of my gpt3_completion functions from now on. We'll do the same thing in the embedding function with the content, just to ensure everything is encoded in a way GPT-3 will be happy with.

OK, so we've got the answers now. Let's try it again, and let's not give it such a softball question this time: "What are the historical precedents that the supreme court looked at when determining whether or not to overturn roe v wade?" This is a much more complex question. The first question was answered explicitly many times in the document, but this one requires a little more interpretation. There are some key phrases in the chunks like "egregiously wrong" and "caused significant negative consequences," so we'll see if this more hardball question produces satisfactory results. It's going through: "fundamental right that is deeply rooted in history," "sound basis in precedent," "states could not ban abortion as it violated a woman's right"... I'm asking for historical precedent, and it does look like it's talking about some previous decisions; 1973 is at least somewhere in history.

Oh, here's a long response: "courts decided to allow states because they believed it is a woman's right to choose." That doesn't make any sense, though; if the states can ban it, "because it's a woman's right to choose" doesn't follow. It looks like it's explaining why the states allowed it rather than the historical precedents, so it may not actually understand what I was asking. While it's going through this, let's look at the GPT-3 log; this is why I love keeping these logs. "Use the following passage to give a detailed answer to the question: why..." wait, it's still asking an older question. Hold on, that's not the question I put in, is it? Am I losing my mind? I entered "what are the historical precedents..." so why didn't it ask that? Oh, I know what I did wrong: I never actually populated the question. If I look at the actual prompt, the question is hard-coded. I'm not losing my mind; I get the query from the user and then I ignore it.
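A sketch of what the completion wrapper looks like with that cleanup baked in, plus the retry loop and the gpt3_logs files mentioned here; the engine name, parameter values, and log-file naming are assumptions:

```python
import re
from time import time, sleep

import openai


def gpt3_completion(prompt, engine='text-davinci-002', temp=0.0, tokens=1000):
    # Scrub the prompt once, right before every call, so stray Unicode
    # characters can't blow up the request.
    prompt = prompt.encode(encoding='ASCII', errors='ignore').decode()
    max_retry = 5
    retry = 0
    while True:
        try:
            response = openai.Completion.create(
                engine=engine, prompt=prompt, temperature=temp, max_tokens=tokens)
            text = response['choices'][0]['text'].strip()
            text = re.sub(r'\s+', ' ', text)   # regex cleanup of the raw completion
            # Log every prompt/response pair, named by timestamp, for later
            # inspection (the gpt3_logs folder needs to exist).
            with open('gpt3_logs/%s.txt' % time(), 'w', encoding='utf-8') as outfile:
                outfile.write('PROMPT:\n\n%s\n\nRESPONSE:\n\n%s' % (prompt, text))
            return text
        except Exception as oops:
            retry += 1
            if retry > max_retry:
                return 'GPT-3 error: %s' % oops
            sleep(1)
```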
So the fix is simple: along with the passage, we also replace the query placeholder with the actual query, and I think that's it.

But first, let's see what final answer it gave to the question it did ask, the hard-coded softball "Why did the courts decide to allow states to ban abortion?" Here it is: "The Supreme Court has overturned a lower court ruling that had struck down a Mississippi law that would have banned most abortions after 15 weeks of pregnancy. The 5-4 decision, with Justice Amy Coney Barrett joining the court's three other conservatives in the majority, is a major victory for abortion opponents and a blow to abortion rights advocates. The decision does not immediately impact abortion laws in other states, but it paves the way for more restrictions on abortions to be enacted. The majority opinion, written by Justice Clarence Thomas, states that the Constitution does not protect a woman's right to an abortion. Thomas writes that the court's previous decisions on the matter, Roe v. Wade and Planned Parenthood v. Casey, were wrongly decided and should be overruled." Wow, that is a really good answer to that question.

And then: "In Dobbs v. Jackson Women's Health Organization, the Supreme Court considered a challenge to a Mississippi law regulating abortion. The law, known as the Gestational Age Act, prohibits abortion after 15 weeks of pregnancy. The court first noted that abortion is a matter of great social significance and moral substance, and that laws regulating abortion are entitled to a strong presumption of validity. The court then went on to explain that the Mississippi legislature had identified a legitimate interest in protecting the life of the unborn, which provided a rational basis for the Gestational Age Act. The court therefore concluded that the act was constitutional and reversed the decision of the lower court. The court also made clear that its decision should not be interpreted as endorsing any particular view of abortion, but rather as affirming the right of each state to make its own laws on the matter." That is a phenomenal set of answers; I'm getting chills. This is a wonderful example of an answer to the question that was asked.

However, that was a hard-coded question, so let me close some things I don't need. The query is now actually populated, so let's run this again: "What historical precedents did the supreme court consider when deciding to overturn roe v wade?" This is a harder question. While it runs, let's watch: the question made it into the prompt, and it says "the supreme court considered several historical precedents." Look at that, it's working. If you read the actual opinion, which I don't expect anyone to do, it discusses something like thirty prior cases, going all the way back to common law in thirteenth-century England. And here we're seeing Plessy and Brown v. Board of Education, so this is doing pretty well. Let's see if it picks up on the English common law.
Commerce clause, yes... hang on, I got a message on my phone, sorry. Fourteenth Amendment, several mentions of Roe v. Wade, "does not have the right to be sterilized without consent"... so it looks like it's mostly keying in on the initial decision and kind of saying the same things over and over, but we'll see in the final result. Brown v. Board of Education again, religious schools, the establishment clause of the First Amendment; it is looking at deeper history. This is looking like a success.

The final answer: "The supreme court has overturned a lower ruling..." banning abortion, OK, cool. "The majority opinion, written by Justice Brett Kavanaugh..." — hmm, a minute ago it said Clarence Thomas. "...those decisions are not inexorable commands and should be overturned..." So the final answer is not actually talking about the historical precedents; the dissent in Dobbs v. Jackson, the long history of abortion... I think something was lost. Let's look at the last few inputs to GPT-3. This one is a "write a detailed summary" prompt, and it looks like it got cut off, and it did it twice. Something went wrong when I was building the summary at the very end. Let me pause here; I know you're going to want to see this, so I'm going to pause and figure out why the final summary went wrong.

OK, I've been looking through this and it wasn't making sense at first. Also, you probably noticed a wardrobe change: I had been sitting outside at a coffee shop earlier, I had pit stains, that was probably pretty gross, and black hides that. So I copied the console output to a text file, and it did pick up on the one thing I hoped it would, the common law history in England. We got all the answers, there should be 20 of them, Griswold v. Connecticut and all kinds of stuff from throughout history, but the final answer was just a generic summary, and some of it wasn't even in the per-chunk answers, so where did that come from?

Let's trace it: we get all the answers, answers is appended correctly, all_answers is the join, chunks = textwrap.wrap(all_answers, 10000), final = list()... prompt = open_file with the summary prompt, replacing the placeholder with result['content']. Oh, that's what I did wrong, right there. So simple. We built all the chunks of joined answers, but then I never passed the chunks into the summary prompt; I was passing in the last search result instead. This is why you test your code, kids. So: for chunk in chunks, substitute the chunk, and we summarize them all together. Let me rerun this; I'll pause the video so you don't have to sit and watch it run again, copy the same question back in, and then show you the final result.
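With that fix in place, the final merge pass referenced in the driver sketch is roughly this; the <<SUMMARY>> placeholder name is an assumption:

```python
def summarize_answers(chunks):
    # Second tier: merge the accumulated answers, one detailed summary
    # per 10,000-character chunk of joined answers.
    final = []
    for chunk in chunks:
        # The bug: this line was substituting result['content'] (the last
        # search hit) instead of the chunk of joined answers.
        prompt = open_file('prompt_summary.txt').replace('<<SUMMARY>>', chunk)
        final.append(gpt3_completion(prompt))
    return final
```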
All right, gang, I think it worked. Let me copy the output into a text file and read through it. It did get the common law: "The supreme court considered several historical precedents. First, they considered the pre-constitutional common law history in England, which showed that abortion was largely prohibited in most American states as of 1868." So it got way back into history, and that was one of the key things I remembered from my previous work on this: I was struck by how far back they went.

And the final answer: "The supreme court overturned Roe v. Wade by considering several historical precedents. The first precedent was the principle of stare decisis" — I think I'm saying that right — "which requires respect for the court's precedents and for the accumulated wisdom of the judges who have previously addressed the same issue. The court found that Roe was an egregiously wrong decision that had caused significant negative consequences, and that overruling it would not unduly upset legitimate reliance interests. The second precedent was the history of stare decisis in the court, which establishes that a constitutional precedent may be overruled only when it is egregiously wrong, has caused significant negative consequences, and overruling it would not unduly..." — it's kind of repeating itself — "Applying these factors, the court concluded that Roe met all three criteria and thus deserved to be overturned. The third precedent was the fact that at the time of Roe, 30 states still prohibited abortion at all stages; this showed that the Roe decision was out of step with public opinion at the time. The fourth precedent was the fact that in the years prior to Roe, about a third of the states had liberalized their laws; this showed that there was a trend toward liberalization that was interrupted by Roe. The fifth and final precedent was the fact that Roe abruptly ended the political process of liberalizing abortion laws. This made it clear that Roe was not just wrong but egregiously wrong, and that it needed to be overturned in order to allow the democratic process to continue." That is a phenomenal explanation.

And the second chunk: "The supreme court overturned Roe v. Wade taking into consideration several historical precedents. First, they looked at the precedent set by Roe itself, which was incorrectly decided on an erroneous historical narrative. Second, they looked at the precedent set by Casey, which revised the textual basis for the abortion right and silently abandoned Roe's erroneous historical narrative. Finally, they looked at the precedent set by Janus and Ramos..." So the final summarization still needs a little bit of work, but in terms of answering the question about the historical precedents, this was amazing. I am super satisfied with this, so I'll call it a day there. Thanks for watching.
Info
Channel: David Shapiro ~ AI
Views: 21,592
Keywords: ai, artificial intelligence, python, agi, gpt3, gpt 3, gpt-3, artificial cognition, psychology, philosophy, neuroscience, cognitive neuroscience, futurism, humanity, ethics, alignment, control problem
Id: es8e4SEuvV0
Length: 63min 9sec (3789 seconds)
Published: Sat Jun 25 2022