Amazon Bedrock - LangChain with AWS Bedrock #LangchainAWS #AWSBedrock #AnthropicClaudeV2

Video Statistics and Information

Captions
Hello viewers, welcome back to my channel. This is John from CloudTechInsights. Today's topic is Amazon Bedrock and its interaction with the open-source library called LangChain. We'll be deep diving into how to invoke a model using LangChain, and at the same time we'll get into one of the features of LangChain called chain of thought. What it simply means is that you break a complex prompt down into three or four steps and get better results out of the LLM. At a high level, we will show how LangChain interacts with a Bedrock model, how you can integrate with Bedrock, and how to produce results, and we will create more videos explaining other features of LangChain.

Before we get into the demo: please like and subscribe to the channel. This really encourages me to create more videos; some of these videos have costs associated with them, so I want to make sure a lot of people can benefit from them. Also, before we start, the demo makes on-demand calls to the Bedrock model, so please understand the cost associated with experimenting with Bedrock.

With that, we can get started with the demo. In this demo I'm using Cloud9, like in the previous video. Ensure that all the dependent libraries are installed, especially boto3 and LangChain; you can simply pip install langchain. Any code that I show in this demo will be in my GitHub, so you can visit my GitHub to get all of it. You typically start with the installation of LangChain, so all you have to do is pip install langchain. Since I've already installed it, it will say the requirement is already satisfied, but please go ahead and install it in your local IDE or the IDE of your choice; the Cloud9 environment here is running a runtime version that's compatible with most of the libraries, so keep that in mind. Now that the LangChain library is installed, we
can import some of those libraries into the IDE. This is how we import the LangChain library. To understand this: there has been a lot of contribution from the AWS side, where they contributed the Bedrock integrations, so there are now a couple of classes around Bedrock that enable you to interact with the models from Bedrock by specifying the client or the model name. That way it connects to the model, and then you can start prompting and getting results. That covers the imports.

Then comes creating the LLM client. This is where, if you simply invoke the Bedrock class and give it the model ID (here I'm using Anthropic Claude v2), the llm object is created for that model. All you have to do is invoke the llm and you will start getting responses from the model hosted on AWS. Next come the prompt, the response, and how to parse the response. This is how we create a prompt, the same prompt I used in my previous video: "What is the largest continent?" The response is captured by calling a method on the llm object and passing the prompt into it, and you get the response back. This is a very straightforward, hello-world version of how LangChain can interact with the Bedrock models. Let's save this and hit run. There you go, you get the response back without a problem. Make sure you understand how this works for each and every prompt; there are many more methods within the llm object and the Bedrock class, so explore those on the LangChain website.

That's a good start, and now I want to get into some of the good features that LangChain offers. Let's say the response is not accurate or not up to the mark you're looking for. Typically what we do is modify the prompt and give it good context; that way it
gives you a better response. We are also telling the model what to do in such a situation: how we want it to answer the question and how to do the analysis. In this example I'm saying: to find the largest continent, consider the size of each continent. So now I'm telling the prompt what to consider and which continents are in scope, then to compare the land areas to determine which is largest, and only then asking what the largest continent is. You're giving it a lot of context. Even in a world where we did not have LangChain, we could give context in this fashion and get a better response from the LLM, which is Anthropic Claude v2. When I hit run, the model now has that context and works through it, and if you look at the response, it is better: it provides some supporting information and a final answer explaining that Asia is the largest, with its area in square kilometers. You're giving more information and context to get a better answer; this is called prompt engineering. You can play with it and get better answers with good prompts.

That's one way you can interact with LangChain, but there are other ways: if you have a complex problem and you want to break it down into simpler problems, that's what we call chain of thought. LangChain helps you divide a big problem into subsets, or set up a sequential thought process, to dissect the problem and understand what the best solution is. That's where we're going next: we'll explore how LangChain's chaining can be used to deep dive, analyze a problem, and find the best solution. We're going to analyze a problem through a four-step methodology, so let's move on to that example.

Let's get back to the chain-of-thought demo. Like the usual invocation, you always call the
Bedrock class and provide the model ID; again, I'm using the same Anthropic Claude v2, and in this example we're trying to brainstorm solutions for sustainable energy. What I'm going to do is put the idea here, which is the step-by-step, sequential thinking. The challenge is: we are facing a challenge with sustainable energy, and I need to brainstorm three innovative energy solutions, considering factors like environmental impact, scalability, and cost effectiveness. That's the prompt I'm starting with.

Then I create the prompt with a class called PromptTemplate, from LangChain's prompts module, which you can see right here. This template lets you create prompts and work with them in a structured way: it states the challenge, and there are some considerations that are also part of the prompt, so it captures what the prompt contains, what kind of content it has, and how you can break the content down. You pass the text in, and that becomes the idea prompt object. After that is created, you call LLMChain, passing the llm (the Bedrock model we initialized right here) and the prompt that was just formed, and the output will go into the "ideas" output key. That key is where the output is stored, so the next step knows where to take its input from.

After the first step is complete and the model has been invoked, it will provide some solutions. Now the second step is the evaluation step, where we evaluate the three proposed energy solutions: their environmental friendliness, feasibility, technology requirements, and long-term effects, and give a success probability. So after the
first response, you deep dive into it and say: now I want to evaluate some of these ideas. Again you create a prompt, the evaluate prompt, and pass the ideas into it; you can see the variable right here, ideas, which will be filled in from the previous step. Then you create another LLMChain with the output key "evaluation". Understand that the output key of the first chain is "ideas", and that's the variable in the second prompt. So now you're adding more context: you're saying, this was the result, and now I want to evaluate it. That's where the chain of thought comes into the picture.

Once the ideas are evaluated, we move on to the strategy development step, where we detail the strategies to implement some of the ideas, including necessary resources, potential partners, ways to overcome challenges, and so on. You see that the output key from the evaluation step becomes the input here; that's how the output key works. You're passing the response from the second chain into the third prompt right here, again creating the strategy prompt (you can name it whatever you want), passing the evaluation into it, invoking the Bedrock model again, and capturing the result in the "strategy" output key.

Then comes the fourth step, which is to rank the energy solutions based on the analysis. Once everything is done, we want to know what the best solution is and how to rank the options based on feasibility and sustainability. That's where step four comes into the picture: it uses the LLM to provide the final recommendation. If you had just attempted this in one prompt, it would have given you three solutions and you would have had to keep prompting to get the best answer. That is what we're building with LangChain here; it's called a chain of thought. Similar to how you prompt ChatGPT to get
the right answer: you keep prompting to dig deeper into some of those responses. That's what we can do using LangChain, especially with the Bedrock models. Once all the prompts are created, we build a sequential chain. This is how you sequence the idea generation: first comes the idea chain, which is the very first LLM invocation, then the evaluation chain, then the strategy chain, and then the recommendation chain. You can play with the sequence, but this is the order in which we want to solve the problem and understand what's best, and finally it will output to a final recommendation field so we can easily extract and review the result.

In the final statement we say: the challenge is sustainable energy, consider these considerations, and we provide these inputs. Let's see what the output is. I'm hitting run, and as you can see it's going through the chain, hitting the model back and forth, getting a response, forming more context, and hitting it again. It goes through step one, the idea chain; step two, the evaluation chain; step three, the strategy chain; and then the final recommendation on what the best route is. It takes a few minutes since it's hitting the models repeatedly, and like I said, there's a cost associated with these on-demand calls, so I'm walking through it here so that you don't have to repeat the same steps; understanding this is the key takeaway.

Now you can see the finished chain and how it's laid out. It shows the considerations, the question, and a ranking; based on that, it says advanced solar photovoltaic cells might be the best option, and it explains why. Some of the information is clearly given here, and this is how
you can guide the Bedrock models' reasoning and get the best response back. That's it for this demo; this was just chain of thought, and we will deep dive into other features of LangChain. There are other options: you can build agents, you can rank responses, you can connect to a vector database, and many more features, which I will cover, but I wanted to start with something very simple, something people do on a daily basis with ChatGPT or other LLM tools.

One more thing I want to highlight: when you work with Bedrock models, they are typically deployed within a VPC, or can be, and none of the data you use to prompt the Bedrock model will be shared with other customers or used to train the underlying model. This means your data stays secure, unlike with some open LLM services; Amazon states that they value data privacy. That's one thing to keep in mind, and for more information around data privacy, please visit their website and understand how it works.

With that, we have reached the conclusion. I will be releasing more videos on the LangChain interaction with the Bedrock models; till then, please like and subscribe to my channel, as it helps with the YouTube algorithm and is an encouragement for me as well. Thank you everyone, see you in the next video. Bye!
Info
Channel: CloudTechInsights
Views: 41
Id: b_xOUcq73_U
Length: 14min 23sec (863 seconds)
Published: Wed Jan 17 2024