LangChain vs. LlamaIndex | Detailed Differences | Which One Should You Use?

Captions
Hello everyone, welcome back to my YouTube channel. I am back with another exciting video: in this one we will see the detailed differences between LlamaIndex and LangChain. Guys, I recently started this channel and I am uploading content related to generative AI, so if you haven't checked it out yet, please do — if you want to learn end-to-end generative AI, you can search for my channel. There you will find different videos related to generative AI; I upload the videos in a sequence, not randomly, so you will get complete and detailed playlists on generative AI, and whatever content I create, I create in a very detailed way. So far I have discussed the history, I have discussed LlamaIndex, I have discussed LangChain, and in the live session I also completed one project: a question-answering system. Whatever content I upload related to generative AI, MLOps, machine learning, and deep learning will follow that same sequence, so please follow along — if you want to learn about data science, machine learning operations, deep learning, or anything related to artificial intelligence, you will get everything on this channel. So let's begin with the topic: LlamaIndex versus LangChain. Why did I select this topic? Because I was uploading videos related to LlamaIndex, and these are the two main frameworks for creating LLM applications. One more thing, guys: I am covering each and everything in a very detailed manner.
Because of that, the videos can be quite long — if you look at my videos you will find lengths around 35, 40, 45 minutes — and from now onwards I will be covering everything at a slightly slower pace. If you find it a little slow, you can speed the video up to 1.25x or 1.5x for a better experience; that is just one suggestion from my end.

Now, coming back to the topic: LlamaIndex and LangChain. I have created notes for all of you, and step by step I will explain everything — believe me, after watching this video, every doubt related to LlamaIndex and LangChain will be clarified. While I was uploading the LlamaIndex videos I was getting many questions: "Sir, what is the difference between LangChain and LlamaIndex?" — because these are the two major frameworks for developing applications using large language models. So let's understand them one by one. In upcoming videos I will explain RAG — retrieval augmented generation — and vector databases, and then I will come to the projects, because in generative AI almost everything revolves around RAG. I will be covering many RAG techniques, not just one — at least seven to eight advanced RAG techniques, along with Python code — but this session is very important for understanding LlamaIndex and LangChain.

Here is the agenda. First we will understand what LlamaIndex is, then what LangChain is, then the differences between them. Then we will understand where we should use which one — should I use LlamaIndex or LangChain? — and whether we can combine both for building an application. These are common questions, and until we ask ourselves the right questions, we won't get the right answers. Before starting with LlamaIndex and LangChain, let me talk briefly about vector databases and retrieval augmented generation — again, I am not going to discuss these in full detail in this session. In upcoming sessions I will discuss RAG and vector databases, we will create a couple more applications, and finally I will come up with a complete and detailed playlist on LangChain — I have recorded around 25 to 30 videos on it, and you will get them soon.

First, vector embeddings and vector databases. Vector embeddings are numerical representations of data, and for storing these vectors we use a vector database. So what do we store in a vector database? Embeddings — and an embedding is nothing but a numerical representation of the data itself. The data can be anything: image data, text data, and so on. For any sort of data we can generate an embedding and store it inside the database. Don't worry, I will be coming up with a dedicated video where you will get to know all about this.

The second thing is RAG: retrieval augmented generation. Let's see the definition. Retrieval augmented generation is an architecture used to help LLMs like GPT-4 and Gemini provide a better response by using relevant information from additional sources, and it reduces the chance that the LLM will produce incorrect information. I have included one image here — the real sentence is "Vision without execution is hallucination", and it has been reworded in a sarcastic manner (I took this image from LinkedIn): "LLM without RAG is hallucination". If you don't know what hallucination means: if you are not using RAG, the model might give you misleading or incorrect information. Every LLM application you are seeing is essentially built around this one architecture.
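The idea of "numerical representation plus similarity" can be made concrete with a deliberately tiny sketch. This is not a real embedding model — real systems use dense vectors learned by a trained model — it is just a bag-of-words toy over a fixed vocabulary, so you can see why similar texts end up with similar vectors:

```python
from collections import Counter
import math

def embed(text, vocab):
    # Toy "embedding": a word-count vector over a fixed vocabulary.
    # Real pipelines would call a learned embedding model instead.
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = ["neural", "network", "gdp", "india"]
v1 = embed("a neural network is a network of neurons", vocab)
v2 = embed("what is a neural network", vocab)
v3 = embed("current gdp of india", vocab)

# Texts about the same topic score much higher than unrelated ones.
print(cosine_similarity(v1, v2), cosine_similarity(v1, v3))
```

The same cosine (or dot-product) comparison is what a vector database runs, at scale and with an index, when it does a semantic search.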
LangChain, LlamaIndex — every framework is trying to implement this same architecture. Yes, LangChain has many more functionalities compared to LlamaIndex, and I will come to those functionalities as well, but at the end we are trying to create this one system. So what is the main aim behind an LLM? We pass something to the large language model and we want it to generate an answer. But the LLM might mislead us, because it does not have every piece of information. If I ask an LLM for the current GDP of the US, or the current GDP of India, it may well give wrong information, because it has not been trained on recent data — it was trained only up to a specific date. If you look at GPT, it was trained on data up to 2022; Gemini is somewhat better in this respect, but still no LLM has each and every piece of information, and it will not have information about your specific domain or task. That is exactly why we use RAG: we connect the LLM with our own database. And how efficiently we can create that RAG, how useful that RAG is — that depends on the LlamaIndex and LangChain frameworks.

So at the end we are doing nothing more than this: we want the LLM, but the LLM alone is not able to give a correct answer for every specific problem, so we attach a database alongside it to get better output. That is the meaning of RAG — in the next video I will give you the detailed meaning of "augmented", and that video is coming very soon. Now look at the architecture. Here is a PDF. We convert the PDF's data into text — into chunks — then we convert the chunks into embeddings, and then we store the embeddings in the database. Up to here, we are just storing. Now, we want an answer from the LLM — that is the main aim — but as I told you, the LLM does not know everything; it was not trained on every specific problem, and for domain-specific questions — say a medical question, or an e-commerce question — it might mislead us.

That is why we connect the LLM with the database, and we do not get the output directly from the LLM — we get it via the database. Let me show you how; just focus here. Whenever the user asks something — say, "What is a neural network?" — the question is converted into an embedding. Then we do a semantic search: the query goes through the vector database and we find the relevant answers, the ranked answers. Then we pass those retrieved answers, together with the user's prompt, to the LLM, and the LLM gives us a refined answer. So we are not getting the answer directly from the database, and not directly from the LLM either — we get it via both: whatever response the vector database generates based on similarity, we pass to the LLM, and the LLM provides the final specific answer with proper grammar and everything. I think you are getting my point: this particular architecture is what is called a RAG system, and we can build it using either LangChain or LlamaIndex. Where LangChain comes into the picture and where LlamaIndex comes into the picture — we have to understand that part, because it is very important: if you are going to build any sort of project, either you will use LlamaIndex, or LangChain, or both together.
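The pipeline just described — chunk, embed, store, search, then hand the ranked context plus the prompt to the model — can be sketched end to end in plain Python. Everything here is a toy stand-in (word-count "embeddings", a tiny in-memory store, no actual LLM call); a real build would use an embedding model, a vector database, and a model API:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: lower-cased word counts. A real pipeline would
    # call a trained embedding model here.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Ingest: split the document into chunks, store (chunk, embedding) pairs.
chunks = [
    "A neural network is a model made of layers of connected neurons.",
    "GDP measures the total value of goods and services produced.",
    "Cricket is a popular sport in India.",
]
store = [(c, embed(c)) for c in chunks]

# 2. Retrieve: embed the question, rank stored chunks by similarity.
def retrieve(question, k=1):
    q = embed(question)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

# 3. Augment: build the prompt that would then be sent to the LLM.
question = "What is a neural network?"
context = retrieve(question)[0]
prompt = f"Answer using this context:\n{context}\nQuestion: {question}"
print(prompt)
```

Notice that the user's question never goes to the model alone — it always travels together with the retrieved context, which is the whole point of RAG.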
When you would use both together, you will get to know in some time. Let me quickly summarize what we have so far: a vector database is for storing embeddings, and retrieval augmented generation is an architecture used to help LLMs like GPT and Gemini provide a better response by using relevant information — hence the sarcastic line, "LLM without RAG is hallucination", meaning that without RAG the model might mislead you with incorrect information. I hope you got this point.

Now, coming to LlamaIndex. In the name you will find two words: "llama" and "index". LlamaIndex has mainly been created for searching, retrieval, and indexing. I have written a couple of points here that make it easy to understand why the founder created it — you can check my introduction video on LlamaIndex, where I explained this in a very detailed way. Jerry Liu, I think, was the founder's name; I have shown you his LinkedIn profile in that video, so you can go and check. "Llama" is a model name, and "index" relates to vectors: he was working with that model, trying to fetch results based on queries, and that is why the framework was given the name LlamaIndex.
When we talk about LlamaIndex, there are five points I have highlighted: data ingestion, search, retrieval, indexing, and ranking and structuring. LlamaIndex works on these five principles, so keep them in mind and the rest will follow. The first point: LlamaIndex is specifically designed for building search and retrieval applications. Look at the architecture again: here is my document; we convert the data into text, convert it into embeddings, and store it in the database. That is the searching-and-retrieval setup. The user does not ask the LLM directly — we avoid that because it might mislead — and does not ask the database directly either; the flow goes from the database, to the LLM, to the user. When the user asks something, the searching happens inside the database: based on semantic meaning — a similarity measure like cosine similarity or the dot product — we retrieve something. And how is that possible, and fast? Because of the indexing. LlamaIndex provides the searching, the retrieval, and the ranked results, all based on indexing. From the raw data up to the ranked result — up to the database and the ranked output — that is the specific job of LlamaIndex, and that is exactly the work it was designed for. I hope that word is now clear to all of you.

Now let's go through the points I have written, step by step. First: LlamaIndex is specifically designed for building search and retrieval applications. Second: LlamaIndex provides a collection of features for integrating custom data into LLMs. How do we do that? This PDF is nothing but my custom data, and I connect it to my LLM — I am not asking the LLM directly, because it might mislead; I connect the custom data to the LLM, and only then do we fetch the response. Third: using LlamaIndex you can connect unstructured, structured, and semi-structured data — any type of data — with the LLM. There is no restriction: structured data kept in a CSV or TSV, unstructured data like a PDF or images, any sort of document — everything can be connected to the LLM, and LlamaIndex provides that feature. Fourth: LlamaIndex can be an ideal solution if you are looking to work with vector embeddings — which, again, are just numerical representations of data. (I will also be coming up with an NLP foundation series where I explain vector embeddings and vector databases properly; if you follow my channel, every doubt will be clarified, so please watch this video till the end.) Why is it ideal for embeddings? Because we are storing, we are searching, and we are retrieving based on the index and the rank — and that is the core functionality of LlamaIndex.

Fifth: LlamaIndex comes with many plugins, so you can load data from many sources easily. You can check this on the website itself: search for LlamaHub, and you will find all the integrations — data loaders, agent tools, llama packs, llama datasets — everything is provided there. (What is an agent, guys? An agent is a third-party connector: using agents we can connect our LLM with third-party APIs. I will cover that as well, don't worry.) Go and check LlamaHub; you will find the different data loaders and agents along with the code, and it is very important if you are going to create any sort of application using LlamaIndex. I will come back to the documentation and highlight more functionality once we get to the differences. The last point: LlamaIndex simplifies the process of querying LLMs and retrieving relevant documents based on the user's input. So if someone asks you why we use LlamaIndex, you can highlight the five main points: data ingestion, search, retrieval, indexing, and ranking and structuring — the code has been written in such a way that it is very efficient for exactly this. And where does it sit inside the project? Right here — let me erase the board and explain again clearly. Here is my data; I divide it into different chunks, create embeddings from them, and store them; whenever the user asks something, we do a semantic search and get a ranked result — the most likely answers — and because we have done the indexing, the whole process is very fast.
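The claim that "indexing makes search fast and rankable" can be illustrated with a miniature inverted index — the classic keyword-search structure. This is only a concept demo (LlamaIndex actually builds vector indexes and other index types over embeddings), but the principle is the same: build a structure once at ingestion time, so a query becomes a cheap lookup plus ranking instead of a scan over raw documents:

```python
import re
from collections import defaultdict

docs = {
    0: "llama index is designed for search and retrieval",
    1: "langchain is a framework for building llm applications",
    2: "vector search retrieves relevant documents quickly",
}

# Ingestion: build an inverted index mapping word -> ids of docs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in re.findall(r"[a-z]+", text.lower()):
        index[word].add(doc_id)

def search(query):
    # Querying: look up each query word, then rank docs by how many
    # query words they contain (a crude relevance score).
    scores = defaultdict(int)
    for word in re.findall(r"[a-z]+", query.lower()):
        for doc_id in index.get(word, set()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("search and retrieval"))
```

The expensive work happens once, up front; afterwards every query only touches the small parts of the index it needs — which is why indexed retrieval stays fast as the data grows.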
The entire code has been written that way, so I hope LlamaIndex is now clear to all of you. Now coming to LangChain — and I will keep LlamaIndex written alongside, so the comparison is a bit clearer. I have written a couple of points for LangChain as well. LangChain is an open-source framework designed to simplify the development of applications powered by large language models. It goes beyond basic search and retrieval (the LlamaIndex focus) and provides a comprehensive toolkit for building more complex and interactive LLM applications. So there are five main things LlamaIndex follows, and that is its main work; LangChain is more than that. Yes, LangChain has its own retrieval system as well — you will find the code for it, the same kind of functionality I have been showing you — but LlamaIndex is more efficient in those five things. If we are talking about an end-to-end application, though, instead of LlamaIndex you would use LangChain, because LangChain has a comprehensive toolkit with so many tools inside it. Let me highlight those tools: first is prompt templating, second the LLM interface, then agents, memory, chains, callbacks, the retrieval module, LangServe, LangSmith, and LangGraph.

These are the core modules, the core tools, of LangChain for developing any sort of application. If you don't know about them, guys, don't worry — I will be coming up with a very detailed playlist on LangChain, and after that you will definitely get it; that is my promise to all of you. For now, in short: prompt templates are for designing prompts — you can design few-shot prompts and different types of prompts using them. The LLM interface gives you a uniform way to connect to and query different LLMs from your application. Agents let you connect your LLM with third-party APIs — very, very useful functionality that lives inside LangChain. Memory lets you sustain the memory of a chat across turns. Then there are chains — a very important functionality, and a big reason people use LangChain: in a chain you connect multiple components together. Callbacks are there as well: they let you hook into the different stages of your application's execution — for logging, streaming, and monitoring — so you can keep track of what is happening without burning resources unnecessarily. And there is the retrieval module — yes, LangChain also has a retrieval module, but LlamaIndex's is the more efficient one.
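Prompt templating and chains are easiest to feel in code. The snippet below is a plain-Python imitation of those two ideas — it deliberately does not use the real LangChain API (whose classes and import paths have changed across versions), and `fake_llm` is a made-up stand-in for an actual model call:

```python
# A prompt "template" is just parameterized text; a "chain" pipes
# components together so each step's output feeds the next step.

def make_prompt(template):
    # Returns a function that fills the template's placeholders.
    def fill(**kwargs):
        return template.format(**kwargs)
    return fill

def fake_llm(prompt):
    # Hypothetical stand-in for a real model call: echoes a canned reply.
    return f"LLM answer to: {prompt}"

def chain(*steps):
    # Compose steps left to right, like connecting components in a chain.
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

qa_prompt = make_prompt("Answer briefly: {question}")
qa_chain = chain(lambda q: qa_prompt(question=q), fake_llm)
print(qa_chain("What is RAG?"))
```

Swapping any step — a different template, a retriever in front, a parser after the model — without touching the others is exactly the convenience the chains abstraction buys you.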
Then LangServe is there for serving your application — you can serve your LangChain application with it. LangSmith, I believe, provides a platform for tracing and debugging your application; it is a recent addition and I haven't fully explored it yet, so do check the documentation (I have used LangServe and LangGraph). LangGraph is also a recent functionality: with LangGraph you can connect multiple chains together — we have components, we connect multiple components inside a chain, and then we can connect multiple chains using LangGraph and build graphs, even our own DAG (directed acyclic graph) or a cyclic graph. Everything is possible there; it is meant for advanced functionality. So these are the main tools of LangChain.

Now, if you want a simple way to picture the difference between LangChain and LlamaIndex, think of each as a person working in the industry — it is just a scenario I am giving you. LlamaIndex is a person who knows DSA very, very well. LangChain, on the other hand, knows DSA plus web development, so LangChain has the upper hand in breadth because it has so many functionalities. LlamaIndex is very, very good at searching, storing, retrieval, indexing, and ranking and structuring — we can do those things with LlamaIndex very efficiently, because the code has been written in a very beautiful, optimized manner. LangChain does have a retrieval module, but it is not for advanced retrieval: if you want to do advanced retrieval — if you want to create an advanced RAG — then definitely you will have to go with LlamaIndex. So one person knows DSA a little bit and knows web development very well; the other knows DSA very well but is very weak in web development. LlamaIndex also has some functionality for prompt templating and agents — for chaining it actually relies on LangChain — but it has the advanced retrieval system. I think you are getting the point: LlamaIndex is the person who knows DSA very well but is weak in web development, and LangChain knows DSA a little but is very good at web development. So if we want to create an efficient RAG system, definitely we will have to use LlamaIndex there, and if we want to create an end-to-end LLM-based application, then we will have to use LangChain.
will have to use this Len okay now here the question is I have written a couple of more question so in conclusion I can say that llama index is a optimized for indexing and retrieval now the second thing is what you will be fine with just langen means if you want to create a general purpose application so you will be just fine with the L Chen because Lang Chen is also having the retrieval system but if you want to create any advanced Rec system which which is like a very very fast or the optimized one in that case you will have to use the Llama Index right you will be just fine with the Llama index uh okay if you like if if you will just find with the Llama index want too much searching retrieval on the index using a wrapper of framework on top of it means what I'm trying to say over here I'm saying that you will be fine with the Llama index if you just want to do the searching retrieval okay or you just want to create a rag with not so many functionality right if uh and here guys if you will uh if we are talking about the Llama index now so now this llama index is uh coming up with many updation in all so here on a daily basis actually you will find out the uh like up gradation in this particular Library why because U see uh it is a open source and everyone is contributing in this one and now you will find out the many rer in this llama index so lenen also you can directly use through through this llama index if you're using the Llama index now directly you can use the lench also directly you can use the lench also through this llama index because now the rapper is available on top of the Llama index it's a open source Community is already working for it getting my point now here the last one is you can combine both llama index and L chain for the enhancing your application capacity and that is what basically I was trying to explain you so here I have mentioned it inside the last one so which one you need to use when so llama index searching and retrieval if you 
want to get better search experience okay means if you want to build an advanced R application length chain if you want to build end to end application with the most comprehensive functionality now here point to be noted guys please try to be note this particular Point both tools here both tools you can use together to enhance your rag application getting my point yes you can combine both tools llama index and Lang chain as I told you right so in the latest updation in the latest version of the Llama index you will find out the lenen as well the rapper is available okay rapper is available so I hope now you got the clear-cut idea what is the Lang Chen and what is the Llama index okay now again I can give you the quick recap of it so here whenever we are talking about any llm based application so guys here what is the main name of the llm based application I want to generate something from the llm from the large language model but this llm actually is not having everything so it might mislead to the information in that case what I have to do I have to build the rag system so whether we are talking about this Lang chin or this llama index it is trying it it is trying to help to build this uh particular system the rec system retrieval argument generation and don't worry I'll will be coming up with uh different different type of Rec system and by using this llama index and the langen Gen itself I will write the code in front of you okay now here you can see so PDF is there then uh chunking right we are doing a embedding and then we are storing into the database now user is not asking directly to this uh database user is not asking directly to this database so here we have a combination of both combination of this Vector database and combination of this uh combination of vector database and this llm so here we are getting a refined output okay because here here we are doing a semantic search and based on that we are getting a rank rank result and our prompt our prompt 
and this rank result we are both passing to this llm and then only we are getting the answer okay the user is getting the answer so the answer will be more refined and it will be based on the requirement now here from here to here from here to this database actually our llama index comes in the picture and if you want to create a comprehensive application right like if you want to deploy it if you want to connect the multiple component all together if you want to sustain the memory and all then definitely we should use the L Chen on top of it okay Lang Chen along with the Llama index now if you will look into my YouTube channel guys so here on my YouTube channel basically so Ive created one application just by using the Llama index and Google Jimmy here you can see the complete detail here I haven't used the Len CH so yes llama index is also having those component but not up to the mark not for the bigger application I told you now one person knows about the DSA very well but weak in the we web development one person knows about the DSA and very very good in the web development so by using just using the Llama index also we can create the application the and to and application if you want to check guys you can go through with this particular video now here I'm coming to the documentation now so here in the documentation this is the documentation of the Llama index okay now now just check with the documentation I will just take uh two more minute and I will conclude this particular session just try to understand here so here uh use cases for the question answering for the chat bot right now here you will find out this uh llama index is also having agents agents for what tell me putting your rag pipeline together okay we can use it we can connect your rag Pipeline with some third party API now here you will find out so we have the indexing storing quering our evaluation that thing basically which I was trying to explain you now here you will find out the advanced 
rank so here you will find out see uh evaluating and all it is there now Advanced retrieval system now this uh this L chain is not having any advanced retrieval system but it is is having now here they have given you that if you want to create the uh if you want to build your R system from scratch so they have given you that code as well and I I'm going to like uh nail every each and everything in upcoming session don't worry here I'm just giving the understanding and if you want to go through with it definitely you should go now building data inje then building uh retrieval then uh here you can see evaluation and all everything you can perform here itself text tosql and even you can query with a different different uh like a uh different different uh B basically uh different different like data also like structured data or unstructured data or sem structured data any type of data over here now this is also this uh this is also giving you the models right different different models and it is also giving you the like prompting loading data indexing but not up to the mark but yeah for the retrieval and for the searching it is very very good now if you will look into the lench and documentation so here in the lench and documentation actually will find out it is giving you the comprehensive uh like it is giving the comprehensive like a tools tool kit for building the application so model is there then retrieval is there see it is also having a retrieval but like just a simple one okay just a basic one I told you know like weak in a DSA but very very good with the development now here you will find out so indexing and all everything is there but not like uh the advanced one so uh this llama index and lch you can connect together for creating a better uh or Advanced or system right Advan or if you want to make a capable system right or any Advan system so definitely you can connect both okay this llama index and this Lang CH now here you can see L serve is there I told 
you L serve so Lang Ser help develop deploy application so by using this Lang Ser you can deploy it Lang Smith is there so Lang Smith help you to trace and evaluate your language model okay so I was saying you know that I haven't explor it so it is uh going to be used for the tracing and for the evaluation now Lang graph is there so Lang graph is a library for building stateful multi actor application with LM means you can collect multiple component all together in whatever manner you want graph you know right graph so we are going to connect a multiple component so I hope guys this thing is clear to all of you now it was just an introduction I hope you like this notes as well definitely I will share with all of you this refined notes and I will be coming up with many more uh videos like this first I will focus on the rag and this Vector database I will try to clarify that a particular concept and then we'll try to build few more advanced applications so guys if you like this video then please hit the like button please subscribe the channel we'll meet you soon in the next video Until thank you bye-bye take care guys
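The LangGraph idea mentioned above, connecting multiple components into a graph (a DAG) and running them in dependency order, can be sketched in plain Python. This is not the LangGraph API; every function and node name here is made up purely for illustration, using only the standard library.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each "component" is just a function from the shared state dict to an update.
# These toy steps stand in for real pipeline stages (load -> split -> embed -> answer).
def load(state):   return {"doc": "raw text"}
def split(state):  return {"chunks": state["doc"].split()}
def embed(state):  return {"vectors": [len(c) for c in state["chunks"]]}
def answer(state): return {"answer": f"{len(state['vectors'])} chunks indexed"}

# The graph: node -> set of nodes it depends on (a directed acyclic graph).
graph = {"load": set(), "split": {"load"}, "embed": {"split"}, "answer": {"embed"}}
steps = {"load": load, "split": split, "embed": embed, "answer": answer}

def run(graph, steps):
    """Execute every component in topological (dependency) order, threading state."""
    state = {}
    for node in TopologicalSorter(graph).static_order():
        state.update(steps[node](state))
    return state

print(run(graph, steps)["answer"])  # -> 2 chunks indexed
```

The point of the sketch is only the shape: components are independent units, and the graph structure (rather than a fixed linear chain) decides how they compose, which is what makes "stateful multi-actor" pipelines possible.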
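The RAG pipeline recapped above (chunk the document, embed, store, do a semantic search for ranked results, then pass prompt plus results to the LLM) can also be sketched end to end in plain Python. This is a toy, not the LlamaIndex or LangChain API: the bag-of-words "embedding", the chunk size, and the prompt template are all illustrative stand-ins for real splitters, dense embeddings, and vector databases.

```python
import math
from collections import Counter

def chunk(text, size=8):
    """Split text into chunks of `size` words (toy stand-in for a real splitter)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy 'embedding': a bag-of-words Counter (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(store, query, k=2):
    """Semantic search: rank stored chunks against the query, return the top k."""
    q = embed(query)
    return sorted(store, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

def build_prompt(context_chunks, question):
    """Prompt and ranked results are passed to the LLM together, as in the recap."""
    context = "\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

doc = ("LlamaIndex is optimized for indexing and retrieval. "
       "LangChain is a comprehensive toolkit for end to end LLM applications. "
       "Both tools can be combined to enhance a RAG application.")
store = chunk(doc)  # stand-in for the vector database
top = retrieve(store, "what is LlamaIndex optimized for")
print(build_prompt(top, "What is LlamaIndex optimized for?"))
```

Notice the user never queries the store directly: the query is embedded, matched against stored chunks, and only the ranked context reaches the (here omitted) LLM call.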
Info
Channel: Sunny Savita
Views: 3,134
Keywords: langchain, llamaindex, vectordatabase, embeddings, searching, llms, gemini, gpt4, chatgpt, google gemini, sunny savita, sunny savita ineuron, sunny savita generative ai, artifical intelligence, machine learning, deep learning, mlops, pinecone, weviate, nlp, transformer, diffusion model, llm application
Id: EoauGRf_VCA
Length: 38min 32sec (2312 seconds)
Published: Tue Feb 20 2024