Webinar - Generative AI Revolution: The Future | 2024

Captions
hello everyone, am I audible and visible? Please give me a quick confirmation. Yes, this session will be recorded and it will be available on YouTube. We are live from several platforms: the iNeuron LinkedIn page, the iNeuron Hindi and iNeuron Intelligence YouTube channels, and even Instagram. Someone asked, "Can you give an overview of how an LLM is different from a traditional application?" — I will get to that as part of the agenda. This will be roughly a two-hour session and I will take a lot of your queries, but the core goal is to understand generative AI end to end: the roadmap, how to learn it, what really matters, how an LLM model is trained (I will walk through a research paper for that), what large image models are, and how all of this differs from the traditional machine learning and deep learning algorithms you already know. You have heard about AI, machine learning, deep learning and data science — we will see exactly where generative AI, large language models and large image models fit into that picture. If you are excited, do hit like and make sure you subscribe, because every week I will come up with live sessions like this. Let me share my screen; we will go step by step, topic by topic, I will write everything in front of you, take Google's help wherever it is required, and show you research papers. Someone asked whether LLMs will take off in India — yes, many companies are already using them in real use cases. Let me just check that we are live on the iNeuron LinkedIn page as well — perfect, I can see myself there, and there are chats and messages
coming up. Great — let me hide the current comment. So what is the agenda of this session? First, as usual, we will understand what generative AI is — you have heard about machine learning, deep learning and natural language processing, so we will see exactly where it fits. Second, we will understand how LLM models are trained and what a large language model actually means. Third, we will discuss open-source versus paid LLM models, which one you can use if you don't have enough money, and what you have to take care of — at the end of the day these are all models, and yes, if you have powerful GPUs it is even possible to train your own LLM from scratch; I will explain everything. The models that are very famous in the industry right now: ChatGPT (the underlying model, say GPT-4), good open-source models like Llama 2, and Gemini Pro. Why Gemini Pro and not PaLM or Bard? Google recently launched three versions of its Gemini models, and Gemini Pro is available for everyone to use to create an end-to-end project, completely free right now — you can make around 60 queries per minute — and soon paid tiers with higher limits will also come. It is a good set of models to start with, and beyond this list there are many other open-source LLM models such as Mistral. We will also discuss large image models, and I will touch on LLM models and OpenAI as questions come in. Let's see how many topics I can cover step by step; if something remains we will continue in next week's Friday session. I hope everybody is clear with the agenda. Now let's go ahead and understand generative AI.
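Since the free Gemini Pro tier came up in the agenda, here is a minimal sketch of how one might call it from Python. It assumes the google-generativeai package and the "gemini-pro" model name that were current around the time of this webinar, plus an API key from Google AI Studio — treat it as an illustration, not the exact code used in the session.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")       # placeholder key from Google AI Studio
model = genai.GenerativeModel("gemini-pro")   # model name current at the time of the session
response = model.generate_content("Write a short essay on generative AI")
print(response.text)
```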
Before understanding generative AI and where it falls, let me use a diagram I have explained in many of my classes. Consider the entire universe: I would say this universe is artificial intelligence. What is the main aim of artificial intelligence? Whether you work as a data scientist, as a machine learning engineer, or as a software engineer who wants to harness the power of machine learning and deep learning, at the end of the day you are creating smart applications that can perform their own tasks without any human intervention — that is what artificial intelligence means. Some examples: Netflix is a movie streaming platform, and inside it an AI module is integrated — a movie recommendation system. Netflix is already a software product, but this module makes it smarter, so it can recommend movies to us without any human intervention. Our inputs are captured — whether you like action, sentimental or comedy movies, all of that gets recorded — but no human sits there taking the decision; the AI application decides by itself. That is what artificial intelligence is all about. Now the second circle inside this universe is machine learning. What exactly is machine learning? Machine learning provides you statistical tools to analyze data and to create models, and those models perform various tasks — forecasting, prediction, feature engineering and so on. Here you learn about supervised and unsupervised machine learning, and you create models that can do classification, regression, forecasting and time-series prediction. Five or six years back everybody used machine learning techniques, and they are still used for most use cases, but because of generative AI people can now think much more broadly about different business use cases.
At the end of the day, that is what we do with the help of machine learning. Now comes the next one: deep learning, which is a subset of machine learning. The main aim of deep learning is to create multi-layered neural networks. Why do we need them? Because we want applications that learn the way we human beings learn — if I want a machine to learn in a similar way to how I study and teach, I use a multi-layered neural network. The ideas go back to the 1950s, but now we have huge amounts of data, and the key difference from machine learning is that the more data you train a deep learning model on, the more its performance keeps increasing; the same does not happen with classical machine learning. Within deep learning you have already learned the basic building blocks — ANN, CNN, RNN — and on top of them object detection models like R-CNN and YOLO, sequence models like LSTM and GRU, and Transformers, BERT, and encoder-decoder architectures. Each of these solves specific use cases, and all of it is part of deep learning, which in turn is a subset of machine learning. I hope everybody is clear till here — I have taught this before; the reason I am repeating it is so you understand where generative AI falls into the picture. Students from iNeuron who have made the transition are already working on generative AI and building LLM applications to solve different business use cases, which is why I am covering all of this. Now, where does generative AI fall? Generative AI falls as a subset of deep learning — I will draw this circle inside deep learning. Why is it a subset of deep learning? Because at the end of the day we are still using deep learning techniques: most of the LLM models you will see are based on two architectures, Transformers and BERT.
These two are super amazing models. I hope you have heard of the paper "Attention Is All You Need" — that is where the Transformer comes from. Transformers and BERT are encoder-decoder, sequence-to-sequence models, and they are the base of many, many generative AI or LLM models you will see, so you should definitely know them as basic building blocks. Today there are so many models in the market — ChatGPT on top of GPT-3.5 and GPT-4 (and now GPT-4 Turbo is coming), Llama models, Falcon, Mistral, Gemini Pro, Google's Bard and PaLM models — and most of them have the Transformer or BERT architecture as the base. On top of that base, companies train with different techniques: they add reinforcement learning, they do various kinds of fine-tuning, to make their model better, and that is how the comparisons are made. Recently Google came out with Gemini Pro and started publishing comparisons — it is better than GPT-3.5 on this benchmark, it reaches this MMLU accuracy, this reasoning accuracy — all of that is just metrics. At the end of the day the base is either a Transformer or BERT: you take this advanced neural network architecture and you train it by pumping in huge amounts of data. Then someone says "this model has billions of parameters" and everybody is shocked — wow, billions of parameters, we are going to get a good model — but understand the context: billions of parameters simply means how many weight and bias parameters the network has. I will discuss Gemini Pro and show you some of its accuracy metrics once we read the research paper. Now, one more distinction: inside generative AI we have LLMs, large language models, and LIMs, large image models. An LLM can solve any kind of use case related to text — whenever I say LLM, think text. Whenever I say large image model, think images — any use case involving images or video frames. There is one more term you may have heard: Google says Gemini Pro is multimodal. Multimodal means the model can handle use cases for both text and images — it can do both tasks — and that is why it is called multimodal.
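To make the "billions of parameters" remark concrete, here is a small sketch that counts the weights and biases of a freely downloadable model. It assumes the Hugging Face transformers library; bert-base-uncased is used only because it is small — a 7B or 70B LLM is the same idea at a much larger scale.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")   # small, freely downloadable example
total = sum(p.numel() for p in model.parameters())       # every weight and bias tensor counts
print(f"bert-base-uncased has roughly {total / 1e6:.0f} million parameters")
# A 7B or 70B LLM is the same idea, just with many more and much larger layers.
```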
I hope everybody has heard of a tool called Midjourney, which generates amazing images — Midjourney is a large image model: you write a text prompt and it creates an image. Gemini Pro can go the other way as well: you give it an image, it detects the objects in it and can then write a blog post for you. Why is this beneficial for companies? Because companies and startups no longer have to waste time building everything from scratch — earlier they created projects from zero, built models, did fine-tuning, worried about data, did many things; now it has really become simplified. Someone asked whether DALL-E is a large image model — yes, DALL-E is definitely a large image model. And yes, ChatGPT is an LLM: ChatGPT uses GPT-3.5 and GPT-4, which are large language models; when ChatGPT uses DALL-E, that part is a large image model. If you have understood till here, please hit like — it motivates me to come back next Friday; let's target more than 500 likes by the end of the session. Everything I write will be available to you; you will find the webinar link in the description. So that was a brief idea of where generative AI fits in. Now let's understand what exactly generative AI is. I will also talk about LangChain, Chainlit and LlamaIndex and where they fall, but first let's start with the basics, and again two things will keep coming up: large language models and large image models. The first question you should ask is: why the word "generative" in the first place? Some years back we used traditional machine learning algorithms: we performed feature engineering, trained a model, did fine-tuning, and finally did the deployment.
Why did we first move from traditional machine learning algorithms to what I will call traditional deep learning algorithms? The best way to show it is a graph of dataset size versus performance. With a traditional machine learning algorithm, as the data increases, after a point the performance curve bends and flattens out — even if I keep increasing the data, the performance stops improving. With deep learning algorithms — meaning multi-layered neural networks — as I keep increasing the dataset, the performance also keeps increasing. That is the reason deep learning became very, very famous, and most people started solving supervised and unsupervised problem statements with deep learning. From object detection to NLP, from computer vision to almost any task, you can do it with a traditional deep learning algorithm; Hugging Face has a deep learning model available for practically any task you want, which you can download, fine-tune, and use with transfer learning to create your own application. Till here everything was good and companies were working happily. Then one amazing thing happened: around the time the blockchain hype was on and most people and researchers were focused on web3, there was also a good set of researchers focusing on something called generative AI. Let me draw a diagram to make you understand what exactly it is. I will write deep learning here, because as you know generative AI is a subset of deep learning. All the traditional deep learning algorithms are usually called discriminative, so deep learning is broadly divided into two techniques: the discriminative technique and the generative technique. In the discriminative technique, which tasks are you focused on? Classify, predict, object detection — supervised and related techniques — and these models are trained on labeled datasets.
That discriminative side is what we do with traditional deep learning algorithms. Now let's understand generative models, and this is where you will get a clear idea. In generative models the task is to generate new data, having been trained on some data. The word "generative" is the whole point: you are generating new data after being trained on some dataset. Example: ask "write an essay on generative AI" — the model has been trained on a huge amount of data from the internet, and it will be able to answer. Let me give a real-world example, because people usually like these and it will help you understand multiple things. Imagine a person in 12th standard who clears the NEET exam and starts MBBS — the degree for becoming a doctor. How many years will this person study? Four plus one: four years of learning and one year of internship. During those years, will they learn from many book sources — at least thousands of books? Can you read just one book and become a doctor? No — they will spend those five years reading many, many books (don't fight over the exact number), plus gaining internship experience. After learning from many books and spending those 4+1 years, this person becomes a doctor — consider this our doctor-ChatGPT. Now, if you go and ask this doctor any generic medical question, will you get an answer? Yes. And is it necessary that the doctor answers only word-for-word from the books? No — the doctor creates their own answer. You say "I'm not feeling well, I have these symptoms," and the doctor responds in their own words, because they have the knowledge from all those books, all the internship experience, all the patients they have treated in those five years.
So this doctor is able to generate a response — can I say this doctor acts like an LLM model that is an expert in medicine? That is exactly what OpenAI is trying to enable with the GPT Store: you can create your own GPT, train it with your own custom data, tell the model how it has to behave, and build that app on the go — the option is already there in OpenAI, I have tried it, and it works absolutely fine. So after 4+1 years you have an LLM that is an expert in medicine. The next step for this doctor is to become an MD: there will be some questions the general doctor cannot answer properly, or will only answer generically, so we need to train this LLM again with more data — the specialization. You want to become an MD in cardiology, or ortho, or some other field; that expertise comes when the person is trained for another two to three years of books along with hands-on experience. I hope you are getting it — the more examples you see, the better you understand. At the end of the day this doctor generates their own response based on the problem statement it sees. Tomorrow, if you are working in any business or company, you can take any model, fine-tune it with your own dataset, and that model will behave according to your company's use case. If you have understood till here, give a thumbs up and hit like — we are trying to democratize AI education here, and trust me, you are going to use this somewhere or the other, in your personal day-to-day activities or at work. Don't worry about jobs: if you are good at something, whatever technology you come from, you will be able to get jobs; all you have to do is have the knowledge, and when you try to convey it to someone else you realize how important this technology is. Tomorrow a business use case is getting solved and you provide a solution where the company doesn't have to spend much money —
then the company will keep you over anyone else and give you most of the problems to solve. So please hit like and share this with your friends — it may be helpful to someone who is learning. Now, the next step: we have understood generative AI, and its main aim is to generate content. Let me talk about use cases with respect to both techniques. First, the discriminative technique — the name itself says it discriminates based on the data and gives you some output, a classification or regression result. Say I have a dataset of types of music; I create a discriminative DL model whose job is to classify whether a piece of music is rock, classical, or romantic. That is the discriminative technique. Now the generative technique, with the same domain: I have some music, I train my generative model on it (how the training happens, I will talk about shortly), and the generative model's task is to generate a new piece of music. This is just one use case of generative AI — I am not worried here about large language versus large image models — I am just showing what a discriminative model does versus what a generative model does. In short, we are generating new content; that is the super important part. Clear, everyone? Don't worry, LangChain and LlamaIndex I will teach as well — just wait till the end of the session. Now the main question: how are LLM models trained? Let me use one open-source model's paper — the Llama 2 paper. Llama 2 is a model trained by Meta, and this is its research paper. The reason I am showing it is that based on this paper I will teach you how such a model is likely trained. And yes, this video and all the material I am writing and showing will be available later on YouTube and in the dashboard, so don't worry — focus on the class for now.
There are three important pieces of information you can see in the paper's contents. One is pre-training: it describes the pre-training data, the training details, and the evaluation of the pre-trained Llama 2 models. The next is fine-tuning, specifically supervised fine-tuning — remember this term, SFT; it is a super important and amazing technique, and I will break it down so you understand how the training usually happens. The third is reinforcement learning with human feedback, RLHF — remember this too, because the same technique is used in the ChatGPT models: supervised fine-tuning, then reinforcement learning with human feedback, along with something called a reward model, which I will also discuss. I am showing you this research paper because it is quite easy to understand if you have some prerequisite knowledge about Transformers and a few accuracy and performance-metric concepts. In the introduction you can see that Llama 2 scales up to 70 billion parameters; there are three model sizes, which we will discuss — 7 billion, 13 billion, and 70 billion. The overview diagram shows the whole pipeline: pre-training data, self-supervised learning, Llama 2, then SFT, then rejection sampling and proximal policy optimization — that is the reinforcement learning with human feedback part — and based on that feedback, a safety reward model and a helpfulness reward model. Don't worry, I will break down and explain every component you see in that diagram. Where does the pre-training data come from? The paper says the pre-training data includes a new mix of data from publicly available sources, which does not include data from Meta's products or services, and that "we made an effort to remove data from certain sites known to contain a high volume of personal information about private individuals." Note the phrasing — "we made an effort"; take that as you will. The model was then trained on two trillion tokens of data, which the authors say provides a good performance-cost trade-off. And then: we adopt most of the pre-training settings and model architecture from Llama 1; we use the standard Transformer architecture.
Honestly, if you give me a chance I can also create an LLM model — creating one is not conceptually very difficult. The real problem is cost: how much GPU you need, how long the training will take, how much data you require, and what team size you need for the reinforcement learning and the annotation, labeling, indexing and ranking work. Only big companies with billions of dollars in funding can afford all of this. Can you also do it tomorrow? Yes — provided someone gives you the money, because you need those huge GPUs and people doing the human-feedback tasks. In India we don't focus that much on research; we focus on solving business use cases and earning revenue from them — I have not seen many companies doing research at that scale — and the infrastructure cost is a big reason why. Back to the paper: it uses the standard Transformer architecture, so if you know Transformers you can follow it; they apply pre-normalization and a few other techniques, the code is available, and since Llama 2 is open source you can reproduce it. They trained using the AdamW optimizer — I have already made videos explaining how the Adam family of optimizers works — with the usual beta1 and beta2 values, a cosine learning-rate schedule, warm-up steps, and decay down to a final learning rate. Nothing here is brand new; it is like taking sand and bricks and building a five-star hotel — whoever has the money can build the huge bungalow or the Maharaja's palace. Llama 1 came in 7B, 13B, 33B and 65B sizes; Llama 2 comes in 7B, 13B, 34B and 70B. Why do the parameter counts keep increasing? More data is added, more fine-tuning is done, more reinforcement is done, and as a result more weights and biases get added — that is the only way parameters grow. Don't imagine they have put a rocket launcher inside the model and it will now go to Mars; they have added more data and more fine-tuning techniques, and because of that more weights and biases. That's it.
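As a concrete illustration of the optimizer setup just described, here is a minimal PyTorch sketch of AdamW with a linear warm-up followed by cosine decay of the learning rate. The hyperparameter values are illustrative stand-ins rather than figures copied from the paper, and the Linear layer is only a placeholder for the actual Transformer.

```python
import math
import torch

model = torch.nn.Linear(512, 512)                 # placeholder for the real Transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4,
                              betas=(0.9, 0.95), weight_decay=0.1)  # illustrative values

warmup_steps, total_steps = 2_000, 100_000

def lr_lambda(step):
    if step < warmup_steps:                       # linear warm-up
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.1 + 0.45 * (1.0 + math.cos(math.pi * progress))  # cosine decay to 10% of peak

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# inside the training loop, per batch:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```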
That is all about Llama 2 at a high level. The paper also shows the training loss curve, like you have seen in many deep learning videos. Now look at the training hardware: they trained the models on Meta's Research Super Cluster, and both clusters use NVIDIA A100 GPUs. Let's check what an A100 costs — roughly $10,000 per chip, and from what I just saw a configured unit can run on the order of $27,000; put more chips together and the cost keeps increasing. Our laptop having an RTX 4090 means nothing at this scale, and on top of the hardware there are electricity costs and many other things involved. The RSC uses NVIDIA Quantum InfiniBand, while their production cluster is equipped with RoCE. The paper even reports the CO2 emitted during pre-training — you have to include this information if you want to publish — along with the total GPU time required for training each model and the peak power consumption per GPU device, around 350 to 400 watts. The total comes to roughly 3.3 million (33 lakh) GPU hours. Which startup in India has that kind of time and money? Do the math — 24 hours into 365 days — and see how many GPU-years that is. Then there are the benchmark comparisons by model size: code, common-sense reasoning, world knowledge, reading comprehension, math, MMLU (massive multitask language understanding), BBH and AGIEval.
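To finish the back-of-the-envelope calculation from the session, here is the arithmetic in Python, using the roughly 3.3 million A100 GPU-hours figure read out from the paper; the 2,000-GPU cluster size is assumed purely for illustration.

```python
gpu_hours = 3_311_000                     # ~33 lakh GPU hours, as read out from the paper
hours_per_year = 24 * 365
print(gpu_hours / hours_per_year)         # ≈ 378 years if you had only a single GPU
print(gpu_hours / (2_000 * 24))           # ≈ 69 days of wall-clock time on a 2,000-GPU cluster
```

That is why only well-funded labs attempt this kind of pre-training from scratch.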
Now that I have given you all this background, let's understand how these LLM models are actually trained, step by step. I am going to write down the stages of training. Stage one is generative pre-training. Stage two is supervised fine-tuning, which we also call SFT. Stage three is reinforcement learning through human feedback — the same three stages you saw in the research paper. In stage one, generative pre-training, we feed huge amounts of data: any LLM basically takes internet text data and document text data — PDFs and all those formats — and we train a Transformer (or BERT-style) architecture on it. The outcome of this stage is what I will call the base Transformer model: whatever Transformer I trained on that raw data is my base Transformer model. This base model is then connected to supervised fine-tuning, because the same model will be taken and fine-tuned on top of it. The base Transformer model can already do various tasks — text classification, text summarization, multiple things — but for an LLM we don't stop here; we take it to the next step, supervised fine-tuning, where we also involve human trainers to put together conversations and create some more custom data (I will talk about what exactly a human trainer does when I dive deeper), and then we train the model on that custom data — sorry, one second: my system just crashed and the scribble notebook with the material I had written got deleted. Apologies; I can't help it, but everything will be visible in the recording, so don't worry. Let me continue from where I was. So, stage one, generative pre-training: we use the Transformer architecture, which is super beneficial for NLP tasks, and we feed it internet text data and document text data; once we train the Transformer on this, we get the base Transformer model.
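To show what a base, pre-training-only model can and cannot do, here is a tiny sketch using GPT-2 from the Hugging Face transformers library as a stand-in: it happily continues text, but it has had no SFT or RLHF, so it does not behave like a chat assistant.

```python
from transformers import pipeline

# gpt2 is a small base model: pre-training only, no SFT, no RLHF.
generator = pipeline("text-generation", model="gpt2")
out = generator("Generative AI is", max_new_tokens=20)
print(out[0]["generated_text"])   # it continues the text, it does not chat
```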
What is this base Transformer model capable of? It can do tasks like text summarization, sentiment analysis, text or word completion, and text translation — all of these it will be able to do. But what is our main aim with generative AI? Our expectation is a model that can do chat and conversation. What we have achieved with pre-training is the former; our goal is the latter — that is the goal of generative AI. So we don't stop at stage one; we go to stage two. Stage one is quite simple: huge data, whatever labeling is required, train the Transformer, get a base Transformer model that can do those tasks but cannot yet hold a conversation. Agreed? If you agree, give a thumbs up. To get a generative AI on top of this I need to do more, and that is stage two: supervised fine-tuning, SFT. This is the most important step, and it requires humans. We make a set of people sit as human agents: one human agent sends a request, just like you would in a chatbot, and based on that request another human agent writes an ideal response — "ideal" meaning the kind of answer we would actually want for that question. Then the next request, the next response, and so on — it is just like a chat conversation, except both sides are written by people. This is how we set up the SFT training dataset: a labeled dataset of request and response, request and response, request and response. That is the complete dataset we create from this process, from the conversations these human beings have had.
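As a minimal sketch of what that SFT data might look like, here is an illustrative snippet; the field names, examples, and prompt template are made up for illustration, not taken from any particular dataset.

```python
sft_dataset = [
    {"request": "I have a fever and a sore throat. What should I do?",
     "response": "Rest, drink plenty of fluids, and see a doctor if the fever lasts more than three days."},
    {"request": "Write two lines about generative AI.",
     "response": "Generative AI refers to models that create new content. They are trained on large amounts of existing data."},
]

def to_training_text(example):
    # Each pair is flattened into one training string; the model is then
    # trained to produce the response tokens that follow the request.
    return f"### Request:\n{example['request']}\n### Response:\n{example['response']}"

print(to_training_text(sft_dataset[0]))
```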
Now we take this dataset and send it to the base Transformer model, and we fine-tune it using an optimizer — say AdamW, the same optimizer used in Llama — whose job is to reduce the loss. The result is the SFT Transformer model. So SFT is done with data produced by real human agents: that request/response data is the labeled data created during the SFT process, and it is used to train the base Transformer model into an SFT Transformer model. Now, you might think this model will already give you accurate results, but it can still face hallucination — it may not answer correctly, because there will be requests and responses it has never seen. For that we need the next step, stage three, where we use reinforcement learning through human feedback; without this reinforcement step the model will hallucinate and can give you rubbish answers. What happens in reinforcement learning? Take the trained SFT Transformer model. After training, whenever a human gives a request, we get a response from the SFT chatbot — and for one request we can actually record multiple responses: response A, response B, response C, response D. Once we have these responses, a human being ranks them, and this is where reinforcement is applied: for this request, this response should be ranked first (the ideal response), this one second, this one third, and so on — the ranking is given by another human agent.
So for that case, what do we need to do? We need to go to our next step, stage three, where we use reinforcement learning through human feedback (RLHF), because we also need human feedback; without this reinforcement learning step the model will still face hallucination and may give you rubbish answers. Now, what happens in reinforcement learning? Let's say this is my SFT-trained Transformer model. After training, whenever a human gives a request, we get a response from the SFT chatbot. But based on one request I may also get multiple responses, and that is where you'll understand the reinforcement part: we record the SFT chatbot's multiple responses — response A, response B, response C, response D, and so on. Now, for these responses, a human being does some ranking, and this is where the reinforcement signal comes from. The ranking says that for this request, this should be the first (ideal) response, this the second ideal response, this the third, and so on; this ranking is given by another human agent. So, step by step: first step, second step, then the ranking is done. What does this ranking do? It says, for example, that response A's rank should be greater than response B's, which should be greater than response D's, which is greater than response C's. Once we assign these ranks, these are my ranked responses, and what we do next is train a fully connected neural network: the inputs are my conversation history, and the targets are the ranked responses. The model we train this way is called the reward model. So in this reinforcement stage, what exactly are we doing? We take the SFT Transformer model, collect multiple responses, apply human feedback to decide which response should be ranked above which, assign ranks, and then train a fully connected neural network on the conversation history and the ranks, so that based on these ranks it can provide rewards to the Transformer model. I hope you're able to understand — if you are, please hit like, because this is the most important thing in generative AI, in creating these entire LLM models, and trust me, once you understand this, reading the research papers becomes very easy. That is how my reward model is created in stage three. I will use one image to show you the next model — just a second, everyone, I think my system has hung — so till then let me go ahead and continue. Finally, after we have this entire reward model, we make sure we build the final models; at the end of the day these three steps are what let us create any LLM model. What are the key things? First, your training data needs to be created — the more training data, the better. Second is reinforcement learning with human feedback. Third is the fine-tuning part, the SFT: what kind of requests and responses the human agents are providing. And within the reinforcement step, the most important thing is how the ranking is done. These are the main things, and obviously the architecture we use underneath all of it is the Transformer.
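As a rough illustration of the reward model idea: assuming we turn a human ranking like A > B > D > C into pairwise preferences and score each (conversation, response) pair with a small fully connected network trained on a pairwise ranking loss (the usual InstructGPT-style recipe), a sketch could look like this. The embed() function here is a deterministic placeholder, not a real text encoder:

```python
# Minimal sketch of a reward model: a small fully connected network that scores
# (conversation, response) pairs. Human rankings are turned into pairwise
# preferences and trained with a pairwise ranking loss. Placeholder embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

def embed(text: str, dim: int = 64) -> torch.Tensor:
    # Placeholder: deterministic pseudo-embedding so the sketch runs end to end.
    g = torch.Generator().manual_seed(abs(hash(text)) % (2**31))
    return torch.randn(dim, generator=g)

class RewardModel(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x):              # x: (batch, dim) -> one scalar reward per row
        return self.net(x).squeeze(-1)

prompt = "Explain reinforcement learning simply."
# Responses ordered best-to-worst according to the human ranker (A > B > D > C).
ranked = ["response A", "response B", "response D", "response C"]

model = RewardModel()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

pairs = [(ranked[i], ranked[j]) for i in range(len(ranked)) for j in range(i + 1, len(ranked))]
for better, worse in pairs:
    r_better = model(embed(prompt + better).unsqueeze(0))
    r_worse = model(embed(prompt + worse).unsqueeze(0))
    # Pairwise ranking loss: push the better response's reward above the worse one's.
    loss = -F.logsigmoid(r_better - r_worse).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trained scorer is what later hands out rewards to the LLM during the policy-optimization step.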
So how was the understanding, guys, with respect to all these things? Have you understood or not? Please do let me know — got it? Yes? Great, everyone is giving me the right answers here. Now, going forward, what do you really need to focus on as a person who is interested in getting into generative AI? The road map. If you really want to start, here is the road map to generative AI, and the prerequisites. What are the prerequisites? Obviously, one programming language: Python. Second, you really need to be strong at NLP, and when I say NLP I mean machine learning concepts with respect to NLP, where you learn the different embedding techniques — what an embedding is, how you can convert text into vectors — and converting text into vectors has a lot of ideas behind it (a small code example of these techniques is shown a little further below). So guys, there is also one more stage after the reward model — let me explain it here, because my other screen has got stuck — and it is called proximal policy optimization (PPO). I will create a live video on this next week; let me just write it down for you: after creating the reward model, we use it in proximal policy optimization. We will discuss this in the next live session; it is another important algorithm altogether. After this step our final LLM model is ready, and it is super important, because the reward model is responsible for assigning rewards based on the various responses my LLM model gives. So I will cover this in an upcoming class or live session. Now let's go ahead and understand the NLP part. As I said, the prerequisite is that in machine learning you need to understand how words are converted into vectors, and there are multiple techniques — I hope you have heard about bag of words, TF-IDF, embeddings, Word2Vec. All these techniques are used to convert words into vectors so that when the machine is trained on inputs and outputs, it is able to understand the entire context. So basics of machine learning — I still call this the basics of machine learning — you really need a good amount of knowledge of some of the algorithms. Third, you really need to understand deep learning techniques: how a neural network works, what optimizers are, what a loss function is, what overfitting is, what activation functions are, what a multi-layer neural network is, forward propagation and backward propagation. So many topics are there, and these are the basic building blocks, so please make sure that you are really good at them. I'm not saying that someone cannot jump directly to generative AI — they can. If you are a developer building some kind of application, then without knowing all these things, with any one programming language, you can directly go ahead, use the API, consume it, and build an application. But these fundamentals are for those people who specifically want to work as data scientists or generative AI engineers in companies; for them, they really need to follow this path.
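A quick, self-contained example of the text-to-vector prerequisite mentioned above, using scikit-learn's bag-of-words and TF-IDF vectorizers on a toy corpus (Word2Vec and learned embeddings would be the next step beyond this):

```python
# Turning text into vectors with bag-of-words and TF-IDF using scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "generative ai creates new content",
    "large language models generate text",
    "embeddings convert text into vectors",
]

bow = CountVectorizer().fit_transform(corpus)      # raw word counts per document
tfidf = TfidfVectorizer().fit_transform(corpus)    # term frequency * inverse document frequency

print(bow.toarray().shape, tfidf.toarray().shape)  # each is (3, vocabulary_size)
```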
Without this basic knowledge they cannot really learn generative AI properly. Why? I'll tell you. If you jump directly into generative AI, you may be able to develop applications, but when you go for interviews, people are going to ask you the basic things first, and if you are not able to answer them, they will not start with generative AI at all — first they'll see how good your basics are, and only then will they go further and ask more questions. So it is super important: you cannot just jump into things. The fourth topic you will see after deep learning is advanced deep learning techniques. Here we focus on RNN, LSTM and GRU — all of these neural networks you really need to understand — plus encoder–decoder architectures and Transformers; as I said, "Attention Is All You Need". You should understand all of these architectures, because again, in the interview they are going to ask you about them. They may tell you to design or write the code for a basic Transformer and check whether you can do it (a minimal self-attention sketch is shown a little further below), because everything in generative AI is built on top of the Transformer. Everything up to this point I usually consider a prerequisite to get into generative AI; it's okay if you only have some basic knowledge of all these things, but it is always good to have it so that you have in-depth knowledge when working in generative AI. Then, coming to the fifth part, here I'm going to focus on the different libraries. OpenAI has come up with its GPT models — GPT-3.5, GPT-4, GPT-4 Turbo — and you can use all of these to develop LLM applications. Not only this, you can also use other frameworks like LangChain. LangChain is quite popular right now because many people are using it to create LLM applications, and the best thing about LangChain is that the framework is built in such a way that you can use paid APIs as well as open-source LLM models, and perform whatever task is required along with prompt engineering. There is one more framework called LlamaIndex; LlamaIndex is also a very good framework, and it is specifically used for querying purposes — querying vectors — so it is a very important framework right now too. And as you all know, Google Gemini has arrived, and right now Gemini Pro is available, so you can also use Gemini Pro; it has its own libraries, and you can use it for any LLM application. Right now we don't have the documentation for how fine-tuning is done there, but in a few days that too will come. Alongside all these libraries there are the open-source LLM models; with these open-source models you can also do fine-tuning, but again, at the end of the day, for fine-tuning you really need big GPUs. See, open-source models are readily available: you can directly download them, you can quantize them to make them smaller, and you can fine-tune them with your own dataset — for that you require GPUs, and for inferencing purposes you also require good machines, in short.
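Here is the kind of minimal scaled dot-product self-attention sketch that interview question is pointing at — a bare-bones illustration of the core Transformer operation from "Attention Is All You Need", not a production implementation:

```python
# Bare-bones scaled dot-product self-attention, the core block of a Transformer.
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):                                # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        weights = scores.softmax(dim=-1)                 # attention weights over the sequence
        return weights @ v                               # weighted sum of value vectors

x = torch.randn(2, 10, 64)                               # 2 sequences, 10 tokens, 64-dim embeddings
print(SelfAttention()(x).shape)                          # torch.Size([2, 10, 64])
```

A full Transformer stacks this (multi-headed) with residual connections, layer norm and a feed-forward network, but being able to write this core step from memory is usually what the interview question is testing.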
In short, if you are good at all these things, trust me, you will be able to work with generative AI — but again, it is a process where you have to learn all of this. In the future I'll also try to take a session where we discuss all these prerequisites in depth and go through the mathematical intuition. Is CNN required? CNN is not really needed here — CNN matters if you are interested in large image models, but most of the use cases coming up right now are on large language models; if anybody is interested, you can learn CNN if you want, but if you are interested in the LLM side, I feel you should focus on the things above. So guys, how was the session altogether — good or not? Oh, great. Just a second, I will take up questions; my screen has got stuck, so let's take some questions till then. Yes — every week. Great, you are able to hear me, so please let me know how the session was, and if you liked it, please hit like — it takes a lot of effort to run these kinds of sessions, and we are planning them for every Friday, so it will be amazing to teach. "Got something new to learn" — great, amazing. I hope everybody is happy. Please share it with your friends on all the platforms, because at the end of the day this is all free content; our main aim at iNeuron is to democratize AI education, so please try to learn in that way, try to understand these techniques, and then try to build applications. "Can a non-developer also learn this?" Yes, anyone can learn this — anyone — because with respect to coding it is very simple. "What is the boundary of SFT, and is the interface for querying only sampling?" — "Tell us about the course you're launching on generative AI at iNeuron." So guys, we are launching a generative AI course, probably from next month; you can find all the details in the description of this particular video, or visit the iNeuron website.
There the generative AI course is coming up, so based on that you'll be able to see it — check it out in the description of this particular video. "Will the recording of today's class be available?" Yes, it will be available on YouTube, and it will be available in the dashboard whose link is given in the description. Okay, perfect — so hit like, guys. Any more questions, any queries? "Hi sir, can you tell me about the differences…" — "Great sessions, really appreciate it." So guys, just give me a five-minute break and then we will take up the questions; my system has hung, so I will restart it. Prashant, you can stop sharing if possible; I will just take five minutes, and then I will talk about that. Thank you. [Music] Okay, am I audible? Am I audible? Hello, hello — okay, two minutes, two minutes. Audible, right? Perfect, sorry. Okay, so let's start. First of all, people were asking about the difference between generative AI and Gemini Pro. So, generative AI, as I said, guys: large language models are a part of generative AI, and similarly, large image models are also a part of generative AI. Generative AI itself is a subfield of deep learning; our main aim is to create new content based on the data we have trained on, and Gemini Pro is one of these LLM models. Okay, let's take these questions. "Great session sir, really helpful, and really appreciate your initiative of democratizing generative AI learning" — thank you. "Going forward, will ML or DL techniques no longer be in use? Will we focus more on LLMs plus fine-tuning?" It depends from company to company. If there is a company focused on creating use cases quickly that doesn't have a cost issue, they can directly use LLMs, because if you are creating an application with classical machine learning or deep learning, you have to do everything from scratch. Okay, let's take more questions. "How large a dataset of requests and responses is created by human agents under SFT? Manually creating such a large dataset seems impossible." Well, if they put a hundred people on it every day, with that many tasks, just imagine how much data you can create — a huge amount. "What tasks can we perform with Gemini Pro?" Everything — text summarization, Q&A, document Q&A, embeddings — everything is possible; I'll also plan one session on Gemini Pro. "Why is the focus more on LLMs in gen AI?" Because you are able to solve business use cases in a much more accurate way, so it is very good. "How can generative AI be used for solving real-life business problems?" There are a lot of real-world business problems companies need solved — from chatbots to text summarization to document classification, it is used everywhere. At iNeuron, too, we are trying to automate the entire support system, combining it with human intervention, and there we will also be using LLM models; for assignment generation we are planning to use LLM models as well, and many more things. At iNeuron we have also built our own models. "Can you show how to fine-tune a GPT model using the API?" Yes, it is possible, but again, we need to make sure we have a good, well-configured system if you want to do it with an open-source LLM; if you want to go with the paid API, that too we will try to do in the upcoming sessions.
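For the paid-API route, here is a rough sketch of what fine-tuning a GPT model through the OpenAI API looks like — assuming the openai Python SDK (v1+), an OPENAI_API_KEY set in the environment, and a prepared train.jsonl of chat-format examples; the exact file format and parameters should be checked against the current OpenAI documentation:

```python
# Rough sketch: fine-tuning a GPT model via the OpenAI API.
# Assumes OPENAI_API_KEY is set and train.jsonl contains lines like
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
from openai import OpenAI

client = OpenAI()

# 1. Upload the training file.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# 2. Launch the fine-tuning job on a base chat model.
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id, job.status)  # poll this job id until the fine-tuned model is ready
```

The open-source route (downloading a model, quantizing it, and fine-tuning locally) follows the same idea but, as mentioned above, needs your own GPUs.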
Okay — "tell us more about generative AI." So here is the page; I'm going to share the iNeuron website, so let me share my screen. I hope everybody is able to see my screen — please give me a confirmation — and I will try to answer this question. As soon as you go to the homepage, you'll see the first course that we are launching on generative AI: mastering generative AI with OpenAI, LangChain and LlamaIndex, from 20 January 2024. This will be a three- to four-month course altogether, with one year of dashboard access, and it is for everyone out there, whether you are a college student or a working professional; along with this, we will also be providing you access to the iNeuron virtual lab. What are we going to learn? We are going to master everything related to OpenAI, LangChain and LlamaIndex, and specifically develop applications end to end, up to deployment; we will show you multiple things there. When is this batch starting? 20th January 2024; the language is English; five months' duration; 10 a.m. to 1 p.m., Saturday and Sunday class timings; it will be live, instructor-led. You have one year of dashboard access and assessments in all modules. The reason we are giving one year of dashboard access is that this content will get upgraded every six months, I guess — there are a lot of upgrades in the field of generative AI — so there is no point in giving lifetime access or anything like that. Virtual lab access is there, and dedicated community support. The mentors will be myself, Sudhanshu, Savita and Bappy — some portion I will be taking, some portion Sudhanshu, some Savita, some Bappy — and you have already seen them doing live sessions on generative AI on the iNeuron YouTube channel itself, so you can definitely check that out. Then, if you have any queries, you can talk to a counselor, or you can contact the IVR number given on the website. And this is the entire syllabus: we are going to start from the basics — the road map I have shown you today, and based on that road map only — bag of words, TF-IDF, Word2Vec, FastText, n-grams, ELMo, BERT-based models; then large language models — what BERT, GPT, T5 and Megatron are, GPT-3 and GPT-3.5, how ChatGPT is trained, an introduction to GPT-4; then we are going to learn about Hugging Face and look at different open-source models; then we will talk about LLM-powered applications and create end-to-end projects; then we are going to use OpenAI — everything that is available in OpenAI, because many companies are also using it; then we will also cover prompt engineering and LangChain — LangChain completely, in depth; then we will also complete LlamaIndex — all these frameworks will be covered. And finally, you will have a lot of end-to-end projects in every section, and all of these projects come with deployment.
You can check all of it in the syllabus; all the information will be given in the description of this particular video, and as I said, if you have any queries, talk to the counselor — they'll help you out. "What hardware is required to learn gen AI?" No hardware is needed; you will be able to execute all your code in the iNeuron lab, and if anything extra is required we will let you know at a later stage, but whatever is freely available in the iNeuron virtual lab, you will be able to use. "Please tell us, is there any prerequisite?" Yes — the Python programming language, and for that also we are giving you pre-recorded videos; only Python you should know, the rest is fine. "Generative AI community edition versus the new launch — what's the difference?" The community sessions only go up to a certain level; compared to what we are teaching in the course, it is hardly 10 to 15%. "Will we require an OpenAI API key?" Yes, we will show you a way to handle that, but at the end of the day, if you want to do fine-tuning and so on, you will require an OpenAI API key. "Do you have a hands-on projects course for machine learning and data science?" Yes — let me share my screen for that as well, because we have launched one: the production-ready data science project course. If you go to the site and click on "production ready data science project", this is a one-month course where we solve five projects end to end, and it includes machine learning, deep learning, natural language processing and generative AI. It is completely end to end, and it comes with MLOps — machine learning operations — and all the tools. The timing: it starts from 27 January, 8 to 11 p.m., Monday, Wednesday and Friday, so in one week, across three days, we will be completing one project, and all the sessions will be live, instructor-led.
Again, you can go to the iNeuron page and check it out, and if you want to talk to the counselor, talk to them. The mentors — Savita and Bappy will be the main mentors here, taking the entire set of sessions: Monday, Wednesday and Friday, 8 to 11 at night. Now, the best thing we have done is include MLOps — everything that is required. So what will you be covering in this? OpenAI, AWS, GitHub, Docker, Azure, LangChain, Jenkins; along with this we will be looking at CircleCI, GitHub Actions, CI/CD pipelines, Docker, Kubernetes — everything that is required is covered, so it is a complete MLOps syllabus: DVC, dockerization, AWS, Jenkins, CI/CD pipelines. In every project you will see us using some of these tools — for example, in the industry safety project we will be doing dockerization, AWS and a GitHub Actions CI/CD pipeline, and if I go to the named-entity project you will see DVC, dockerization, Azure and CircleCI — so everything will be covered with respect to that, and we have also included a generative AI project. Great, so I'm stopping here; for any info you require, you can go ahead and contact the IVR number given there. Okay, perfect — so how was the session altogether, did you like it? "Do we need to do projects on ML/DL to get a job in gen AI?" Yes, obviously — MLOps, MLOps, MLOps. The generative AI projects that we are going to do in the gen AI course will also include a lot of MLOps activities; it is more about creating applications. Course fees and everything else you can find on the course page itself. Okay, perfect, guys — so thank you, this was it; I think we have completed the two-hour session. From the coming Monday we are also starting the MLOps community series, so please make sure that you subscribe to the channel and press the bell notification icon — that is super important. We will be starting from next week itself; you can check it out, and all the reminders will be posted on the channel; there will be dashboard access, materials, everything that you actually require. So thank you, this was it from my side. If you liked the video, please hit like, subscribe, and share it with all your friends. I will see you all in next week's Friday session, where we will discuss more things; but again, we have a lot more coming from iNeuron itself — we'll have the entire MLOps community session starting next week, where we'll discuss how an end-to-end project is built, whether it is an NLP project, a machine learning project or a gen AI project, how MLOps can be used, and many more things. So thank you, have a great day, and keep on rocking. If you liked this session, please do hit like, and yes, I will see you in the next session. Thank you, guys, have a great day — bye-bye, take care, and keep on rocking. Thank you, bye.
Info
Channel: iNeuron Intelligence
Views: 9,132
Keywords: ineuron, ineuron generative ai, ineuron intelligence, ineuron ai, gen ai, gen ai full course, generative ai full course, krish naik generative ai, krish naik, i neuron, generative ai webinar, generative ai 2024, generative ai explained, generative ai roadmap, all you need to know about generative ai, generative ai revolution, generative ai model webinar, gen ai webinar, gen ai revolution
Id: PoKwTzmrAts
Length: 117min 28sec (7048 seconds)
Published: Fri Dec 22 2023