Can We Learn Generative AI With Open Source Models? All Alternatives to OpenAI's Paid APIs

Video Statistics and Information

Captions
Hello guys. For the past couple of months I have been uploading a lot of videos related to creating generative AI applications using LLM models and multimodal models, where I have specifically used both paid and open-source models. One of the important questions that I usually get from people is: "Krish, do we really need to have an OpenAI account, or do we need to put some credit over there and buy some services, in order to learn generative AI?" I will be talking about this in this specific video. The answer is no: you can actually learn the whole of generative AI just with open-source models. Again, there is an asterisk, because I will talk about everything you can explore for working with open-source models and creating generative AI applications, some of the myths that you probably have in your mind related to generative AI, and how I prepared with all these open-source models. Right now I know Hugging Face and how to explore it, I know frameworks like LangChain and LlamaIndex, I have explored some amazing platforms like CrewAI and the Google Gemini models, and I know how to access open-source models using the Groq API. I will be showing you all the platforms I follow. So if you have a monetary concern about getting a paid API, you don't have to worry; don't go and focus on paid APIs, instead focus on open-source LLM models. Let me go ahead and discuss this in this video.

The first platform that I definitely refer to for open-source models is Hugging Face. Now there is one challenge with Hugging Face, and I'll tell you why many people prefer paid API accounts. Understand one very important thing: while creating a generative AI application, the main problem statement is inferencing. Inferencing basically means that whenever I ask a question to my LLM model, it should be able to give the response quickly, and many businesses in this world want to develop applications where inferencing is very fast. Those kinds of organizations don't worry about the cost involved, because they are already making amazing revenue in the market. But what about the startups that really want to begin from scratch? There are options: you can directly use open-source LLM models like Llama 3, which can also be used for commercial purposes, and then you take care of the deployment and inferencing yourself. When it comes to learning generative AI, the main thing is how you can actually use LLM models, how you can perform fine-tuning, and so on; that is what actually matters, and even in interviews, if they see all these skills, trust me, you have a good chance of getting hired.

So, first things first: the platform that I definitely refer to for using LLM models, for the quantization process, and even for fine-tuning is Hugging Face. You will be able to get almost every model over here, whether it is image-text-to-text, multimodal, computer vision, natural language processing, audio, or tabular. Any kind of model that you specifically want is there, and these are state-of-the-art models that have been
created or trained with huge amounts of data, so you can also use them yourself. There is only one challenge. Let's say I'm using Google Colab, or whatever coding environment you prefer: you really need to have a powerful system with enough RAM. If this Meta Llama 3 8B, with its 8 billion parameters, is there and I try to download it on my local machine, then with my machine, which has around a 256 GB hard disk and 64 GB of RAM, I will be able to download it and work with my code. But what about people who are just using ordinary laptops? That is one challenge, and that is where, just for learning purposes, Google Colab is an alternative option. Now, if you use Google Colab and try to download this particular model, most applications will work, but with the Colab free account you hardly get 12 GB of RAM at most and only some gigabytes of disk, so there too you may need to get the paid version, which costs somewhere around $10 a month. Still, for most of the models over here, whenever you want to read about them and see how to call them through Transformers, everything is documented. Let's say I want to use this Meta Llama 3: I will be able to use the model through Transformers, I usually get the entire code right on the model page, and that is the code I specifically use; I go ahead and install Transformers and I'm happy to use it. If you go into more detail, like how we should structure our prompt and what pipeline we should create, every option is given. So I still believe Hugging Face is the best option for learning generative AI models.
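For reference, the model-page code that the transcript refers to usually boils down to something like the following minimal sketch. It assumes you have installed transformers, torch and accelerate, have enough RAM or GPU memory for an 8B model, and, for gated repositories such as Llama 3, have an approved Hugging Face access token; the exact model id is only an example taken from the Hub.

```python
# Minimal sketch: load an open-source model from the Hugging Face Hub and generate text.
# Assumes: pip install transformers torch accelerate, enough RAM/GPU for an 8B model,
# and (for gated repos like Llama 3) an approved Hugging Face access token.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # any open model id from the Hub can go here
    device_map="auto",   # spread layers across GPU/CPU automatically (needs accelerate)
    torch_dtype="auto",  # pick a precision suitable for the hardware
)

prompt = "Explain in one sentence what a large language model is."
result = pipe(prompt, max_new_tokens=80)
print(result[0]["generated_text"])
```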
Now let's say that you also don't have enough money to buy the Colab paid tier, but you still want to learn generative AI and play with models like Meta Llama 3 and all the open models that are coming up. The next option is Ollama. See, the main aim of learning is that once you learn things and develop some projects, then when you go to a company, taking a subscription or getting a good workstation will not be a problem, because the company has plenty of money. With Ollama, what we can do is run all our large language models locally. Yes, it again depends on your system configuration how fast the inferencing will be, but you will be able to develop the entire application. The thing about Ollama is that it is a platform which provides access to almost all the open-source LLM models. If I go to its GitHub page and show you which models it supports, here also the focus is on open-source models: Llama 3 8B is there, Llama 3 70B is there, Phi-3 is there, Mistral is there, Neural Chat is there, Starling is there, Code Llama is there, Solar is there, and different sizes are available. Almost everything is available over here for you, and that is the most amazing thing. The best part is that it is available for macOS, Linux and Windows: just download the installer, double-click it and install it. Once I open my command prompt, I just have to do one thing: ollama run llama2. If I run that, my Llama 2 model will get loaded; if it is not installed, then the first time it will download the entire model. Once that is done, I can have an interaction, something like "Hi, how are you?" This is just a demo to show you how fast and how quick it is, and I do have a powerful workstation; in my upcoming videos I'll be discussing what kind of workstation you should probably get.

And it's not only this much. If you really want to create projects, LangChain is one amazing framework which also provides an Ollama integration, so that you will be able to call any model that is running locally (a minimal sketch of this follows below). You can download any of the open-source models available through Ollama, let it be Llama 3 70B, and anything you want to try, you can develop the application in your local environment. Later on, if you want to do the deployment, spend some money and see how to deploy it on SageMaker or EC2; there are multiple options, but the main thing right now is how you can build generative AI applications. So Ollama is one tool that I specifically use, and as you can see, I don't need any OpenAI account or paid API key. Initially I used to pay the $20, but I don't want to buy it right now, because I even have access to multimodal models which are completely open source; in this case there is a model called LLaVA, and that is also a multimodal model. Yes, for generating high-quality images and such we may eventually require paid services, but not right now. So here I'm not using an OpenAI account, it is completely free, and I can also develop an end-to-end project from this.
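As a rough illustration of the LangChain-plus-Ollama workflow mentioned above, a minimal sketch might look like this. It assumes Ollama is installed and a model has already been pulled (the model name llama3 is an assumption), and that langchain and langchain-community are installed.

```python
# Minimal sketch: call a locally running Ollama model through LangChain.
# Assumes: Ollama is installed, `ollama pull llama3` has been run,
# and pip install langchain langchain-community.
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate

llm = Ollama(model="llama3")  # talks to the local Ollama server (default: localhost:11434)

prompt = PromptTemplate.from_template("Answer in one short paragraph: {question}")
chain = prompt | llm  # LCEL: pipe the prompt template into the local model

print(chain.invoke({"question": "What are the advantages of running LLMs locally?"}))
```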
Now you might say, "Krish, there are also models like Google Gemini, and there is also Groq." I'll talk more about Groq and Gemini in a moment, but first, there is one more platform which I had explored and recently created a video on: Jan AI. It allows you to run AI models like Llama or Mistral directly on your device, and here I don't even need an OpenAI account. Once you install Jan, you will be able to work with various models; if you explore the Hub inside Jan AI, you'll see that so many different models are there, and even paid models are listed. For the paid ones you need an API key: if I want to use OpenAI's GPT-3.5 Turbo I need an API key for it, and if I want GPT-4 Turbo I need to pay for it, so I would have to load, say, $5 of credit to be able to use those. So not only open-source models but paid models are also there, but for those you specifically require an API key. You can definitely check out this entire tool; in the future they may also come up with an API to integrate into your coding environment, or with more options for how you interact with it and set up parameters, and already there are so many different options. This is the second platform that I refer to, and here I don't require anything: I'm directly using a Llama 3 model over here, so if I say "hi", "hi again, how are you", any question that I want to ask is completely free, and it is running in my local environment, so everything is private. Even if I do not have an internet connection I will be able to work with it, because the model is downloaded locally. So this is another option you can go ahead with.

Now, if you still want more to work with, you have Gemini Pro and Gemini Flash; all these models are available, and you can directly use them with the help of a Google API key. These models are free for some number of requests per minute, I think around 60 requests per minute, and with that you can create a complete, amazing end-to-end application just to get an idea of how to work with generative AI applications. The best thing about Gemini Pro is that it has text, it has vision, it has multimodal support for both text and vision; it supports everything. So this is one more thing that I refer to when developing my projects, and across my playlists you'll be able to see all of these being used.

Then you have Groq. Groq is super amazing; the reason I use Groq is that it has something called an LPU inference engine. Compared to a GPU, the LPU inference engine is extremely fast, and you should definitely read the research paper about how fast it is. Just by getting an API key, I will be able to use any of the open-source models that they host, and I've also created similar videos on my channel, so you will definitely be able to see this. This is another platform that I use without any dependency on paid accounts; minimal sketches of both the Gemini and Groq calls follow below.
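For the Gemini free tier described above, a minimal sketch might look like the following. It assumes google-generativeai is installed and a free API key from Google AI Studio is exported as GOOGLE_API_KEY; the model name shown is only an assumption about one available Gemini model.

```python
# Minimal sketch: call Gemini through the free-tier Google API instead of a paid OpenAI account.
# Assumes: pip install google-generativeai and a free API key exported as GOOGLE_API_KEY.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption; use any model your key can access
response = model.generate_content("Suggest three beginner projects for learning generative AI.")
print(response.text)
```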
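And for the Groq LPU-backed API, a minimal sketch might look like this. It assumes the groq Python client is installed and a free key from the Groq console is exported as GROQ_API_KEY; the model id is an assumption and should be checked against the models Groq currently hosts.

```python
# Minimal sketch: run an open-source model (e.g. Llama 3) through Groq's LPU inference API.
# Assumes: pip install groq and a free API key exported as GROQ_API_KEY.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

chat = client.chat.completions.create(
    model="llama3-8b-8192",  # model id is an assumption; pick any model listed in the Groq console
    messages=[{"role": "user", "content": "Why does fast inference matter for LLM applications?"}],
)
print(chat.choices[0].message.content)
```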
Then you have CrewAI. CrewAI is also out there; just go ahead and explore it. You'll be able to use open-source tools, turn them into APIs and do multiple things with it, so this is another account that I keep. Now, to access all these things and work with all of them, I follow one particular framework called LangChain, an amazing framework that has almost everything, whether you are working with open-source models or anything else. If you really want to create agents, you have options for creating agents; so many different agents are there, and you can build a custom agent. I'll just show you how many different types of agents there are: there is the Tavily API, and if you want to query Wikipedia you'll be able to do it, so many different tools are definitely there. If you explore these agents, an agent basically means something integrated into your application that performs a specific function, whether it is a Google search, a Wikipedia search, or anything else that you want. So you can definitely explore this, and right now LangChain is a must, because it really helps you develop amazing applications (a minimal sketch of such an agent setup appears right after this transcript). Other than this, I have also created applications using LlamaIndex; this is another framework for building efficient RAG models and more, and a sketch of a simple local RAG pipeline also follows below. So there are so many different things out there.

Finally, the question you were probably waiting for an answer to: can we learn generative AI just by using open-source LLM models? The answer is yes. Once you have developed things, deployment is a separate concern; we can take a cloud provider, use whatever services we need, and do it. Yes, with respect to fine-tuning, I would suggest using Google Colab, because you really need a good GPU to do that. Along with this, you can check out all my playlists, like the fresh and updated LangChain series and the Google Gemini series, which are all available on my channel. I hope you liked this particular video. This was it from my side; I'll see you in the next video. Thank you, one and all. Take care, bye-bye.
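As referenced in the transcript, here is a minimal sketch of a LangChain agent that uses the Wikipedia tool on top of a local Ollama model. It assumes langchain, langchain-community, langchainhub and the wikipedia package are installed, a local Ollama model is available (the name llama3 is an assumption), and that the local model is capable enough to follow the ReAct prompt format.

```python
# Minimal sketch: a ReAct-style LangChain agent with a Wikipedia tool, driven by a local Ollama model.
# Assumes: pip install langchain langchain-community langchainhub wikipedia, plus a pulled Ollama model.
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.llms import Ollama
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

llm = Ollama(model="llama3")                                 # local model name is an assumption
tools = [WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())]

prompt = hub.pull("hwchase17/react")                         # standard ReAct prompt from LangChain Hub
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)

result = executor.invoke({"input": "Who proposed the Transformer architecture?"})
print(result["output"])
```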
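And here is a minimal sketch of the kind of local RAG pipeline the transcript mentions, built with LlamaIndex on top of Ollama so that no paid API is needed. It assumes the llama-index core package plus its Ollama and Hugging Face embedding integrations are installed, a local Ollama model is available, and a ./data folder with a few documents exists; the model and folder names are assumptions.

```python
# Minimal sketch: a fully local RAG pipeline with LlamaIndex + Ollama (no paid APIs).
# Assumes: pip install llama-index llama-index-llms-ollama llama-index-embeddings-huggingface,
# a pulled Ollama model, and a ./data folder containing some documents to index.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.llm = Ollama(model="llama3", request_timeout=120.0)                      # local LLM (name is an assumption)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")  # small local embedding model

documents = SimpleDirectoryReader("data").load_data()   # read everything under ./data
index = VectorStoreIndex.from_documents(documents)      # build an in-memory vector index

query_engine = index.as_query_engine()
print(query_engine.query("What does this document say about open-source LLMs?"))
```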
Info
Channel: Krish Naik
Views: 11,651
Keywords: yt:cc=on, open source llm models, llama2, llama3 tutorials, machine learning tutorials, deep learning tutorials, gen ai with open source models, krish naik langchain
Id: hxTzpl4PKBw
Length: 13min 8sec (788 seconds)
Published: Wed May 22 2024