Create Custom Chatbots alongside GPT-4o, Claude Opus, and Gemini 1.5 with ChatLLM

Video Statistics and Information

Captions
Let's imagine that we have a tool where we can use all the large language models available out there, ChatGPT, Claude, Gemini, and so on, inside a single chatbot and user interface, and also upload PDF documents, chat with them, and even have enterprise connectors like Slack, Teams, Drive, and so on that you can connect together with your chatbot. So in this video I'm going to show you the Abacus AI tool and also how you can fine-tune your own large language models and create specialized AI agents. You can upload your dataset with a few clicks, train your own models, and even deploy them directly with Abacus AI.

So let's just jump straight into the Abacus ChatLLM. We have all the different large language models available up here at the top, so we have GPT-4o, Llama 3, Claude Opus, Gemini 1.5 Pro, Abacus AI Smaug, and you can also add your own chatbots if you have some fine-tuned ones. It could also be agents, but I'm going to show you that later in the video, where we can upload our dataset and also fine-tune custom chatbots.

First of all, here we can just go and use GPT-4o. I'm just going to ask it "what is a neural network", and then again, you can choose which of the models you want to use. All the models have advantages and also disadvantages, so it really depends on what you're trying to do. I'm mainly using GPT-4o for general tasks, coding, researching, looking up new things, and so on, and then I'm using the Claude models if I'm creating blog posts, YouTube scripts, and so on, because I feel it's a bit more humanlike and better for written text. And then we also have Gemini 1.5 if you want a longer context window; if you have a larger task that you want to solve with more text, you can probably just take whole documents, code files, and so on and throw them directly into the Gemini 1.5 model. So I'm pretty much using those three models, and then the specialized AI agents and chatbots that you can fine-tune on your own dataset.

So this is pretty cool, a pretty fast response. We can also just go up, copy-paste the prompt, and test it out with one of the other models. We have this new release, Llama 3 from Meta, so that's also a pretty cool model. Let's just see the responses, and we should get them pretty much instantly. There we go, we can see we get a fairly nice, fast response.

Now I'm going to show you how we can also upload a document, so I'm going to add a new chat. We can use any of the large language models when we upload our PDF documents in here. Right now I just have this "Attention Is All You Need" paper, which is about the Transformer architecture, and the Transformer architecture is the building block for all the large language models out there. So we have this PDF document that we can upload directly into the chatbot as well and ask it different questions. Right now I'm just going to drag it in, there we go, and we can just ask it who the authors of this paper are. There we go, it's going to retrieve the information from the paper that we have uploaded instead of generating random text. So this is also how we can create RAG-based systems, where we retrieve actual information from our documents and then base our LLM responses on that. Here we can see the authors, we have authors one to eight, and let's just open up the paper to verify that this is actually correct with the correct names, and we can even see that it has the correct order as well. So this is pretty cool. We can go in and chat with it, you can choose any of the large language models in here, and it's going to generate the exact same results and retrieve the information from your PDF documents.
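Under the hood, a RAG flow like this one embeds the question, ranks the stored document chunks by similarity, and hands only the best-matching chunks to the model as context. Here is a minimal sketch in Python; the `embed` and `ask_llm` callables are hypothetical stand-ins for whatever embedding model and chat endpoint you actually use, not part of the Abacus AI API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / (norm or 1e-12)  # guard against zero vectors

def answer(question, chunks, embed, ask_llm, k=3):
    # chunks: list of {"text": ..., "vector": ...} records.
    # Embed the question, rank chunks by similarity, and prompt
    # the LLM with only the top-k chunks as grounding context.
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(c["vector"], q_vec), reverse=True)
    context = "\n\n".join(c["text"] for c in ranked[:k])
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```

In a real system the vector database does this ranking for you; the explicit sort here is just to make the idea visible.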
One of the other cool things about the Abacus ChatLLM here is that we can invite team members. You can set up different groups and different teams, add team members by email, and then you basically just have the whole workspace here together: the ChatLLM, all the large language models, and the chatbots and agents that you have fine-tuned for specific tasks. To invite users, you can throw in the email here and it's good to go. We have the teams, the groups, and also the different connectors.

Down here at the bottom you can see we have these different connectors, Slack, SharePoint, Teams, Drive, and so on, and we also have a bunch of other enterprise connectors; let's just go and take a look at that. We're going to add a new connector, and over here to the right we have our team, where we add team members and so on, so we have this whole collaborative workspace all working together. Here we have all the enterprise connectors that integrate with Abacus: databases, different cloud storages, Google Drive, Jira, Slack, Teams, pretty much all the different connectors that we use daily in our workflow. Right now I can just show you how easy it is: we just need to click the service we want, configure it, and verify that the service has been connected, and that's pretty much it, just a few clicks. Right now I'm just going to set up this Google Drive connector. I'm just going to copy this Drive link, paste it in, hit save, and then we can verify the service in a second. We need to give it access as a viewer, and we're good to go. Here the status is unverified; let's verify it, and in just a second it should change to verified as well. There we go, now we have connected our Google Drive folder into the Abacus workflow, and we can use it together with all the other things we have in here. So we have our database connectors, file connectors, application connectors, and also streaming connectors, and here we can see that we have this folder connected.

Now we have seen the ChatLLM interface with all the different options for the chatbot, so let's go and see the other projects that Abacus AI has. We have a bunch of different AI agents, and this is kind of the direction where generative AI is going: we now have these very large, generalized large language models which pretty much just know everything, but if we want to solve specific, specialized tasks, then they don't really work as well, and you're probably familiar with that. So then we can go in and take our documents, which could be PDF documents or databases; we can have different data types as well, numerical data, time-series data, and so on. We can upload it in here and it will take care of all of it, just a few clicks: it's going to chunk it up, embed it, and then we can fine-tune our own large language model on it and build RAG-based systems, so you can retrieve relevant information from your documents and data. We can have ChatLLM, AI agents, forecasting and planning, marketing and sales, and also vision AI, so we can even have vision datasets and create object detection models directly in here. The other tab here is custom fine-tuning of large language models based on your data, and we have a vector store as well: it will store all the documents, chunk them up, fine-tune your large language models and AI agents, and take care of all of it. Once it's done training, we can deploy the models directly in here and set up a whole data pipeline with our models.
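The "chunk it up and embed it" step that the platform automates can be pictured roughly like this; the fixed-size overlapping chunks and the plain Python list are deliberate simplifications of what a real ingestion pipeline and vector store do, and `embed` is again a hypothetical stand-in:

```python
def chunk_text(text, size=800, overlap=100):
    # Split a document into overlapping fixed-size chunks so that
    # sentences cut at a chunk boundary still appear intact in a neighbor.
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

def ingest(documents, embed):
    # Build a toy "vector store": one (text, vector) record per chunk,
    # ready to be ranked by the retrieval step shown earlier.
    store = []
    for doc in documents:
        for chunk in chunk_text(doc):
            store.append({"text": chunk, "vector": embed(chunk)})
    return store
```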
At the end of that pipeline we end up with an endpoint that we can send requests to and get responses back from, so it can be used in our own applications and projects out in the real world. This is how easy it is: within a few minutes, probably within a few hours if you have a ton of documents, you throw them in here and you have a production-level AI system up and running.

Right now let's just go and explore some of this. We can go inside of our dashboard; up at the top we have projects, datasets, feature groups, notebooks, all of that over here to the left. Let's create a new project where we fine-tune a large language model. We could also choose the AI agent over here to the right, but let's just go with a custom chatbot right now. I'm just going to call it "PDF", we hit finish, and it's going to create our project. Now we can go in and attach existing datasets if we had any, but we can also create new ones. So I'm going to create a new dataset; we just have this one PDF file that we're going to work with, so I'm going to create a new one from scratch. There we go, let's just call it "attention", hit continue, and then we can just drag and drop our files directly in here. It's going to take care of all the storing and chunking of the documents, do the embedding, store it in the vector databases, and so on. So now we have our data; we can upload a bunch of different files, multiple files, throw them in here and it's going to take care of it. Right now it's just uploading, and once it's done uploading, we can go in and explore and train our models.

So we have step one in here, where we set up the data pipeline; we already did that with our document, so it's just a single PDF file for now. We have feature groups, and it's converting our PDF into one right now. We have our document retrievers, then we can train our model, and after it's done training, deploy it, and we have an API that we can just call directly. So no code at all: upload your documents, train your model, deploy them, all in here on the Abacus platform.

Right now let's go and create our document retriever. I'm just going to select the feature group "attention", and we can hit create. Now that's created, so we have step one done, step two, and also step three, and now we can just hit train models and we're good to go. It will start the training automatically; we don't have to boot up any cloud services or our own GPU clusters, we just hit train models directly. We need to specify the name of the model, then we hit train model, and it's going to start the training. Now we can see that the model is pending on the training. If we scroll a bit further down, we can see that we have our dataset, feature groups, document retrievers, models, and deployments, so we pretty much have everything. The training status has started; once it's done, we can deploy the model with a single click as well, and then we'll have an endpoint with an API where we can send requests and get back the responses to our prompts.

Our model is now done training; we can see that the status is complete, and we can hit deploy directly. Here we can choose between online batch, and online batch plus real-time; let's just go with that. We can choose the model deployment name and the estimated number of calls per second, which is basically just there to scale the resources based on the number of requests. Let's deploy the model; it's going to set up the whole instance and the endpoint, and then we'll get an API that we can send requests to. So now we have the whole pipeline: our dataset connected, feature groups, document retrievers, models, and deployments.
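Once the deployment goes live, calling the model boils down to HTTP requests against that endpoint. This is only a hedged sketch with a placeholder URL, token, and payload shape; the real Abacus AI route and field names will differ, so follow the request example the platform generates for you:

```python
import requests

# Placeholder endpoint and token: substitute the deployment URL and
# API key that the platform shows you after deployment.
ENDPOINT = "https://api.example.com/v0/deployments/MY_DEPLOYMENT/chat"
API_KEY = "YOUR_API_KEY"

def ask(prompt):
    # Send the prompt to the deployed model and return the parsed reply.
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

print(ask("Who are the authors of the Attention Is All You Need paper?"))
```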
Right now we can see that it's pending, and then our model has been deployed. We can go to the dashboard, the predictions API, and create monitors, but let's now take a look at how we can send a request to it. This is just a request example: it throws in "what is the meaning of life", we run it through the model, and we get the response back down here at the bottom, where we can extract the information and use it in our own production systems. So this is how easy it is: we have trained our own custom large language model, and now we also know how to deploy it and use it in our own applications and projects.

Now let's see how we can use it with the ChatLLM teams as well, together with all the big guys, so we can choose between GPT-4, Gemini, and our own custom fine-tuned large language model. Right now we're inside the chatbot interface; let's take a look at all the variations, GPT-4, Llama 3, Gemini, and now we also have our own custom chatbot. Let's choose that one, go down, and throw in a prompt: "who are the authors of the Attention Is All You Need paper". Right now we haven't uploaded any documents; it's just going to use the information from our fine-tuned large language model and the documents that we have thrown into it. Then we can see that the authors of the "Attention Is All You Need" paper are all of these guys here, and that is correct as well; it's the exact same output as before, when we uploaded our own PDF file. But now we have our own custom data uploaded to Abacus AI, we have trained our model and deployed it, and now we can use it side by side with all the other big guys in here.

So that's it for this video; I hope you learned a ton. Definitely go ahead and check out this Abacus AI tool. We have the enterprise connectors, connect them with your chatbots, fine-tune them directly in there as well, and deploy them; take your documents, just throw them in there, fine-tune, deploy, and use them in real-world AI applications and projects. I hope to see you guys in one of the upcoming videos; until then, happy learning.

We also have an AI career program if you want to learn how to land AI jobs and get AI freelance work. I teach you everything in there; we have programs, all my technical courses, weekly live calls, and personal help, and I would love to have you guys in there and help you out in any way possible. You can check out the program and the community down in the description, and then I'll just see you guys in there.
Info
Channel: Nicolai Nielsen
Views: 2,014
Keywords: create custom chatbot, abacus training, llm explained, llm tutorial, llm rag, gemini 1.5 pro, gemini advanced, llama 2, llama 2 tutorial, chatgpt 4o, chatgpt, chatgpt how to use, claude 3, abacus tutorial, claude 3 vs gpt 4, llm rag vs fine tuning, llm rag project, training llm on custom data, training custom llm, rag based chatbot, rag based llm, vector database, create custom llm, abacus.ai, which chatbot is the best, chat llm, custom chatbot, abacus
Id: 0xc_7HrejjM
Length: 11min 40sec (700 seconds)
Published: Thu May 23 2024