Laravel Chat with Database: Filament and OpenAI GPT Example

Video Statistics and Information

Captions
Hello guys, today on this channel I will demonstrate an old idea refreshed for 2024: asking your database with the help of AI. My earlier video on this was based on the package from Beyondcode called "Ask Your Database" (ask-database), and I decided to refresh it with a new Filament project. We will ask the Filament database questions like "how many leads were there this month or that month?".

The idea is based on a real job from Upwork: an AI chatbot. The job doesn't necessarily say anything about Laravel or Filament, but it requires uploading a CSV of monthly leads and then asking that database through a ChatGPT integration. At first I thought about implementing the idea for my second channel about Python, because the job mentions Python, but then I realized Filament is a perfect tool for this: you would have a leads table, and then next to it, or on a separate page, you would have a chat. For example: "how many leads were there last month?", send the message, wait for the AI response, and in a few seconds the GPT model should query the database and give us the answer of 84 leads.

Now, how does it work in the code? First, let's take a look at the Filament part. In Filament you create a page, in this case app/Filament/Pages/Chat.php extending Page, and then you define a form, in this case just a Textarea with "data" as the form state. Then you have the Blade view of that page, which contains the form with wire:submit="create" (this is a Livewire component). That wire:submit="create" calls the create() method in Chat.php. We have a few local variables: whether we are waiting for a response, and whether the reply is empty. We take the message and dispatch a queryAi event, and that event is handled with the Livewire On attribute. We will get to the GPTEngine in a minute, but the point of this dispatch/On pattern is that otherwise the query would execute in the same request and the page would not show the waiting state while we call the API. The GPT model is not exactly ChatGPT: we are querying the OpenAI API, and only when the reply comes back do we set waitingForResponse to false and store the last question. In the Blade view, waitingForResponse decides whether to show the "send message" button, and reply decides whether to show the answer.
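Here is a minimal sketch of that page as it is described above. It assumes Filament 3 and Livewire 3; the class, property, view, and event names (Chat, $data, $waitingForResponse, $reply, query-ai, GPTEngine) are taken from the narration or invented for illustration, not copied from the actual repository.

<?php

namespace App\Filament\Pages;

use App\Services\GPTEngine;
use Filament\Forms\Components\Textarea;
use Filament\Forms\Concerns\InteractsWithForms;
use Filament\Forms\Contracts\HasForms;
use Filament\Forms\Form;
use Filament\Pages\Page;
use Livewire\Attributes\On;

class Chat extends Page implements HasForms
{
    use InteractsWithForms;

    protected static string $view = 'filament.pages.chat';

    public ?array $data = [];

    public bool $waitingForResponse = false;

    public string $reply = '';

    public function mount(): void
    {
        $this->form->fill();
    }

    public function form(Form $form): Form
    {
        return $form
            ->schema([
                Textarea::make('question')->required(),
            ])
            ->statePath('data');
    }

    // Called from the Blade view via wire:submit="create".
    public function create(): void
    {
        $this->waitingForResponse = true;
        $this->reply = '';

        // Dispatch an event instead of calling the AI here, so this request
        // returns immediately and the "waiting for AI response" state is
        // rendered before the slow OpenAI call happens.
        $this->dispatch('query-ai', question: $this->form->getState()['question']);
    }

    #[On('query-ai')]
    public function queryAi(string $question): void
    {
        $this->reply = app(GPTEngine::class)->ask($question);
        $this->waitingForResponse = false;
    }
}

The page's Blade view would wrap the textarea in a form with wire:submit="create", show the send button only while $waitingForResponse is false, and render $reply once it is filled.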
Now let's see what is inside that GPTEngine. It is a class whose main method is ask(), which the page calls, and I will dissect it method by method over the next 5 to 10 minutes. But first, a disclaimer: a lot of it is still based on the package I demonstrated a year ago, laravel-ask-database by Beyondcode, so credits to Marcel and Beyondcode. Unfortunately, in the year since then it wasn't really improved or maintained; it was more of an experiment, which was typical of the time when everyone was experimenting with AI. In our case we took a lot of the logic from that package but refreshed it for newer GPT models and added some logic to make it work for our case, but a lot of credit still goes to Marcel.

Anyway, in the ask() method we first need to get the database query. In general we need two calls to the OpenAI API, to the GPT model: the first transforms natural-language text like "how many leads were there last month?" into a valid SQL query; we then execute that query against the database; and another call transforms the SQL results back into human language for the chat. Let's dissect it step by step.

First, the method getQueryFromUsersQuestion. We need to build the prompt for the GPT model, and this is quite complex. We get all the tables from our database (internally it uses the schema manager and lists the tables), and from all those tables we filter the matching ones. For that there is a separate Blade view that lists the table names and asks GPT to find the ones relevant to the question. That is the first call to GPT, and actually, now I realize there will be three calls in total, so this is the first one: we query OpenAI for the matching tables. queryOpenAi() is a method that uses a package, if we open composer.json, openai-php/laravel, which lets us chat with GPT: we pass one of the latest models at the moment and our prompt, and we get the result back.

Back in filterMatchingTables we perform some collection operations to turn the result into an array, and then we pass that array into another Blade view, the "database query" prompt, which becomes the prompt for GPT. It is a huge prompt, which is why it makes sense to keep it as a Blade file. That Blade view is not returned as HTML like a web page; it is used only to build a string, and that string is pretty big: it contains guidelines and context for the GPT model. This is actually how you should build prompts: the more information you give the model, the more accurate it will be. Inside we have foreach loops, if statements, and collection operations, typical Blade, passing in information such as the tables and the question, and emphasizing that the result should be a correct MySQL query ready for execution.

With that prompt built, back in getQueryFromUsersQuestion we query OpenAI again, and as a result we get the SQL query back. Then we need to ensure it is safe, and we have a separate method for that. There are of course multiple ways to do this, but here we chose the simplest one: filtering out words like INSERT, UPDATE, or DROP, because we only want SELECT queries; everything else is considered unsafe. Of course, security should also be enforced at the database-user level, with permissions for SELECT only, for example; this check is more like additional validation.

So the result of getQueryFromUsersQuestion is the SQL query. Then, in the main ask() method, we evaluate the query, meaning we run it against the real database. We take the result from the database and pass it into another prompt to the GPT model to produce the final answer: we build the prompt again from the same big Blade view, perform another OpenAI query, and return the answer to the user as a string.

So it is quite complicated logic to make it all work, and of course it is hard to fully trust the AI answers, because they may be correct only around 90% of the time. But generally it works: "were there any leads in 2022?", for example, and it answers that there were no leads in 2022. Or "who is the most active sales rep last month?" I'm not sure it will give the answer; we will see. In some cases it doesn't have an answer, and I'm not sure at which level it fails, maybe it wasn't able to identify what a sales rep is and that it should be taken from the leads table. I will put it all on GitHub so you can experiment yourself, but this is an example of how you can chat with your database, demonstrated with Filament, although it doesn't necessarily have to be Filament, and in the background it uses the OpenAI API.
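For reference, here is a condensed sketch of the GPTEngine flow described above, loosely following beyondcode/laravel-ask-database. The Blade view names, the model name, and the ensureQueryIsSafe() method name are assumptions; the video also reuses one big prompt view for both the SQL-generation and the answer step, which is simplified here into separate views.

<?php

namespace App\Services;

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Str;
use OpenAI\Laravel\Facades\OpenAI;

class GPTEngine
{
    public function ask(string $question): string
    {
        // Calls 1 and 2: turn the natural-language question into a SQL query.
        $query = $this->getQueryFromUsersQuestion($question);

        $this->ensureQueryIsSafe($query);

        // Run the generated SELECT against the real database.
        $result = json_encode(DB::select($query));

        // Call 3: turn the raw SQL result back into a human-readable answer.
        return $this->queryOpenAi(
            view('prompts.answer', compact('question', 'query', 'result'))->render()
        );
    }

    protected function getQueryFromUsersQuestion(string $question): string
    {
        // All table names of the current connection (Laravel 10 with
        // doctrine/dbal; newer versions expose Schema::getTableListing()).
        $tables = DB::connection()->getDoctrineSchemaManager()->listTableNames();

        // Call 1: ask the model which tables are relevant to the question.
        $matchingTables = collect(explode(',', $this->queryOpenAi(
            view('prompts.tables', ['tables' => $tables, 'question' => $question])->render()
        )))->map(fn (string $table) => trim($table));

        // Call 2: the big prompt (guidelines, schema context, question) that
        // asks for a single correct MySQL SELECT query.
        return $this->queryOpenAi(
            view('prompts.database-query', [
                'tables' => $matchingTables,
                'question' => $question,
            ])->render()
        );
    }

    protected function ensureQueryIsSafe(string $query): void
    {
        // Simple keyword blacklist: only SELECT queries are allowed. The real
        // protection should be a database user limited to SELECT permissions.
        foreach (['insert', 'update', 'delete', 'drop', 'alter', 'truncate'] as $keyword) {
            if (Str::contains(Str::lower($query), $keyword)) {
                throw new \RuntimeException('Only SELECT queries are allowed.');
            }
        }
    }

    protected function queryOpenAi(string $prompt): string
    {
        $response = OpenAI::chat()->create([
            'model' => 'gpt-4-turbo-preview',
            'messages' => [
                ['role' => 'user', 'content' => $prompt],
            ],
        ]);

        return trim($response->choices[0]->message->content);
    }
}

As mentioned in the video, the keyword blacklist is only additional validation; the safer setup combines it with a database connection whose user has SELECT-only permissions.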
What do you think about this example? Would you have done something differently, or what ideas do you have for future projects where we could experiment with AI, GPT, or something like that? Share your opinions in the comments below, subscribe to the channel to get more videos like this one and shorter tips about Laravel, Livewire, and maybe Filament, and see you guys in other videos.
Info
Channel: Laravel Daily
Views: 5,991
Id: n8o4cN7wEmw
Length: 8min 28sec (508 seconds)
Published: Wed Feb 14 2024