LlamaIndex Webinar: Build No-Code RAG with Flowise

Captions
Hey everyone, Jerry here. Welcome back to another episode of the LlamaIndex webinar series. I'm pleased to be joined by Henry from Flowise AI, a very popular low-code/no-code open-source visual tool that helps you build custom orchestration with LLMs, covering a lot of application use cases — agents, RAG — and we'll be going through some of those examples today. Henry will first give a short overview of Flowise, walk through a demo application, and then we'll do some Q&A towards the end. If you have any questions, please feel free to drop them in the chat and we'll go over them in the last section. So without further ado, Henry, take it away.

Perfect, and thanks Jerry for having me here. I'm really excited to be doing this one. It's been a long time, at least three or four months, since I last did a webinar, so I'm probably a little bit rusty. Feel free to stop me at any time, and if you have any questions just pop them into the chat — I'm happy to answer any of them.

To start off, I'd like to cover what Flowise is, what its purpose is, and why we even created it. It came from my own experience. I used to work at a company called Fidelity Investments, an investment brokerage firm — as you can tell, not a technical firm like Meta or Google. Internally we had our own innovation group where we tried to implement different AI solutions, and that's when we hit a bottleneck: the AI space is moving too fast, and there's a lot of expertise involved in creating these kinds of RAG applications — you want to do fine-tuning, you want to try different prompt engineering — and it's very hard to hand all of that to your engineering teams across all the products they've been building. That's when I had the idea of creating this low-code/no-code interface where, even if you're not a professional data scientist or a software engineer with expertise in AI, you can still participate in this LLM evolution. You can use drag and drop to simply configure the application you want to build and the data sources you want to consume, and you also have the flexibility to use different integrations like LlamaIndex and LangChain, as well as different vector databases. At a high level, we just want to provide good developer tooling — not just for the data science world, but for all the different web developer worlds too. That's the purpose of Flowise and why we created it in the first place.

Without further ado, let me jump into a demo — a video demo speaks a thousand words — so I'll share my screen. So this is Flowise. I think the light mode is too bright, so I'll switch it to dark mode. At a high level, we let developers easily create different chat flows. A chat flow — if you click into one of them — is a drag-and-drop canvas where you can configure your own orchestration framework. In this case, as you can tell from the picture, I'm using a text splitter, a PDF file, Pinecone, and embeddings, and I'm doing an
upsert. I'll go over this in detail, but the basic idea is that you have different chat flows where you can build your own customized LLM apps and orchestrations.

We also provide different templates. The idea is to get people started very quickly, because when beginners — people coming in from outside the AI world — want to play around with this stuff, there's too much information out there and too many different techniques; they don't know where to start or how to use it. So we provide these prebuilt templates that let our community and users see, "oh, this can actually be used in this way, and in that way." For example, we have integrations with LangChain and LlamaIndex. If we switch over to LlamaIndex — there's still a lot more to do in the future, but for now we provide four preview templates. Each of these templates gives you a different idea of how you can use some of the LlamaIndex integrations: a query engine, a context chat engine, a chat engine, and a sub-question query engine.

Just to highlight a few other features we have: Tools, where you can create different tools to be used with agents — under the hood a tool is just a function, so you can write your functions there. Assistants, which are the OpenAI Assistants. Credentials, where you create your different keys. Variables, if you want to import dynamic variables into a chat flow. And of course API keys, which protect a chat flow as well.

If there are no further questions, I'll jump into a very simple chat flow to begin with. First I want to show the query engine. Users can very easily start from the preview templates — you click "use template" and it brings you into a new canvas where everything has been connected for you; all you have to do is put in your credentials and your configuration. Tristan, if you have any questions, feel free to put them in the chat and we can go over them throughout the process.

So, the query engine. I think it's the simplest engine. At a high level, it consumes information from a vector database and answers questions based on the documents you have upserted to that vector database. In this case I'll use Pinecone — and of course, if you go to the plus section here under LlamaIndex, there are other options as well. Currently we have Pinecone and a SimpleStore, which is a very simple implementation of a vector store, but moving on we'll be adding more, like MongoDB and Postgres. For now it's the Pinecone store.

What I want to do here is ask questions about these documents: I have one document which is the Form 10-K for Apple, and another document for Tesla. Put another way, I want to build a RAG application to ask questions about the Apple
Form 10-K and the Tesla Form 10-K. To do that, let me save this workflow first as "query engine". The first step, of course, is to upsert the documents to a vector database. Here you can see I'm using a text splitter to split the text from the PDF documents into different chunks, and I've uploaded the Tesla PDF — not sure if you can see it — and the Apple document as well. Then I just connect that to the Pinecone node, with OpenAI embeddings — though you can also use different embeddings provided by LlamaIndex. The next step is to fill in the credentials for your Pinecone instance and your Pinecone index, and here I'm using a namespace for the Form 10-K documents.

I've done this previously, so all you have to do is click this little green icon at the top right. Once you click it, you get all the options — including using the API to upsert — and you can adjust different configurations as well. When you click upsert — I won't do it again, because I've already done it — here is the Form 10-K data I previously upserted, and you can see all the data here from the PDF: the source, the text, etc.

One thing I want to highlight, which is quite important: when you do an upsert, you should specify metadata. This is so that when you query, you can retrieve very precise data by filtering on that metadata. Here, when upserting, I'm specifying a metadata field "source: tesla", and likewise I specified a source for Apple. If you go back to the Pinecone dashboard, you can see I have a source "apple" and a source "tesla" as well. I'll show you why we have to do this when we get to the sub-question query engine, but bear in mind this is very important for getting better results from the LLM.

That's the high level. Once you've finished the upsert and checked that the data is available and visible in Pinecone, the next step, very naturally, is the querying — the RAG-based application. Let me change this: we could use Anthropic, but I'll use OpenAI for the demo's sake, and connect it to the chat model here, along with the OpenAI embeddings. Here we also have the option to specify a response synthesizer. I think this is a very interesting feature from LlamaIndex, where you can compress or specify different ways to return the output from the LLM. We provide the different response synthesizer options: compact, refine, simple response builder, and tree summarize. If you search the LlamaIndex.TS documentation, or even the Python documentation, you can see the different results each one gives.

Now let me fill in the Pinecone credentials, the Flowise index, and the namespace — the Form 10-K one — and save it. I'm doing this in real time, so I'm not sure what happens — maybe it returns an error — but let's see. I want to ask — let me actually put it in a bigger window — "What is the business focus area of Apple?" Let's see if this works... yes: you can see the answer written by the LLM, but also the source documents — the citations — from the Pinecone database as well. So this is a very simple RAG application.
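To make the wiring concrete, here is a rough LlamaIndex.TS equivalent of what the canvas assembles: load the documents with a `source` metadata field, index them, and query. This is a minimal sketch assuming the `llamaindex` package; the placeholder texts and the in-memory vector store (standing in for Pinecone) are assumptions, not what Flowise runs internally:

```typescript
import { Document, VectorStoreIndex } from "llamaindex";

// Placeholder text standing in for the PDF contents; in the Flowise canvas
// the PDF loader and text splitter nodes produce these chunks.
const appleText = "...extracted text of Apple's Form 10-K...";
const teslaText = "...extracted text of Tesla's Form 10-K...";

// Attach the same `source` metadata field Henry sets during the upsert step.
const docs = [
  new Document({ text: appleText, metadata: { source: "apple" } }),
  new Document({ text: teslaText, metadata: { source: "tesla" } }),
];

// fromDocuments chunks, embeds, and stores the documents; the Pinecone node
// plays this role in Flowise, with a persistent store instead of in-memory.
const index = await VectorStoreIndex.fromDocuments(docs);

// The query engine node: retrieve relevant chunks, then synthesize an answer.
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({
  query: "What is the business focus area of Apple?", // older llamaindex versions take a plain string here
});
console.log(response.toString());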
But the downside — if you want more human-like conversation — is this: say you type "hello", just wanting a basic chat; you don't want it to go look things up in the vector database. In this case, though, it will always look up context from the vector store. As you can see — "based on the given context information, the query 'hello'..." — it gives an answer that isn't what you're looking for. We'd expect something like "hello!" back from the AI, but it will always look up context from Pinecone. That's the nature of the query engine. Before bringing in the next step, I want to pause here — any questions from the chat?

Yeah, there are a few questions in the chat. Maybe to start — I think some of these are a little more forward-looking — can you implement things like GPT-4V or LLaVA? What are your thoughts on incorporating multimodal capabilities into Flowise?

We are launching that next month — next week, actually. I wanted to keep it a secret, but there will be a new feature where you can turn on the multimodal option for the LLMs, and from the chat you'll have the ability to upload an image and use GPT-4V with it, and even do speech-to-text and text-to-speech. So multimodal is coming in the next release, which is probably next week.

Great. Another question is about the OpenAI credentials: do you need to put in your OpenAI key, and how does that work? Generally for these credentials, do you just enter them into the box in the UI, and how do you specify authentication?

We are an open-source project, so we don't hold your keys or anything. You deploy your own Flowise instance, and once you create a credential, it is created in the database — when you spin up Flowise you have options to use different databases: Postgres, MySQL, or SQLite. Once you create a credential, everything is encrypted using a secret key specified by yourself. We have pretty clear documentation on the credentials side of Flowise — authentication at the chat level and the app level as well — so feel free to take a look. The idea is that when you create a credential, the secret key you specify is used to encrypt the credential before it's stored in your database.

Great. The next question: text-to-SQL, which I think is still under development in LlamaIndex.TS. What you showed right now is a RAG flow — do you also have flows over structured data, like SQL?

I think we don't have that yet, but that's definitely one of the biggest interests from the community. We do have something for LangChain, but not yet for LlamaIndex — we'll be bringing those integrations very soon.

Well, on the LlamaIndex side I think we still need to write that, but it'll be coming soon, so hopefully it should be easy on your end. Great — I think those are good questions for now; maybe we can continue with the demo. There are some more questions in the chat, but I'll surface some in a bit.
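Before the demo resumes, a quick aside on the credential handling described above: encrypting with a user-supplied secret key before storing is a standard pattern, and it can be sketched with Node's built-in crypto module. This is only an illustration of the idea — the environment variable name and the storage format are assumptions, not Flowise's actual implementation:

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "crypto";

// Derive a 32-byte AES key from a user-supplied secret (the env var name is
// a placeholder, not Flowise's actual variable).
const key = scryptSync(process.env.FLOWISE_SECRET_KEY ?? "change-me", "flowise-salt", 32);

export function encryptCredential(plain: string): string {
  const iv = randomBytes(16);
  const cipher = createCipheriv("aes-256-cbc", key, iv);
  const enc = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  // Store the IV alongside the ciphertext so it can be decrypted later.
  return `${iv.toString("hex")}:${enc.toString("hex")}`;
}

export function decryptCredential(stored: string): string {
  const [ivHex, encHex] = stored.split(":");
  const decipher = createDecipheriv("aes-256-cbc", key, Buffer.from(ivHex, "hex"));
  return Buffer.concat([
    decipher.update(Buffer.from(encHex, "hex")),
    decipher.final(),
  ]).toString("utf8");
}
```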
All right, back to the demo. I talked about the downside — or not the downside, the nature — of the query engine: it will always look up documents from the vector database. If you're looking for something more like human interaction, we have to use — actually, let me delete this chat flow and create a new one. Go to the plus, go to LlamaIndex, and here we have the different engines we can use.

Let's start with a very simple chat engine. A simple chat engine means you're just having a basic conversation with the LLM. Here you can see we need a chat model to be connected, and a memory as well. I'll put in the chat model — the OpenAI model — and one of the nice things is that you can actually use the memory blocks from LangChain too. Here I'm using a LangChain memory block — let's use Redis; I have a Redis instance running in Docker — so let's connect this. If you want, you can also specify a system message, but I'll just leave it blank for now. Let's save this as "chat". In this case, if I type a very simple "hi, hello", you can see a very human-like interaction with the LLM — "why is the sky blue?", you can ask questions and things like that. It's just ChatGPT, right? But the downside — again, not really the downside, the nature — is that it doesn't have the ability to look up information.

That brings me to my next engine, which is called the context chat engine. Let me delete this one, create a new one from scratch, and go to the LlamaIndex engines: context chat engine. As you can see, we need a chat model, a vector store retriever, and a memory. You also have the ability to return the source documents, and to specify your own system message. For example, if you don't want the AI to answer anything beyond the scope of the context, you can specify a system message like: "Answer only questions related to the documents; if a question is not related, say 'I'm not sure' and stop after that."

Now let me connect the flow. We need a chat model, and here we need a vector store — I'll go back to using the Pinecone one — and we need a memory as well, so back to the nodes; for memory I'll just stick with Redis. You can specify different credentials; I'll just use the preconfigured one. Now comes the boring task of filling in all the information: the Pinecone Form 10-K namespace, the connected OpenAI credentials... okay, looks like we've filled out most of them. Chat model: check. Pinecone — oh, we actually still need the chat model and embeddings. I think — Jerry, you guys are getting rid of the service context, right?

Yes, we got rid of it on the Python side, and it's coming on the TS side — don't worry.

Hopefully it won't break too many things! We need those two nodes because of the service context, but let's see if the new updates bring a more simplified version. Now let me put in the ChatOpenAI node again, and of course the embeddings, and connect the nodes together. Okay, checking again — looks like we have everything configured correctly. Let's save this, and I also want to return the source documents. Save it.
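In LlamaIndex.TS terms, this node graph roughly corresponds to a `ContextChatEngine` wrapping a retriever. A minimal sketch, continuing from the `index` built in the earlier upsert example; the exact constructor options and call signature vary across llamaindex versions, so treat them as assumptions:

```typescript
import { ContextChatEngine, OpenAI } from "llamaindex";

// Retriever over the previously built index; each chat turn first fetches
// relevant chunks, then answers with the chat history taken into account.
const retriever = index.asRetriever();

const chatEngine = new ContextChatEngine({
  retriever,
  chatModel: new OpenAI({ model: "gpt-3.5-turbo" }),
});

// (Exact call signature varies across llamaindex versions; a system prompt
// restricting answers to the context can typically be supplied as well.)
const reply = await chatEngine.chat({
  message: "What is the business focus area of Tesla?",
});
console.log(reply.toString());
```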
Now if I go back and say "hello" — it actually returns a very human-like interaction. It will still look up information from the vector database, but the similarity scores of those retrieved documents are very low. For the LangChain implementation we already provide a way for users to specify the score threshold they want to set — not for LlamaIndex yet, but that's coming soon for the LlamaIndex integration as well. For now, you can see you can have a very basic human-like interaction — "why is the sky blue?" — let's see what it says... the sky appears blue, and so on. So that's the human-interaction side.

Now, say I want to ask questions about the Form 10-K — the actual source documents. Let's see if it can do that: "What is the business focus area of Tesla?" — "The business focus of Tesla is designing, manufacturing..." and so on. You can see from the sources where this all comes from — Tesla — and you can actually go back to the PDF document and compare: it's lines 14 to 16, page number 19, etc. So this is the improved version, combining the query engine and the simple chat engine: it can do human interaction, but you can also ask questions and do the RAG stuff — it's able to fetch documents from the vector database when needed. I'd say this is the most widely used setup when people create RAG applications: they want human-like interaction plus the ability to look up documents.

The next thing I want to show is the sub-question query engine. LlamaIndex has a very interesting demo, SEC Insights, where you can select different SEC Form 10-K filings and then ask, "What are the main business focus areas?" — let me grab this as a simple showcase. You can see the question gets broken down into different pieces: "What are the main business focus areas of Apple?" and "What are the main business focus areas of Tesla?" — and in the end the answers are combined into the final format. What I'll show next is not the full backbone or the full logic behind this, but a partial version; you'll get the idea of it and you can improvise on it further.

So let's do that — create a new flow from scratch — and here I want to put in the engine called the sub-question query engine. As the description says, it breaks a complex query into sub-questions for each relevant data source, gathers all the intermediate responses, and synthesizes a final response. To start, we can see we need query engine tools, a chat model, and embeddings, and we have the option to specify a response synthesizer and to return source documents.

Let's start with the query engine tools — I'll put in the query engine tool node; right now there's only one tool available, but we'll be adding more as we go. Connecting it, you can see the query engine tool needs a base query engine as a predecessor, so we need a query engine node as well; and the query engine — the first engine I showed earlier — needs to connect to the vector store retriever. So here
I'm using the Pinecone retriever. We'll link the chat model and embeddings later, but first I want to show what all this means. We specify the Pinecone details: the Flowise index, and — here's the important piece — the Pinecone namespace; let me specify that as well. And here I want to specify a metadata filter. As you may remember, previously we upserted the documents with a specified metadata field called "source": the documents belonging to Apple have source "apple", and the Tesla documents have source "tesla". So here I specify "apple" first, and this makes sure we retrieve only the embeddings whose source is Apple and ignore the Tesla ones. In other words, this ensures we only retrieve data from the Apple Form 10-K.

Then I need to specify a tool name — you can just use something like "apple-tool" — and describe what the tool does, because you want to let the LLM know when to use this tool and what it's about. Here I'm writing something like: "This tool is useful when you need to search for answers regarding Apple's SEC Form 10-K." Something like that will do. Let's click save. So this query engine tool will search the vector embeddings in Pinecone, limited to the source "apple".

The next thing to do is replicate — duplicate — all of this for Tesla. We just click the duplicate button on the query engine tool: "useful when you need to search for answers regarding Tesla" — actually, let me add the ticker as well, and go back and do the same for Apple. Then duplicate the query engine and duplicate the Pinecone node — we basically repeat what we've been doing, but this time change the source to "tesla", so this one retrieves the vector embeddings only from Tesla — and connect the rest of the pieces together. Let's save this before we lose the flow; call it "sub query".

Now the query engine tool part is almost done, except for the chat model and embeddings, so let's connect the rest of them — bear with me while I go through all this: ChatOpenAI, then the embeddings. One of the things we want to improve is to avoid so many duplicated ChatOpenAI and embeddings nodes; once the service context changes land, hopefully we can have a much cleaner flow. So, OpenAI here, connect this together, put in the embeddings, fill in the credentials, and save.

At a high level, you can see we have two tools connected to the sub-question query engine, and the rest is the chat model and embeddings. You have the option to connect a response synthesizer as well, but for the sake of the demo we'll skip that, and we'll turn on "return source documents". I'll explain what happens under the hood after we see the response from the LLM. Let me just check again that everything is configured correctly: query engine, base query engine tools, Form 10-K namespaces — okay. One more thing I want to do is use GPT-4 as well — this might give a better response.
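For reference, the same two-tool setup in LlamaIndex.TS looks roughly like this. A sketch assuming `appleIndex` and `teslaIndex` were built from the respective filings — in the Flowise canvas, a single Pinecone index with a `source` metadata filter plays the role of these two separate indexes, and the tool descriptions here are the simple first-pass ones from the demo:

```typescript
import { SubQuestionQueryEngine } from "llamaindex";

// appleIndex / teslaIndex: VectorStoreIndex instances built from each 10-K.
const subQuestionEngine = SubQuestionQueryEngine.fromDefaults({
  queryEngineTools: [
    {
      queryEngine: appleIndex.asQueryEngine(),
      metadata: {
        name: "apple_10k",
        description: "Useful when you need to search for answers regarding Apple's SEC Form 10-K",
      },
    },
    {
      queryEngine: teslaIndex.asQueryEngine(),
      metadata: {
        name: "tesla_10k",
        description: "Useful when you need to search for answers regarding Tesla's SEC Form 10-K",
      },
    },
  ],
});

// The engine asks the LLM to decompose the query into sub-questions (one per
// relevant tool), answers each, then synthesizes a combined response.
const res = await subQuestionEngine.query({
  query: "What are the main business focus areas of Apple and Tesla?",
});
console.log(res.toString());
```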
So now, if I ask a question similar to what I asked in SEC Insights earlier — "What are the main business focus areas?" — let's go back and see what happens. While we wait for the answer: under the hood, it breaks the query into different sub-queries. Let's look at the sub-questions... hmm, it only generated "What are the main business focus areas of Tesla?" So for now it only broke the query down into one particular question, about Tesla — but we want Apple as well. Here's the thing: we need to specify better tool descriptions. If you look at the source code of LlamaIndex for the sub-question engine, the basic idea is that you provide the different tools with descriptions, and the prompt gives an example of the expected output — you can see the actual prompt that's being used there.

So let me specify better tool descriptions, like the ones I used earlier: something like "A SEC Form 10-K filing describing the financials of Apple for the 2022 time period", and the same for Tesla: "A SEC Form 10-K filing describing the financials of Tesla (TSLA) for the same time period." I'm not sure it will work, but let's try and see if it actually improves the answers. If we type the question again, hopefully it breaks it down into two different sub-questions — the ideal sub-questions we want are "What is the main business focus area of Tesla?" and "What is the main business focus area of Apple?"

And now we can see we actually get two sub-questions: "What are the main business focus areas of Apple?" with its answer, and "What are the main business focus areas of Tesla?" with its response. The formatting is a little bit — not too beautiful — hopefully we can improve that in the future, but you can see the basic idea: it's able to break the query into different questions and combine the answers at the end. That's the power of the sub-question query engine: you don't need to come up with your own strategies or techniques to do this, because you get all of these prompts and techniques from LlamaIndex already.

But again — I'd say the nature, not really the downside — is that you cannot do the human-like interaction: when you type "hello", it doesn't work. So the next step would be to bring it into an agent: you can create a query engine tool, link the sub-question query engine to the tool, and connect the tool to an OpenAI agent. I think the real implementation behind SEC Insights is that they distinguish what we'd call qualitative data — when you want to search documents from the PDFs — from quantitative data; if you're looking for numbers, more of the quantitative data, you have to create a separate tool. I think you guys are using polygon.io to get the real-time price of a ticker — more of the math side of things. This can be done quite easily in Flowise: one thing we provide is the ability to create a custom tool, and you just need to write the function that calls the Polygon API. I won't show that in this demo, but that's the high level of the sub-question query engine.
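To give a flavor of such a custom tool, here is a sketch of a function fetching quantitative data from polygon.io, in the spirit of the SEC Insights setup mentioned above. The endpoint shown is polygon.io's previous-close aggregate; the environment variable name is a placeholder, and the exact URL and response shape should be checked against their docs:

```typescript
// Minimal sketch of a custom-tool function: fetch a ticker's previous close
// from polygon.io. Runnable on Node 18+ (global fetch).
async function getPreviousClose(ticker: string): Promise<number> {
  const apiKey = process.env.POLYGON_API_KEY; // hypothetical env var name
  const url = `https://api.polygon.io/v2/aggs/ticker/${ticker}/prev?apiKey=${apiKey}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`polygon.io request failed: ${res.status}`);
  const data = await res.json();
  return data.results?.[0]?.c; // "c" is the close price in aggregate results
}

// Usage: const price = await getPreviousClose("TSLA");
```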
So — great, this is a fantastic demo. I'm also just incredibly impressed that you actually created everything from scratch instead of running through an existing notebook, and everything generally worked on the first try — that is impressive. We have a few questions in the chat — in the last ten minutes there are actually a ton of questions, and I don't know if we'll get through all of them, but I figured we could go through some of the core questions people are asking, then some high-level questions, and that would be a great way to cap off this webinar.

First, on the data side: you mentioned there are data sources and you connect them to a vector database. There were a few questions around that. One: is there an ability to define and customize metadata? And how do you think about the data aspects and the degree of customizability you want to give your users?

That's actually a great question. For now, from the document loaders — let me open up the upsert flow — what you can do right now is specify metadata from here; it's a basic JSON key-value format. But we're actually working on custom document loaders, where you can write your own implementation — say you have very complex metadata filters you want to apply, or something that isn't possible from the UI itself, we'll allow you to write your own code to do that. That, again, is in the works and not yet released. We understand that a lot of developers have their own APIs they want to fetch documents from, and want quite different, complex metadata — for that you'd use a custom document loader. But I'd say a lot of use cases can be satisfied with this UI and the JSON field, because you can do basically anything — greater-than, equal-to, these kinds of different formats.

The next question: after you load the data and define metadata, how do you choose your chunk size? How do you think about the default values, and when would a user want to customize them?

Here you can specify that — we provide different text splitters, so you can set different chunk sizes and overlaps. Again, this is more of an experiment on your side; every document is different, and you have to play around with different chunk sizes. The default chunk size is usually 1,000, and the chunk overlap is usually left empty — I forget whether the default is 20 or 40. Maybe Jerry has better opinions here, but from my side it depends on the documents. Say you have a PDF with a lot of tables, and you don't want to break the information in a table apart — if your split lands in the middle of a table and cuts it in half, that's when you lose the information. So for that kind of PDF, where you don't want to lose a lot of information, you should specify a bigger chunk size or chunk overlap.
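In LlamaIndex.TS these knobs map onto the text splitter. A small sketch — the values just echo the defaults discussed above, and the input text is a placeholder:

```typescript
import { SentenceSplitter } from "llamaindex";

// Roughly the defaults discussed: ~1,000-token chunks with a small overlap.
// For table-heavy PDFs, raise chunkSize/chunkOverlap so tables stay intact.
const splitter = new SentenceSplitter({ chunkSize: 1024, chunkOverlap: 20 });

const formTenKText = "...extracted PDF text..."; // placeholder document text
const chunks = splitter.splitText(formTenKText);
console.log(`produced ${chunks.length} chunks`);
```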
Maybe a more general question I had: we went through a lot of RAG use cases here, which is awesome, and you showed this plugs into agents. More generally, what are some of the core use cases you're seeing your users build? Is it a lot of RAG use cases? Is it agents, or more constrained flows? How complex does it typically get?

I think the most popular use case is the context chat engine I showed earlier — the very basic RAG — and the second one is the agent stuff. You guys just released the OpenAI agents last week or so, so we haven't integrated that into Flowise yet, but for the LangChain implementation we've seen a lot of people creating agents that use the function-calling stuff, where you can connect your own tools — because in Flowise you can easily create custom tools by just writing your functions. So a lot of people have been using agents, plus the RAG stuff; those two are the most widely used cases.

Yeah, that makes a lot of sense. The next question — there are a few parts here — goes back to the data. You offer an interface for users to specify metadata, and the ability to define text splitters. Are there general data-processing needs you're seeing people have on different types of data sources, like complex PDFs or Markdown? Should users figure out how to process their data and try to do it within Flowise?

We don't try to come up with our own solutions; I think the best integration we've seen is Unstructured (unstructured.io). Say you have a PDF with a lot of tables, graphs, etc. — Unstructured is able to extract all the tables and graphs without breaking them apart or splitting them. So if you want better, cleaner preprocessing, you can try the document loaders — we provide an integration with Unstructured, the folder loader and the file loader — which give much more comprehensive preprocessing before you feed it into the vector databases.

Apart from that, another thing we're working on is de-duplication — the indexing pipeline. Say you have the same document but have changed it slightly — maybe you changed just one of the words — and you don't want the same vector embeddings to be upserted into Pinecone again. But when you do an upsert, imagine you have to do the whole thing again — the whole thing, times two — which is just a waste of space, a waste of money. So we're working on indexing pipelines to prevent users from having this sort of duplication as well. Going back to my conclusion: if you want better preprocessing, try the different integrations like Unstructured, or try to have better metadata. That's what I'd recommend.
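The de-duplication idea — skip re-embedding chunks that haven't changed — can be sketched with a content hash per chunk. This is a toy illustration of the concept, not the pipeline Flowise is building:

```typescript
import { createHash } from "crypto";

// Only keep chunks whose hash is not already stored; in practice
// `existingHashes` would come from the vector DB or a side table.
function newChunksOnly(chunks: string[], existingHashes: Set<string>): string[] {
  return chunks.filter((chunk) => {
    const h = createHash("sha256").update(chunk).digest("hex");
    if (existingHashes.has(h)) return false; // unchanged chunk: skip re-upsert
    existingHashes.add(h);
    return true;
  });
}

// A slightly edited document re-upserts only the chunks that actually changed.
```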
Great — that second part was actually going to be my next question: how do you handle updates, and how do you handle upserts, as opposed to just defining things once over a static data source? So that answers it. Another part of this question: are you seeing a lot of users build this and use it locally — kind of locally hosted, on-prem if you will — or are you seeing a lot of users use some sort of more cloud-hosted version?

We don't have a cloud-hosted version yet, so for now it's self-hosted, where people deploy locally. We've been working with — I know AWS is one of the big users of Flowise; they've been deploying Flowise for their own engineering teams, and Cisco as well. There are a few big companies that normally deploy Flowise on their own on-prem services, and I think that's the nature of it: corporates want to retain their own data, so maybe they don't feel too comfortable uploading all their documents to the cloud. This is just what they're looking for.

Is the primary UI the interface — the chat interface — or are you also going to let people export this, like some sort of config format, so users can define their own UIs and experiences?

You can actually export the chat flow and share it with your team or anyone, and they can easily load the chat flow as well; when you export it, it becomes a JSON format. Is that what you're asking about?

Yeah, basically — you can export the format so you can build stuff on top of it later.

Exactly. And just to highlight these additional pieces: we have the View Messages tab, where we allow users to see all the interactions people have had with a chat flow from different places. We provide integrations — not sure if I've highlighted this — so if you want to embed this chatbot into your own website, we provide a script for you to do that: simply copy-paste it into your WordPress, Webflow, or any HTML page. You can also use it as a React component, you have the option to use it as an API, and lastly, you can open the chatbot in a new tab here and share it with other people as well. And you can see all the interactions happening — from the embedded site, from the API, from the UI, and so on.

Yeah, that's awesome. And a broader question going along with that: are you seeing that a lot of your users are actually just developers — people who know how to code — or are a lot of them people who don't know how to code?

There's a mix of both. I'd say most of our users are still developers, but not really core, very experienced, ten-or-twenty-years kind of engineers. Most of our users are innovation groups at corporates who want to see how they can integrate AI into their applications. A lot of the companies we've worked with are not tech companies — finance industry companies, for example. Take my previous company, Fidelity: our software runs a brokerage firm, so there is software, but technically we're not a software company, and we don't have too many resources to spend on AI. So a lot of developers use Flowise as a tool to quickly play around with different prompts, do evaluations, and create prototypes to be put into production — manufacturing companies, companies with IoT devices, companies from accounting, banking, finance. These are our main users: developers from different companies.
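As an example of the API option mentioned above, Flowise exposes a prediction endpoint per chat flow. A minimal sketch — the host, port, and chat flow ID are placeholders, and the response shape should be checked against the Flowise docs:

```typescript
// Call a deployed chat flow as an API (Node 18+ global fetch).
const res = await fetch("http://localhost:3000/api/v1/prediction/<chatflow-id>", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ question: "What is the business focus area of Apple?" }),
});
const { text } = await res.json(); // the flow's answer, as returned by the endpoint
console.log(text);
```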
Yeah, awesome. Then the last question is just: what's next? In terms of anything you're able to share, what's on the roadmap for the next six months to a year?

Definitely. In terms of the LlamaIndex integration, we want to bring in the indexing pipeline I talked about earlier — we want to prevent users from having duplicate vector embeddings; I think that is very important if you want to use this in production, so that's the number one thing. The second thing is the OpenAI agent stuff: right now, with the LlamaIndex integration you can only build RAG-based applications; we haven't yet introduced the OpenAI agents, but very soon — and quite easily — we can do that as well. And the other important piece is observability. Right now it's very hard to see what actually happens under the hood, line by line — what goes through this LLM, what goes through the next LLM — so we want to have better analytics. We already provide different analytics providers — LangSmith, Langfuse, LLMonitor — but we plan to add more, and to bring them to the LlamaIndex integration as well. I think those are the top three roadmap items, the features we want to develop for the LlamaIndex integration. And again, we want to provide the best application for users to quickly test different things and achieve the best use cases.

Great, awesome. Well, Henry, thanks so much for your time — this was a great demo, with a lot of discussion in the comments and a lot of questions asked. We'll have this recording up on YouTube if you missed this or missed part of it, and if you have any more questions, feel free to drop them on the upcoming YouTube video when it's released. All right, thanks everyone — happy Friday! All right, take care. Thanks, bye.
Info
Channel: LlamaIndex
Views: 7,168
Id: k5Txq5C_AWA
Length: 50min 9sec (3009 seconds)
Published: Sun Feb 18 2024