Jerry Liu - What is LlamaIndex, Agents & Advice for AI Engineers

Captions
Good morning everyone, how's it going today? Welcome back to the podcast. In today's episode I'm going to be talking with Jerry Liu. Jerry is the CEO and co-founder of LlamaIndex, and we're going to be talking about LlamaIndex, what it is and what you can do with it. In case you don't know, LlamaIndex is a framework that allows you to create large language model based applications, although we're going to be introducing it in a little more detail in a moment. We're also going to be covering LlamaParse, their API for parsing unstructured documents, and finally we're going to be talking about LlamaCloud, which is their enterprise solution, to see what they have to offer and how LlamaIndex and LlamaCloud come into that. In the process we're going to be covering some technical topics and explanations as well about how these systems work, including topics such as advanced RAG and data processing, so it might get a little bit technical at times. In addition, by the end you will have some advice from Jerry on becoming an AI engineer and starting a career or a startup in this industry. If you're watching the video version of this episode you have subtitles here, and I hope the subtitles will help you to understand the more technical topics: in case you don't know what a concept is, you can always look at the subtitle and look up the concept. I also added some notes within the subtitles to help you find the concepts more easily. So yeah, I invite you to consider this episode as study material; it is not every day that we have Jerry Liu himself explaining these topics to us, and during the conversation we tried to make them as approachable as possible. So without any further ado, let's get started. I bring you Jerry Liu.

Jerry, thank you for joining.

Thank you for having me.

So let's talk a little bit about LlamaIndex, and just to start, can you please tell me: what is LlamaIndex?

Great. So LlamaIndex is a data framework and platform for helping developers build LLM applications over their data. And so
we have two main product offerings. We have an open source component, which is a very broad orchestration toolkit enabling any developer to compose different modules together to, for instance, index data and put it into a vector store, and also build different types of LLM applications on top of that data. This includes some very popular concepts popping up these days, including for instance retrieval augmented generation, which is the core of building a chatbot over your data, and it also includes use cases like agents, which are more autonomous pieces of software that can go and automate workflows and perform actions for you, like send an email, schedule a calendar meeting, write code, and do other things as well.

I see many people talking about agents recently. That's really getting some traction, isn't it?

Definitely. I think there's been a lot of interest in a variety of different use cases. I'd say the top most popular ones are probably RAG and also agents, and there's actually, we talked about this in some of our talks, a progression between them, because we see this as an overall spectrum as opposed to there being a fence between every different use case. RAG is really just a very simple mechanism of doing search over your unstructured data, and there are ways that you can add a bunch of agent ingredients to make it more agentic.

Amazing. How does it feel to be part of the driving force of this industry?

Yeah, definitely. I mean, as a founder it was unexpected, and I'm also very grateful for the opportunity. I'd say a lot of the reason the company got started was because we were at the right place at the right time, but also it's one of those things where we saw it and we really tried to take full advantage of it, to create something that was really powerful. And this includes both the open source side, which includes the technology, but also the community.
So basically, a lot of why people like the LlamaIndex brand and community is because of our education resources and the ways we show others how to build different types of things, and not just in a hacky way, but building it well. That's something that is core to the company and something that we want to continue to invest in, because part of this ecosystem, part of this GenAI hype, is fueled by developers actually knowing how to build stuff. If we can continuously be on top of whatever LLM advances are coming up and show people how to take advantage of those advances, by showing them what new use cases are unlocked and what changes in paradigms or abstractions they should think about, that's basically what we want to focus on.

Amazing. And just to be sure that our listeners are understanding precisely what your company does: you said that you allow developers to create applications that let you chat with your data and create agents. Is that a very high level description of LlamaIndex?

Yeah, so let me give a little bit more color to that, and thanks for asking that question. Taking a step back, the overall mission of LlamaIndex is pretty broad: it is to enable any developer to build LLM-powered applications. And it turns out a lot of these LLM-powered applications depend on a user's own private sources of data. Now, these sources of data can include a variety of different things. It could include unstructured documents like PDFs, PowerPoints, docx files. It can include APIs that you have access to, so commonly used services like Slack or Discord or Notion. It could also include, for instance, structured databases: if you're a company, you have a data warehouse of very
structured data. That's, for instance, the type of data you might want to get the LLM to analyze and give you insights over. So there's a lot of data, and especially if you're a bigger organization you have a lot of it, and it's also in different types of silos. It turns out maybe some of it is in use by your business analysts, but some of it is just sitting in a bunch of files and you really don't have time to sift through all of it. One of the promises of LLMs, at the same time, is their ability to rapidly process any type of information and give you back responses. To really illustrate this, imagine you're in the ChatGPT interface and you copy and paste some piece of text from a web page or from a book into that little text box, in ChatGPT or Claude or any of these UIs that are popping up. You realize the LLMs do a remarkable job of really understanding the text that's in that box, even if it's not in some structured field or structured manner, and actually give you back responses over it. So you take that insight, that LLMs are just really good at reasoning over any type of data, and you combine it with: okay, now I want to apply that on top of all the data that I have access to in the organization, and you think about the challenges of making that happen. How we started as a company was basically building developer tools to enable that bridge: enabling any developer that was using an LLM API to easily figure out how to make use of all the types of data they have, and what the different patterns and paradigms are for loading this data into the LLM prompt window. You have to deal with different types of challenges: for instance, figuring out how to fit context into that prompt window, figuring out how to get the LLM to interact with different types of data interfaces, unstructured data, structured databases, performing actions, those types of things. That's how we started, and that's where a lot of these indexing and RAG abstractions arose as well, because a lot of people were thinking about similar problems, and they basically converged on this general purpose technique, retrieval augmented generation, which set a standard for loading in different types of data, putting it into a vector database, and doing retrieval on top of that data to fetch only the most relevant bits and put those into the LLM context window. As we started, we focused a lot on that use case, because it was also a pretty big use case for the enterprise. What enterprises wanted was to actually have their developers build these types of applications, and for us as a company we wanted to give those developers the tools to build them. The open source project, going back to that, is a pretty important piece of it, because open source is something that most developers really enjoy using: it's just a set of libraries that you can import, and it's very flexible and customizable. The trick is to make sure you have a library that both beginner and advanced developers can utilize to build that bridge, connect their different types of data, and build these different types of LLM use cases. One quick note on the different types of LLM use cases: the orchestration framework is pretty broad, and it supports a lot of the emerging patterns that people are building these days. One such pattern is, again, RAG, which is basically question answering over your data.
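To make the "fit context into the prompt window" challenge concrete, here is a minimal sketch of the kind of chunking step a RAG ingestion pipeline performs: splitting a long document into overlapping, fixed-size pieces so each one fits in the context window. The sizes here are arbitrary illustrative choices, not LlamaIndex defaults.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks so that each piece
    fits comfortably inside an LLM context window."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# A 500-character "document" becomes 4 overlapping windows of <= 200 chars.
pieces = chunk_text("x" * 500)
print(len(pieces), max(len(p) for p in pieces))  # → 4 200
```

Real chunkers usually split on token counts and sentence boundaries rather than raw characters; that choice is one of the tunable knobs discussed in this conversation.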
And then, as I mentioned in the beginning, there are also other types of patterns that are emerging, like the autonomous software engineer, sales assistant, research assistant: things that can operate more in a continuous loop, with less back and forth human interaction. That's something we're focusing on as well. And then the last bit, which I'm happy to talk about in follow-up questions: we have the open source project, but we also have an enterprise platform, which is LlamaCloud. The goal of LlamaCloud is to complement the open source project by being a managed data platform and data ingestion service, to really solve that problem of connecting your data with any sort of storage system, being able to define that workflow, have it run accurately and reliably, and remove the need for you as an AI engineer to maintain those pipelines. This is an emerging system that we see as part of the overall data stack that powers AI software, and we realized that for a lot of enterprise use cases it makes sense to build a managed service around it, to really solve that data quality issue for any sort of AI application. And of course you can still use the open source to build different types of applications on top of that data.

Amazing. And to be sure that I understood that last part correctly: in this orchestration framework that is LlamaIndex, you have RAG, which is retrieval augmented generation, you have autonomous agents, and then you have the enterprise solution, which is LlamaCloud, which allows developers to more easily deal with the workflow of ingesting and managing data in these kinds of ecosystems. Is that broadly correct?

Yeah, so the way to visualize LlamaCloud is to imagine a general knowledge management interface. The goal
of this platform is to expose all the different types of data that you might want and make sure that data is cleaned and ready to go. So for your unstructured PDFs, we'll be able to process them into a format that your LLMs can consume. In the future, for instance, structured data, or any sort of user data like conversation history, we'll also be able to process and store in a format that your LLM can consume as well. If we take the RAG pattern, what does this concretely mean? You have a set of PDFs. You want to, for instance, parse them into the right representation. You want to chunk them, make sure the text is actually split up into more bite-sized pieces so that LLMs can really understand them. And you want to put them into a vector store with the right retrieval strategies, to make sure that given any sort of question that the LLM or a human sends to the system, it's able to surface the relevant context. That whole stack is the indexing and retrieval piece of RAG, and if we can provide that as a really nice data interface, then you have a set of managed APIs that anyone within the organization can consume and run, with a guarantee of quality.

And that ingestion is usually one of the most complicated parts of developing this kind of system, right?

Yeah, for sure. Spending a little bit of time on why it's complicated, and we've mentioned this in some of our talks as well: you basically have this new type of ETL, because you have unstructured data and you want to process it into something that is a little bit more structured, but it's a new pattern. So for RAG, you're taking an unstructured document like a PDF, and then parsing it, and then chunking it, and then putting it into a vector DB.
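The parse → chunk → index → retrieve stack just described can be sketched end to end. In this toy version, plain word overlap stands in for the embedding model and vector store, purely to show the shape of the pipeline; none of these function names come from LlamaCloud's actual API.

```python
# Toy end-to-end RAG indexing/retrieval pipeline. A real system would
# use a document parser, an embedding model, and a vector store; here
# word overlap stands in for embedding similarity.

def chunk(text, size=80):
    # "Parse + chunk": split raw text into bite-sized pieces.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, piece):
    # Stand-in for embedding similarity: count of shared lowercase words.
    return len(set(question.lower().split()) & set(piece.lower().split()))

def retrieve(question, chunks, k=1):
    # "Retrieval": surface the k most relevant chunks for the question.
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

doc = ("The ingestion pipeline parses documents into text. "
       "Chunking splits text into bite-sized pieces. "
       "Retrieval surfaces the relevant context for a question.")
index = chunk(doc, size=8)
print(retrieve("how does chunking work", index))
```

The retrieved chunk would then be pasted into the LLM prompt as context, which is the generation step of RAG.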
This actually looks a little bit different from the previous stacks. If you think about traditional ETL, it's typically a set of operations to move structured data from one place to another. Part of the reason Snowflake or some of these other data warehouses exist is that you load in a lot of messy data and then you typically have a human write a lot of operations to massage that data into something that's very clean and structured. But the overall stack has shifted, because the set of operations you need to do on unstructured data is different for RAG. And the second reason why it's challenging is that all these parameters actually affect the final performance of your LLM application: your parsing strategy, your chunk size. There are a lot of these knobs that you have to figure out how to tune, and also things like adding metadata annotations on top of your documents. All of these affect your performance, and if you're not careful with some of these parameters, you end up getting a system that is just not very performant, and it's also a little bit hard to improve. So the data decisions directly affect your accuracy, and this isn't really something that occurred in previous types of software.

Right. I want to unpack a little bit more about LlamaIndex and the LlamaCloud part, but before that I would like to talk a little bit more about the history of LlamaIndex, because I think our listeners will be very interested in getting inspiration from your personal experience. Can you tell me how the company LlamaIndex started?

Yeah, so basically it started back in November of 2022, and this was around the time, or it was October, that a lot of people were
basically trying to hack on language models, because people were getting excited about GPT-3 again. There were a few startups popping up, and I was also getting pretty interested in just trying it out, seeing what happens as a developer, just hacking around on the API and seeing what some of the learnings were: what things are hard to do, what things are easier to do. So I opened up an OpenAI developer account and started playing around with text-davinci-003. And as I was playing around with it, one of the emerging patterns I wanted to build, and that I was seeing some other people build as well, was: how do I use this to answer questions about, say, all the sales data that I have in the company, or just have it have knowledge of basically everything that exists within the organization? Because what I wanted to do was feed it all the sales conversations. I was in a few of these customer calls and I wanted to synthesize insights and help me prepare for the next meeting, because otherwise that was taking a decent amount of time on my plate. I was on the engineering team, but I was in a bunch of these customer conversations, and because I was bound to a few different things, oftentimes I would just forget, or have to explicitly allocate time to review these customer transcripts to prepare the right materials. So it was a slight pain point for me that I wanted to solve with LLMs. But then, as I was trying to build and prototype these applications, an initial LLM-powered application, I realized that the context window was 4,000 tokens, so
I couldn't just dump the entire set of documents into the context window and call it a day. What I needed to figure out was some more clever indexing strategy, so that I could index all the documents somehow, but then, given some sort of question, the LLM could somehow figure out how to traverse this overall index of information to find the right piece of information and give it back to you. So this was before, you know, this idea of RAG. And actually, RAG as a research paper had existed before I started this project, but I independently came up with some technique that, to be totally honest, was a lot worse than the idea of RAG, because I wasn't using any embeddings. I wasn't using any text embeddings; I was just trying to use the LLM itself to hierarchically organize information into a tree, and then have the LLM traverse that tree. To be honest, it didn't really work the first few times around, especially because text-davinci-003 was not super good at reasoning, so it wasn't great at making a decision given a set of choices. That said, it was interesting, and when I put it on Twitter there was a decent response: there were a lot of people interested in the overall approach, and a lot of people were getting excited about LLMs at the time. So it provided a nice starting point to get some initial traction, which then motivated me to keep working on this, and in this virtuous feedback loop I eventually realized, a month later, that I should probably quit my current job and think about building a company around it. But it really started off as developer-led: a pain point I wanted to solve, and this was the tool I wanted to build to solve it.
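The embedding-free tree index Jerry describes, where the model organizes documents hierarchically and then walks down the tree by picking a branch at each level, might be sketched like this. A word-overlap heuristic stands in for the LLM's choice at each node, and all the names and data here are made up for illustration:

```python
# Toy tree index: leaves hold text, parents hold summaries, and a
# (mocked) "LLM" picks which child to descend into at each level.

class Node:
    def __init__(self, summary, children=None, text=None):
        self.summary = summary
        self.children = children or []
        self.text = text  # only set on leaf nodes

def pick_child(question, children):
    # Stand-in for an LLM call: choose the child whose summary shares
    # the most words with the question.
    q_words = set(question.lower().split())
    return max(children, key=lambda n: len(q_words & set(n.summary.lower().split())))

def traverse(question, node):
    # Walk from the root down to a leaf, then return the leaf's text.
    while node.children:
        node = pick_child(question, node.children)
    return node.text

root = Node("company documents", children=[
    Node("sales figures for Q3", text="Q3 revenue was $2M."),
    Node("hiring plan for next year", text="We plan to hire 10 engineers."),
])
print(traverse("what were the q3 sales numbers", root))  # → Q3 revenue was $2M.
```

With a weak reasoner making the pick, as Jerry notes of text-davinci-003, a wrong branch early on loses the answer entirely, which helps explain why embedding-based retrieval became the standard instead.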
So, about that history of RAG: modern RAG was actually kind of born in 2022 then, right?

Yeah, I mean, I forget the exact date of the original retrieval augmented generation paper, which is not by me, obviously, it was by others; it was either in 2021 or 2022 or even 2020. It basically proposed this overall idea: you take in some set of documents, you embed them, put them through an embedding model, and then put them into some storage system that's able to serve the relevant documents through retrieval. That's why it's called retrieval augmented generation: you want to do a retrieval pass over some storage system before you actually put things into the LLM prompt. That resurfaced as more and more people started building with LLMs, and people started discovering that, hey, this thing is a cool idea. My initial version wasn't doing that, it wasn't using embeddings, because at the time, I don't know why, I think it might have just been a design choice: my goal when I first started was not necessarily to make this useful, it was to do something cool. And to me, doing something cool was: what if we just didn't have embeddings? I thought about it briefly, but what I really wanted to do was have the LLM figure it out completely on its own. And I still think that would be quite an interesting concept: instead of relying on a separate model, have a language model, completely similar to a human, figure out how to reason, organize things, and then traverse them via text.

Yeah, and that kind of reflects in the current state of LlamaIndex, right? Because, as far as I can see, it's kind of
a central part of LlamaIndex that you use language models during the ingestion process as well, not only in the generation process.

Yeah, so the default RAG paradigm really only uses the LLM at the very end. Ingestion doesn't need LLMs: you take in some data, parse it, and then chunk it using an algorithm, and of course you use an embedding model to put it into some vector store. And the retrieval process doesn't use an LLM either, because at its simplest it's just top-k embedding lookup: you look up stuff by embedding similarity. So in a standard RAG pipeline, the place where LLMs actually come in is at the very end, and they're only responsible for synthesizing an answer from a piece of unstructured text. And to be totally honest, even at the start, when we were implementing this, I thought it was a little basic, and it didn't really use LLMs to their full potential, because LLMs are not just for generation and simple reasoning: they can actually help you make decisions, they can add a greater layer of understanding and decision making. So if you really wanted to make these systems more interesting, you could use LLMs at the beginning, for instance during the data ingestion phase, or at query time. Instead of just using them at the very end for generation, use them for query understanding, use them for evaluating the quality of your retrieved context, and, for instance, not only retrieve from a vector store but actually use a variety of different tools. And on the ingestion side, there are places you can use LLMs too. This overall concept is pretty interesting: the whole point of ingestion is to process data for your LLM app.
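The "top-k embedding lookup" Jerry mentions as the simplest retrieval step can be written out in a few lines. This is a toy sketch with hand-made 2-D vectors standing in for real embedding-model outputs:

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_emb, doc_embs, k=2):
    # Rank document indices by embedding similarity to the query, keep k.
    ranked = sorted(range(len(doc_embs)),
                    key=lambda i: cosine(query_emb, doc_embs[i]),
                    reverse=True)
    return ranked[:k]

doc_embs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(top_k([1.0, 0.05], doc_embs, k=2))  # → [0, 1]
```

A vector database does essentially this, plus approximate-nearest-neighbor indexing so the lookup stays fast over millions of chunks.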
And so that's kind of like ETL for LLMs, but you can also use LLMs for ETL, because LLMs have an inherent capability of understanding unstructured data and transforming it. That part I think is interesting. For instance, let's say for each unstructured document you wanted to extract a summary, the table of contents, or a set of topics or tags for each page. You can figure out a clever way to prompt the LLM, feeding it a bunch of data from the document, to first extract out a set of structured annotations or tags. This represents a data transformation, because you're feeding in unstructured data as input and transforming it into structured data, and then you can attach those tags on top of the unstructured data as well. This is just an example of metadata extraction that's also powered by LLMs, and it's useful for any sort of downstream application you want to build, because if you're trying to build a RAG system over this data, having metadata tags is oftentimes very useful: it gives you better retrieval results, better generation quality, and all those types of things. So I think that interplay between LLMs and data transformation is very interesting, because you can use LLMs in the middle, but it also helps for any sort of application you want to build later on.

Yeah, those are some advanced RAG techniques over there, and it kind of goes back a little bit, I mean, it's a little bit adjacent to your original idea of creating these kinds of systems, right?

Yeah, I thought about this at the beginning of the project; the project was not really close to realizing that vision at the time.
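A rough sketch of the LLM-powered metadata extraction Jerry just described: prompt the model to turn unstructured text into structured tags, then attach them to the document. The `call_llm` function below is a mocked, hypothetical stand-in; in a real pipeline it would be a completion API call.

```python
import json

def call_llm(prompt: str) -> str:
    # Mocked response for illustration only; a real LLM would read the
    # document text embedded in the prompt and produce these fields.
    return json.dumps({"summary": "Quarterly sales report.",
                       "tags": ["sales", "Q3", "finance"]})

def extract_metadata(document_text: str) -> dict:
    # "ETL with an LLM in the middle": unstructured text in,
    # structured annotations out.
    prompt = ("Extract a one-sentence summary and a list of topic tags "
              "from the document below. Answer as JSON with keys "
              "'summary' and 'tags'.\n\n" + document_text)
    return json.loads(call_llm(prompt))

doc = {"text": "Q3 revenue was $2M, up 40% year over year..."}
doc["metadata"] = extract_metadata(doc["text"])  # attach tags to the doc
print(doc["metadata"]["tags"])  # → ['sales', 'Q3', 'finance']
```

The attached tags can then be used at retrieval time, for example to filter candidate chunks before the embedding lookup, which is one way metadata improves retrieval quality.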
But if you think about the overall picture of where I think LLM-powered software will evolve, it's basically that there's a new type of data stack that's emerging, and a new set of operations within that data stack, to power AI software. We want to provide the right tooling to help developers build that data stack. So this is helping them figure out how to move data from one place to another, specifically for LLMs to use, and that can include LLMs in the middle as well. And it also includes the orchestration piece on top of that data: how do you get LLMs to interact with the data through these different types of interfaces.

Awesome. I want to talk a little bit about your personal background as well. What did you do before you started LlamaIndex?

Yeah, so I was at a Series B company called Robust Intelligence, which was doing ML ops: testing, monitoring, evaluation. It had a pretty talented team; you had Harrison Chase on your podcast, and we were all co-workers, basically, at the same company. And yeah, it was fun. Before that I had spent some time at bigger companies: Quora was around 200 people, and Uber was very big. I'd spent some time in both ML engineering and research. Let me actually give you the whole background. Quora was my first job out of college, and I was doing recommendation systems, very practical, hands-on ML engineering work. That was a great first job; I learned a ton, and I learned a lot about how to not only train these models but also how to apply them and make them reliable. And then I
wanted to dive a little bit deeper into the research side and strengthen my understanding of ML fundamentals, because I was very interested in training neural nets. So I did the Uber AI residency program, which transitioned into a full-time research offer. Basically, I was working on deep learning for self-driving systems, and this included different types of image understanding, sensor understanding, and also planning. As you dive into some of these concepts, there start to be similarities between planning in the traditional deep learning sense and LLM planning, and there are some patterns there that are pretty interesting to explore. But basically, I worked a lot on sensor compression, so how do you compress images and sensor data using the relevant algorithms but also using neural nets, and then I worked a bit on robotic planning as well: being able to figure out how to make the right decisions. If a pedestrian is crossing the intersection, you obviously want to make sure you brake; how do you predict their behavior and make the right decisions there? And then after that, having worked at bigger companies, I really wanted to see what an early stage ML ops company was like, and so I joined Robust Intelligence. I think I joined as the ninth or tenth employee, so pretty early on. I wanted to understand, one, what it meant to build something in the ML space that could actually make money as part of a business, and two, really understand the growth journey of an early stage startup. And I think I actually got a bit of both. I learned how to really aggressively prioritize things,
especially in a startup environment where you have limited time and resources. It's a little bit different from an academic environment where, if you do a PhD for instance, you're supposed to spend time really reflecting on what to do and build something that's deep and meaningful. In a startup you don't have a lot of time, and you really have to aggressively 80/20 the very first thing you want to build, and make sure you're able to build initial versions of something before diving into something deeper and more complicated. So I learned a bit about that type of fundamental tradeoff, the ability to lead a team, interacting with product, a little bit of being on these customer conversations. I think it actually gave me a pretty solid base for starting this company later on. In general, if you're a listener and you really want to start a company, I think a good training ground is to work at an early stage company with very talented people, people that you respect, and just really try to learn as much as possible, inhale as much knowledge as possible, in the first year or two.

Yeah, that's some amazing advice, I think. So, as far as I understand, pretty much all of your experience is in machine learning. Had you ever considered going to the software development side, or not really, were you always completely on the machine learning side?

Yeah, so as a machine learning engineer, basically at Quora, and not at Uber but at Robust Intelligence, both of those jobs did involve a decent amount of software engineering. But obviously it's a little bit different
than, say, pure infra engineering. I never spent a significant amount of time working on distributed systems, for instance, or doing deployments, networking, a lot of the different concepts in traditional software engineering. I did internships throughout the college years, so I built up some foundation, but in general the intersection between machine learning engineering and product was where my primary interest was. Right, and what did you study to start taking up these roles? Yeah, I studied computer science. I graduated from Princeton back in 2017, did computer science, and actually I got into ML pretty late. Maybe that's one part I didn't cover in the initial introduction: I started doing machine learning, or really thinking about it, in the latter half of my third year of college, junior year. A lot of people who are interested in this, especially nowadays when AI is so big, you know, if you're a high school student or even middle school, you start looking into AI and trying to understand what it's about. But at the time, I mostly got interested in AI because, junior year of college, I saw one of these example models that had just come out with really good image generation capabilities for its time. It could generate a little icon, like 64x64, and it actually looked fairly realistic. I thought, oh, that's cool, I want to learn more about that. That kicked off my academic interest in the space, and I realized this was the overall area of computer science that I wanted to focus on and really understand a little bit more.
Awesome. And when was that, by the way? Like 2016, 2017? Yeah, I graduated in 2017, and I think ImageNet had come out a few years before that. There was some initial excitement in deep learning, and to be honest, if you were in the ML research world, that hype had basically started around 2014, 2015 and continued from there. For basically everybody else, anyone who didn't have exposure to deep machine learning, that hype took off with ChatGPT and all of these, but among the research community a lot of the advances, and the understanding of the progression of AI and its results, had been going on for a few years. Right, so you were pretty lucky to be there before everyone else was. I mean, one of my biggest regrets, honestly, is that I wish I had gone into ML even a little bit earlier. If I had known what it really was, or thought about it freshman year, it honestly was pretty interesting. I probably, one, didn't have the right technical fundamentals to fully understand it yet, and two, I just discovered it too late. I'd always been interested in startups, always been interested in building something, but that type of interest and machine learning, I just never really connected them until later on. Awesome. And now you have built LlamaIndex, and it's gone super far, and now you're bringing new products to the public. So let's talk a little more about LlamaParse and LlamaCloud. Yeah, just tell me, what is LlamaParse? Yeah, so, taking a step back, you know we have this orchestration framework. We have had LlamaIndex, which has been around for about a year, a year and a half.
It's fully open source, and it's going to continue to be a core part of our strategy. We want to make sure we build the right tooling to enable any developer to build these applications, both when they're prototyping and also when they're productionizing. The overall motivation is that as we started talking to a lot of these enterprise developers, we realized they were running into general pain points in building something that was production quality: the response quality wasn't high enough, they were having a hard time trying to improve the application, they were running into hallucinations, and they were having a hard time trying to connect to more data sources. So fundamentally, when we thought about a managed service to build, we wanted to build something that actually solves some of these pain points, especially in an enterprise and production setting. The overall summary of the LlamaCloud platform is that it's meant to be a data platform that processes and cleans your data to the best quality for LLMs. We think good data is essential for good downstream performance with any sort of software that you want to build; if you don't have good data quality, that typically leads to all the issues I just mentioned. So we wanted to build an overall service that can parse, index, and clean your data, and also let you retrieve over it, to give you back the best interface for your LLM to access any context that you have within the organization. That's the high-level picture of LlamaCloud, and it consists of a few core components. The first is a set of data connectors to load from any sort of data source that you have, so unstructured data, any sort of documents, and then we're building these integrations for
semi-structured data and structured data. We actually already have a ton of data connectors on LlamaHub, which is our open source, purely community-driven set of data connectors, and we're working to make all of those available within the LlamaCloud platform as well. LlamaHub, the open source integration ecosystem, has been very powerful, because there's actually a lot of really good stuff out there, and a lot of it has been community contributed. But anyway, that's step one: connect and load your data. Two is actually having proper ways of parsing your data, and this is where specifically LlamaParse comes in. Parsing your data, in this setting, really means: you have a PDF, how do I actually load and extract the information from this PDF in a way that maintains the original structure of the document and also maintains all the content? Oftentimes when you load in these documents, they're a lot more complex than a pure wall of text. You can have tables, you can have charts, you can have complex spatial layouts, you can have embedded images as well. Think about PowerPoint presentations, where it's not really just the text that matters, it's the set of objects you put in it and also the spatial positioning of everything. If you tried traditional parsing techniques, they tended to do very poorly on these types of documents, if you just took an open source out-of-the-box solution. So with LlamaParse, roughly speaking, it's powered by LLMs, and the insight we had is that we can build a differentiated platform that can really parse and extract information from these documents in a way that's specifically geared towards getting
other LLMs to understand this data. LlamaParse itself exists as part of LlamaCloud, but also as a standalone API that you can go ahead and use. After you parse your document, you want to process and embed it and do a bunch of transformations before putting it into your storage system, and that storage system basically represents an interface that your LLM can interact with. So the storage system is, you know, a vector database, and the processing and embedding include, at a very basic level, all the chunking strategies that you want to think about when you split a piece of processed data, and then embedding, of course, is actually generating a vector representation for each chunk. So there are all these different steps, similar to what we talked about at the very beginning, towards making this type of ETL possible, and we want to make sure we provide enterprise developers with the right interface to perform this type of data cleaning, transformation, and retrieval, so that it saves the developer time in building something production quality and abstracts away a lot of the data processing complexity. As a result, the AI engineer within the enterprise can focus on the orchestration logic on top of this data. All right, just to be sure I didn't get lost there: you told me there are four steps where LlamaCloud helps developers. You talked about data connectors that live in LlamaHub, you talked about parsing. What were the other two? Yeah, so the three or four steps, roughly, are data loading; data parsing, which is through LlamaParse; data processing, so being able to transform that data; and then data storage, which is putting that data into a vector store.
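The four steps just recapped (load, parse, process, store) can be sketched as a toy pipeline. This is an illustrative sketch only, not the LlamaCloud API: the bag-of-words "embedding" and the in-memory "vector store" are stand-ins for a real embedding model and database.

```python
import math
import re
from collections import Counter

def load(source: str) -> str:
    # step 1, "data loading": stand-in for a real connector (S3, SharePoint, ...)
    return source

def parse(raw: str) -> str:
    # step 2, "data parsing": stand-in for a parser; here we just normalize whitespace
    return " ".join(raw.split())

def chunk(text: str, size: int = 8) -> list:
    # step 3a: split the parsed text into fixed-size word chunks
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # step 3b: toy "embedding" as word counts instead of a neural vector
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a: Counter, b: Counter) -> float:
    # cosine similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# step 4, "data storage": an in-memory stand-in for a vector store
store = []
doc = ("LlamaParse extracts tables. Vector stores hold embeddings. "
       "Agents plan multi-step tasks over retrieved context.")
for c in chunk(parse(load(doc))):
    store.append((c, embed(c)))

def retrieve(query: str, k: int = 1) -> list:
    # rank stored chunks by similarity to the query embedding
    q = embed(query)
    ranked = sorted(store, key=lambda cv: -similarity(q, cv[1]))
    return [c for c, _ in ranked[:k]]

print(retrieve("how are tables extracted?"))
```

A real system swaps each stub for a production component, but the interface between the steps stays the same shape.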
For storage, with LlamaCloud we're not building our own vector store; rather, we're leveraging the fact that we connect to 40-plus vector stores in the open source, so we're able to process and transform the data and put it into a storage system of your choice. So developers can use pretty much any vector store they want with LlamaCloud? Exactly. Does that mean that the data is going to be stored within your servers, or not? That's a good question. From a systems architecture standpoint, it exists in an initial state as a service, but we don't store that data at rest, especially the source data. We might store some metadata or annotations, but we move data from your data source and put it into the storage system, and usually there are API interfaces to connect to your storage system. For a lot of larger enterprises we're offering BYOC options, so we're actually able to deploy in your cloud VPC, so it's colocated with both the data sources and the storage systems that you're using. So they can have LlamaCloud within their own premises as well? Mhm. Amazing. Talking about the first step of this process that you mentioned, data connectors: you talked about LlamaHub, which, if I understand correctly, is a community-driven hub for data loaders, right? What would you say are your favorite data connectors, if there are so many? Are there some that were kind of unexpected? Yeah, well, we have a bunch of fun ones. I think we have one that reads from different types of websites. I think there's one that reads, it was either Hau or something. Oh, and there was a dad jokes reader, I remember, that reads dad jokes from a website. So that's obviously less on the useful side;
there are some fun community-contributed ones. The most useful ones are the document loaders: Microsoft SharePoint, S3, GCS. Probably the predominant way people are using LLMs on data is through processing and indexing documents, and usually their documents are stored in some sort of bucket somewhere. So this also includes Google Drive, Azure Blob Storage, basically all these different types of document store solutions. The next most popular ones are probably the common software systems that we use, and this is probably more common for individuals or startups: Notion, Slack, Discord, those are super popular. I think the reason for this is just, if you think about where information is stored in an employee's or individual's day-to-day use, it's typically in the form of, one, files; two, conversations; and three, any productivity tool that you use. So those are going to be the most popular data sources. Awesome. And then the second step that you mentioned is parsing, for which you have LlamaParse. Just to make sure: you mentioned that you can use LlamaParse as an API, so you do not need to be using LlamaCloud to use LlamaParse, right? Yeah, so LlamaParse itself exists as a standalone API, and it's also natively integrated as part of this overall LlamaCloud platform if you want to define your indexing and retrieval workflows. With LlamaParse as an API, you can upload your document, like a PDF, and get back a parsed markdown representation. It's very good at extracting tables, images, and embedded objects from the document. And here's a special feature of LlamaParse: you can actually input parsing instructions. This is something you don't
really see in other parsers: you can basically input a prompt. You can say, please extract all paragraphs as bullet points, or, for every piece of text, tag it with a page number. You can do that, and I think it will probably work if you try it, and that's a property of the fact that LlamaParse is powered by LLMs. All right, so, to be clear: LlamaParse allows you to ingest pretty much any kind of document, be it PDFs, be it slides from a PowerPoint presentation or something like that, and get a structured or text representation of all the information within your document, right? Exactly, yeah, that's a good way of putting it. It gives you back a structured representation, and it handles most common types of documents. The one it doesn't handle, which I'm pretty open about, is Excel sheets, just because Excel sheets are their own beast, and we're still trying to figure out the best way to process them. Right, and how is that different from, or better than, just performing OCR on your document? So OCR is a part of it. There's OCR on top of a scanned receipt or image, and that type of thing is actually part of LlamaParse, but there are other parsing complexities too. For instance, say you have text in a two-column layout, and you also have headers and footers. We not only perform basic OCR, we also run some general algorithms to handle document reconstruction and processing, to make sure that the spatial layouts are preserved and that we're able to tag things with the right sections. The other thing that we do is, for instance for tables, which are still represented as text, there's some special care that needs to be taken to
basically make sure that the tables are aligned. Say you have a bunch of rows and a set of columns: you want to make sure all the numbers in a column are preserved, because oftentimes what you see in some of these parsers is that the numbers get shifted and mapped to a different column or something, and that obviously leads to hallucinations, because then the LLM doesn't understand what's going on. An interesting thing we found is that if you can solve this problem, and get the text to appear in a nice representation, it helps LLM understanding and actually reduces hallucinations too. Awesome. So that is LlamaParse, basically: it allows you to perform OCR plus a real understanding of the contents of the document, including images and tables, which are surprisingly difficult to parse when you're not using a system like this. 100%. And just a brief note on why we're doing this: the goal wasn't just to create a nice PDF parser. I mean, it's pretty good at PDF parsing, to be totally clear, it's actually very, very good, and that's why I think a lot of people are interested. But the overall goal is to make sure we can tie this into the overall story of giving you proper RAG, or context, without hallucinations. Our goal is not to build in all the features that a traditional PDF parser might have; our goal is to build in specifically the right techniques so that we can represent your data in the right way, so that LLMs can understand it and are accurate in synthesizing information over it. Awesome. And it goes back to what we mentioned at the beginning: using LLMs also during the parsing process, which is kind of what you were trying to do from the beginning. Yeah, that's a good point.
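To make the column-shifting problem concrete, here is a small illustrative helper (hypothetical, not LlamaParse internals) that renders an extracted table as a markdown grid, padding short rows so values never slide into the wrong column:

```python
def table_to_markdown(header, rows):
    """Render an extracted table as aligned markdown so each value
    stays under its own column header."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    for row in rows:
        # pad short rows explicitly; a parser that lets values shift
        # columns here is exactly what produces LLM "hallucinations"
        padded = list(row) + [""] * (len(header) - len(row))
        lines.append("| " + " | ".join(padded) + " |")
    return "\n".join(lines)

md = table_to_markdown(["Quarter", "Revenue", "Costs"],
                       [["Q1", "100", "40"], ["Q2", "120"]])
print(md)
```

Note the second row: the missing "Costs" cell stays empty rather than letting "120" drift into the wrong column, which is the kind of alignment guarantee described above.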
So actually, you can have LLMs as part of this parsing process, and you can have LLMs as part of the transformation process: the chunking, the metadata extraction, all these things, like extracting structured outputs. And all of this is in the spirit of feeding it into more LLMs that go and do RAG over your data and perform agentic workflows over your data too. So yeah, you can have LLMs throughout this entire process, and it's very interesting to see where they can be applied and how they can all work together. Awesome. All right, and, just to be clear, LlamaParse is already open to the public? Can people start using it now, or not yet? Yeah, LlamaParse is available to the public. You can actually sign up; there's a free plan as well as a paid plan. We have a pretty generous allocation, you get like a thousand pages a day just for free, and then there's some usage-based pricing. And then LlamaCloud is in a private preview mode; we're hoping to open it up a little more very soon, but in the meantime, if you're an enterprise that's interested in building a platform for RAG in a team-based way, and you want to save engineering time on connecting to your data and processing it, come talk to us. We're very open to these types of conversations. Awesome. And okay, so we covered the first two steps that you mentioned. The processing part: can you shed a little more light on that? How is the processing part smart, or smarter than something that you could code yourself? So I think the interesting thing about processing, which is related to parsing, is also related to some of the stuff I mentioned before, which is: how do you extract structured information from unstructured data? And I think metadata
extraction, I guess, is an instance of structured extraction, but roughly speaking, if you're able to do that, then you're able to attach or augment the existing unstructured data with the right tags, which then allow you to better search, query, and filter this type of information, because instead of just doing semantic search by embeddings, you can actually filter by a SQL statement, or filter by a specific tag or topic. The way you can use LLMs to do structured data extraction is, at a very basic level, you feed text into the input prompt window of the LLM and ask it to generate the set of structured information that you want. If you're able to do this scalably and reliably over a large corpus of unstructured documents, then you're able to generate structure from these documents. The different types of structure include simple annotations, like I mentioned; they could include summaries; they could also include relationships between documents. This is where you start getting into knowledge graph territory, where you not only look at the information within a document, but also at how the same concept appears across different documents and how you tie different documents under the same concept. If you're able to do that, then you're basically able to construct an overall structure or ontology over your document set, and you can, and probably should, use LLMs to do that after a certain point, because doing it purely by hand is super time-consuming. And then, once you have that structure, you can leverage advanced indexing and retrieval capabilities over it.
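A minimal sketch of the structured-extraction idea above. Here `llm_extract` is a stub standing in for a real LLM call (which would be prompted to return the tags); it is faked with regexes so the example runs on its own:

```python
import re

def llm_extract(text):
    # stub "LLM": a real pipeline would prompt a model to return structured
    # tags; we fake the same output shape deterministically for illustration
    year = re.search(r"\b(?:19|20)\d{2}\b", text)
    topic = "finance" if "revenue" in text.lower() else "general"
    return {"topic": topic, "year": int(year.group()) if year else None}

docs = [
    "Revenue grew 12% in fiscal 2023 compared to the prior year.",
    "The team offsite is planned for spring.",
]
# attach the generated metadata to each document before indexing
annotated = [{"text": d, "meta": llm_extract(d)} for d in docs]
print(annotated[0]["meta"])
```

The important part is the output shape: each unstructured document gains a structured `meta` record that downstream search and filtering can rely on.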
And I suppose that generating this smart metadata and associating it with your nodes or your documents in your vector store improves RAG by a lot, right? Yeah, so this is a little bit technical, but there are a few ways it can actually improve things. Generated metadata directly plugs into metadata filters, which are implemented in a lot of vector DBs. What metadata filters allow you to do is filter by structured tags, in addition to just top-k semantic search. This gives you more precise retrieval results: for instance, if you specifically know that you want to search within a set by this topic, or by this date, or lower than this value, you can do that, and then you're guaranteed to only have results that satisfy the constraints you set out. So this leads to more precise retrieval. The other piece is that having tags actually helps your embedding representation a little bit. For instance, say you feed a piece of text into an embedding model. You could feed in just the text itself, but if you add some additional context, like the overall summary of the document, or semantic annotations like the name of the file or the page number, that can actually help your embedding representation. A really good embedding model can take that information in, so that when you ask a question relevant to that chunk, it's more likely to be retrieved via semantic search than if you didn't have this metadata. And finally, it's a similar thing: this also helps with adding the right context when you feed this into the LLM, because if the LLM didn't have, say, a summary of the document, it would just have less context on what's actually going on. Generally speaking, you can ground the LLM with more information so that it's more likely to generate the result you want.
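The "metadata filters plus top-k semantic search" combination just described can be sketched as follows. The word-overlap `score` is a toy stand-in for embedding similarity, and the chunk data is invented for illustration:

```python
import re

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, text):
    # toy stand-in for embedding similarity: shared-word count
    return len(tokens(query) & tokens(text))

chunks = [
    {"text": "Revenue grew 12% in Q3.", "meta": {"year": 2023, "topic": "finance"}},
    {"text": "Revenue grew 8% in Q3.", "meta": {"year": 2022, "topic": "finance"}},
    {"text": "A new office opened in Berlin.", "meta": {"year": 2023, "topic": "ops"}},
]

def retrieve(query, filters, k=1):
    # metadata filter first: results are GUARANTEED to satisfy the constraints
    survivors = [c for c in chunks
                 if all(c["meta"].get(key) == val for key, val in filters.items())]
    # then rank only the survivors "semantically"
    survivors.sort(key=lambda c: -score(query, c["text"]))
    return [c["text"] for c in survivors[:k]]

print(retrieve("How much did revenue grow?", {"year": 2023}))
```

Without the `year` filter, the two near-identical revenue chunks would compete on similarity alone; with it, only the 2023 figure can be returned.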
Awesome. Does it not explode the costs of ingesting your data, though? How do you deal with that if you have a lot of data, in a company for example? Yeah, you bring up a good point. Metadata itself can be both LLM- and human-generated, and if you have too much metadata, that's also a problem, because then every piece of text, every document, just has a lot of metadata, and that can actually degrade performance in many ways, both accuracy and also cost and latency. So it is a tradeoff that people have to consider. The other piece is, yeah, if you always use LLMs to extract structured metadata from all your documents, that is going to cost a bit of money, especially if you have a lot of documents and you're using very powerful models. My hope is that, one, as context windows get longer, as token costs come down, and as models get faster, hopefully this problem gets alleviated a little bit over time. And the second piece is that you can do some basic optimizations so that you don't have to fit the entire text in the context window. If you're trying to do structured data extraction, you can still do vector search retrieval to do that extraction, and the trick is you just have to make sure you find the right bits of text: you basically pre-index the text and then retrieve just the right bits. It's basically a RAG pipeline to extract structured information as part of that indexing or processing. Awesome. It really sounds like an amazing service, the entire cloud suite that you have. By the way, I also had a question about the embedding models that you're using there. Are developers able to choose the embedding models
they're going to be using in LlamaCloud, or do you have your own embedding models, which would make it more costly to migrate later on? Oh, that's a good question. Right now we integrate with roughly off-the-shelf embedding models. I think the idea of having custom fine-tuned embedding models is a very interesting one. I don't know about training one from scratch, that one's interesting to consider; the thing I'm interested in is, if you have an eval dataset and human feedback over time, you could potentially have a fine-tuning layer on top of your embedding model. Right. Awesome. I don't think we talk about it enough, but the part about passing instructions to your parsing model, that's just amazing. I suppose you're the only ones who do that, right? I think we're the only ones that do it right now, though others might. It's a cool, nifty feature, right? If you're a listener, I definitely encourage you to check that out, and give us feedback. We're always hammering out the long tail of nits and bugs and those types of things in the parser, so we're very receptive to you sharing your feedback and improving it some more. That's amazing. Also, talking about that as well: do you have any more services or products that you're thinking of bringing out, or is that your main focus for now? That's a good question. I think that's the main focus for now. The two main goals for this year are basically, one, expand the open source, and I can talk a little bit about that, and two, expand LlamaCloud. So 50% on each one. Most of the team works on both, but roughly speaking the overall allocation is 50/50. In terms of the open source,
it's basically about continuing to make everything production-ready and robust, and evolving with any new use cases that are emerging: being able to stay on top of all the latest LLM advances, integrating with these models, building out these use cases. And then the enterprise side adds quite a few more layers of complexity, really the product maturity to make this thing a scalable, reliable service that's enterprise grade. Awesome. And about the open source part: open source projects sometimes tend to be monopolized by super big companies, right? How do you see the future of these open source frameworks? Do you see Google or Meta jumping into the race with this, like they did for example with React? How do you see the future of this industry? That's a good question. I actually don't necessarily believe in that, and part of the reason, I mean, it's always possible they might. For instance, Microsoft already has a few frameworks: they have Semantic Kernel, they have AutoGen, and some of them are actually doing pretty well; I think AutoGen has a decent number of downloads. I think the main thing is, first of all, all these companies are creating their own models, and part of the value prop of being a framework is that you're completely agnostic to the model. There's just always going to be a bit of an adverse incentive, so that they're less community-friendly than, literally, a startup that doesn't care what model you use. We actually just want to integrate with everybody, and we take that perspective for basically all the models, all the vector stores, the whole set of integrations, all the data sources. We want to make sure you have access to all the data that you want, and so it's
just, I think, a little bit harder for a larger company to build that type of ecosystem when they specifically have one type of model or vector store that they want to push. Yeah, because they're going to be biased towards their own products. Right, yeah. That's some pretty interesting insight. Let's talk a little as well about what you see in the future of the industry, and some advice that you could give to beginners in this space. Let's say, for example, that you were starting off today: you're not a machine learning specialist, you're a software engineer, and you want to start working in this industry. What would you do first? Yeah, so, not to hype our own docs, but our documentation has a nice sequence where basically I would start off with the basics: do the prototype first, do the thing that takes five lines of code, just so that you actually see the overall use case you're building towards, and potentially see why it's interesting. Once you start with the basics, you go on a little bit more of a guided tour through some of the components, let's say to build a RAG system. As you start at the surface layer, start to go a little deeper into each of the different components and really understand a bit more about how they work. And then, two, build in some development principles around iteration and evaluation. I think maybe the difference between AI engineering and software engineering is that in software engineering, when you want to test a piece of code, you're very used to writing unit tests or integration tests, and generally speaking the types of logic you're writing are not super complex, so it's pretty easy to reason
about the overall behavior, or at least what the inputs and outputs should be. The difference with ML is that everything requires some sort of dataset, because the function space is super complex: if you put something into an LLM, you really don't know what the output is going to be, and so you need to make sure you have proper benchmarks and ways to test these stochastic systems. So having some sort of eval approach is pretty important; we have eval modules in LlamaIndex, and we integrate with a ton of different eval providers that provide really nice services around tracking these metrics and experiments. From there, basically work your way up towards scaling to more data sources. If you want to scale to more documents, for instance, you can use LlamaParse, or you can use anything that you want, really, and start to add in more layers of complexity with that eval benchmark fixed, so that you can make sure you're not introducing any regressions in performance. And as you start with the basics, I think RAG is probably the thing you should learn about first, just because it's easier to learn than agents. Agents can get really complicated, and even I'm confused sometimes about what exactly I'm supposed to be doing in a specific module, and there are also high-level agents, low-level agents, multi-agents. I would start off with RAG. RAG is a lot simpler to digest. It's not simple either, you can go into a rabbit hole, but at least it's easier, like we saw. You learn that piece first, and then you start adding on ingredients or layers like memory, or the ability to do multi-prompt pipelines and chains.
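The eval habit just described, keeping a fixed benchmark and checking every pipeline change against it, can be sketched as follows. The two `answer_*` stubs stand in for successive versions of a real RAG pipeline:

```python
# a small fixed benchmark: (question, expected answer) pairs
benchmark = [
    ("capital of France?", "paris"),
    ("2 + 2?", "4"),
    ("largest planet?", "jupiter"),
]

def answer_v1(q):
    # stub pipeline, version 1
    return {"capital of France?": "paris", "2 + 2?": "4"}.get(q, "unknown")

def answer_v2(q):
    # stub pipeline, version 2: a change that fixed one case but broke another
    return {"capital of France?": "paris", "largest planet?": "jupiter"}.get(q, "unknown")

def score(pipeline):
    # fraction of benchmark questions answered correctly
    return sum(pipeline(q) == a for q, a in benchmark) / len(benchmark)

# aggregate scores can hide regressions, so also diff per question
diff = [(q, answer_v1(q), answer_v2(q)) for q, a in benchmark
        if answer_v1(q) != answer_v2(q)]
print(score(answer_v1), score(answer_v2), diff)
```

Note that both versions score the same here even though v2 regressed on arithmetic, which is why per-question diffs against a fixed benchmark matter, not just the aggregate number.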
types of things, and then you start getting a feel for how to build more complex systems. Right. And about agents in particular: I've heard a lot more people talk about agents recently than, say, when GPT-4 or GPT-3 came out. How do you see the future of agentic behavior, even though it's pretty early on? Yeah, I think it's super exciting. Andrew Ng, for instance, has been tweeting about it, people have been building things, and it's started to work. People started building stuff with agents last year, but at the time it just didn't work super well; right now it still doesn't quite work super well, but at least it's starting to. So, examples like all the autonomous coding assistants: GitHub Copilot, Devin, SWE-agent, I'm forgetting some of the names, OpenDevin, some of these projects are popping up to basically be a worker for you that can operate independently and autonomously, as opposed to just you chatting with it back and forth. People see things that are really cool, and this is an example of something that's really cool, and afterwards they want to learn how to build it. So I think there's a revived interest in how you actually build scalable, reliable agents, and I'm really excited about it. With LlamaIndex on the open source side, and I know we talked a bit about LlamaCloud, we're making a big push on agents: better agent education, abstractions, materials, resources. And I think there are going to be two ways that people build agents. One is, if you're a developer that doesn't really care as much, let's say you're less technical or you don't
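The RAG-first path described above can be made concrete without any framework at all. The sketch below is a deliberately minimal, self-contained version: retrieval is plain word overlap instead of real embeddings, and the final LLM call is left out, so the function and variable names here are illustrative rather than any library's API (a real system would use embeddings and a vector store, for example via LlamaIndex):

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve relevant
# chunks, then assemble the augmented prompt an LLM would receive.
# Word-overlap scoring stands in for real embedding similarity.

def score(query: str, chunk: str) -> int:
    """Relevance = number of shared lowercase words between query and chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k most relevant chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the prompt that would be sent to an LLM."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "LlamaIndex is a data framework for LLM applications.",
    "LlamaParse parses unstructured documents such as PDFs.",
    "LlamaCloud is the enterprise data platform.",
]

prompt = build_prompt("What does LlamaParse do?", retrieve("What does LlamaParse do?", docs))
print(prompt)
```

Once this loop is understood, the layers Jerry lists (memory, multi-prompt pipelines, chains) are additions around the same retrieve-augment-generate core.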
really need to understand the fundamentals, there are going to be high-level interfaces that let you create agents. An example of this is AutoGen or CrewAI: if you take a look at the interface, you can just define the roles and the overall task you want to solve, create the network of agents that you want, and then it's roughly equivalent to pressing a button. You just press go and let them figure it out themselves. To some extent you can go even higher-level than that; it feels even less like programming, because you can do this through a low-code UI: basically define the set of things you want done and then have them figure it out. And then on the very other end of the spectrum, you have people building agents from scratch, writing in all the logic, all the custom prompt flows, all the decision points. That's typically for people that are more advanced and really want control over what this thing can do. Of course it takes more time, but on the flip side you get a lot more control over the logic. And I think there's going to be a funnel of people going from building these high-level things that can just go off and do stuff, to more advanced users that can really customize it to their use case. That'll probably make it easier to implement them in production, right? Because I'm not sure we're going to get to a moment where we can deploy these agent orchestration systems to production, where they pretty much just figure things out by themselves. I'm not sure how easy it is to get to production with those. I mean, I
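The "from scratch" end of the spectrum described above can be sketched as a hand-rolled loop where every decision point is explicit in your own code. In this toy version the LLM is replaced by a scripted stub so the example runs offline; all the names (`fake_llm`, `run_tool`, the `ACT:`/`FINISH:` protocol) are invented for illustration, not any framework's API:

```python
# Hand-rolled agent loop: the developer owns every decision point
# (when to call the model, when to run a tool, when to stop).

def fake_llm(history: list[str]) -> str:
    """Stand-in for an LLM call: returns a scripted next step."""
    script = [
        "ACT: search llama index docs",
        "ACT: summarize results",
        "FINISH: LlamaIndex is a data framework for LLM apps",
    ]
    return script[len(history)]

def run_tool(command: str) -> str:
    """Stand-in for tool execution (search, code execution, etc.)."""
    return f"observation for '{command}'"

def agent_loop(max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        step = fake_llm(history)           # decision point: ask the model
        if step.startswith("FINISH:"):     # decision point: stop condition
            return step.removeprefix("FINISH:").strip()
        observation = run_tool(step.removeprefix("ACT:").strip())
        history.append(observation)        # feed the observation back in
    return "gave up"

print(agent_loop())
```

A high-level framework hides this loop behind a single call; writing it yourself is more work but lets you insert guardrails, logging, or custom routing at any of the commented decision points.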
think that's an open question mark. Everyone's scared about it right now, because there are obvious concerns: you have no proper guardrails, you don't really have a way to inspect or control those types of things. But assuming you were able to bake some of those interfaces in almost out of the box, by default, I could imagine that if you have a very opinionated abstraction that somehow guarantees some degree of reliability, people would be willing to use it. Awesome, yeah. And have you seen those kinds of systems deployed in production already, like agent swarms, or not yet? Not really, although there are all these startups popping up. Yeah. And of course you can build agents with LlamaIndex. Yeah, and we're making that part a lot more comprehensive and robust. Specifically, the thing people often don't realize is that you can build your own custom agents in LlamaIndex; you can basically do whatever you want. You can use the out-of-the-box abstractions that we have, like function calling, some cool new agent implementations, and of course ReAct. The one thing we don't quite have right now, which is hopefully coming out relatively soon, is multi-agent stuff: good ways of setting up multi-agent communication, being able to spin up, as you said, a swarm of different things. That's something we're working on, and hopefully we'll have an exciting release. Awesome, amazing. And to finish, I also wanted to ask: are there ways or opportunities for people to contribute to LlamaIndex to make it grow, and maybe give you feedback? How does that work? Yeah, I appreciate you bringing that up. We love contributions, we love the community. I think we're
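Function calling, which Jerry mentions as one of LlamaIndex's out-of-the-box agent abstractions, boils down to one mechanic: the model emits a structured tool call, and the runtime routes it to a registered function. The sketch below shows just that mechanic with the model output hard-coded; the tool names and the JSON shape are invented for illustration (real providers and frameworks each define their own schema):

```python
import json

# Core mechanic of a function-calling agent: parse a model-emitted
# JSON tool call and dispatch it to the matching Python function.

def multiply(a: int, b: int) -> int:
    """A tool the agent can call."""
    return a * b

def get_weather(city: str) -> str:
    """Another (fake) tool."""
    return f"sunny in {city}"

TOOLS = {"multiply": multiply, "get_weather": get_weather}

def dispatch(tool_call_json: str):
    """Parse a tool call and invoke the registered function with its arguments."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Pretend the LLM produced this structured call:
model_output = '{"name": "multiply", "arguments": {"a": 6, "b": 7}}'
print(dispatch(model_output))  # 42
```

In a full agent, the tool's return value would be appended to the conversation and the model asked for its next step, looping until it answers directly instead of calling a tool.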
always open to more types of contributions, and there are different kinds. A very classic one is LlamaHub. LlamaHub is not just data loaders; it's our entire ecosystem of any type of integration, and this includes LLMs, vector stores, embeddings, agents, and different templates you want to contribute. We're always looking for new packs, templates, and integrations, so it's a pretty easy contribution interface. Everything is version-tracked, so if you contribute something you can pin the dependencies to make sure it doesn't break later on; we've spent a lot of work making that really good. So LlamaHub would be a great place for contributions. The other is that, as people go through the project, they submit bug fixes, nits, documentation fixes; all of these are obviously very welcome as well. And one last question: if you were starting off in the industry right now and you wanted to create a company, what kind of company would you think of creating? Yeah, probably something LLM-related. It's interesting; to be realistic, you can start any company you want, it's just that certain companies are going to be harder to start than others right now. If you start a foundation model company, I think that's going to be a little tricky, unless you have a differentiated view on foundation models, maybe something super domain-specific. In infrastructure I think there are certainly opportunities, but certain parts are more saturated than others. Evals, for instance, and observability are a little more saturated, to be totally honest, because there are a lot of
these types of companies, and if you were to build something there you'd probably need to make it more domain-specific. And actually I think this is the general trend: you go into the application layer, which is inherently a bit more domain-specific, and you try to figure out how to build general applications that really solve end use cases very well. If I were to build a company today (okay, it's not necessarily something I would build, it's just an idea), I'd probably think about building an agent that works really reliably and can solve a critical business workflow really well. Similar to how Devin had a really cool demo for a software engineer, which to be honest is a really hard problem, if you built an agent that can actually perform a sales task really well, or ops and assistant work, or even legal work really well, that's something I'd be really excited about. Yeah, that's probably going to revolutionize every single industry on Earth. All right, well, is there any other comment you'd like to make, something we forgot to mention? No, I mean, Alejandro, thanks so much for giving me the opportunity and asking all these questions. I think we've covered both the open source and some of the enterprise aspects. So LlamaIndex, the open source project, is the broad orchestration framework, always improving and welcoming contributions, and LlamaCloud is the enterprise data platform for data processing and ingestion. And then, oh, last bit: if you're interested in
AI engineering and those types of things, we do have a careers form, so feel free to put your name down. Just a quick call-out there. But no, thanks for having me on; this was a fun chat. It's been amazing, and thank you for all your detailed explanations of everything I asked about. Sometimes I was probably a little redundant with some questions, but it was to make things as clear as possible for our listeners and for me. 100%, yeah. No, I appreciate you doing that, because I think I sometimes get into this mode of assuming people already have context, and it's good to take a step back and really dive into building stuff from the ground up. Amazing. Well, thank you very much, Jerry, it has been an absolute pleasure talking to you. Pleasure talking to you as well.
Info
Channel: Alejandro AO - Software & Ai
Views: 4,232
Id: imlQ1icxpBU
Length: 76min 7sec (4567 seconds)
Published: Wed Jun 05 2024