Podcast time! Chris, hey, welcome. Glad to be here. Yeah, so we've got a few things on the agenda today. One, we're going to talk about our main subject; then we're going to talk about the OpenAI GPT-4 Omni news, and actually there's a ton of announcements in AI, a ridiculous amount. And then the other big thing is going to be Apple and the upcoming WWDC: what we expect to see, what we want to see, and some evidence for what we think we're going to see.

Okay, yeah. First off, as a database developer who is not an expert in all this AI stuff that you are educating me on: in what way is a large language model a database? Is it flat? Is it relational? Because obviously it's a database, right?

Right, so, excellent question. Nice baiting, but no. This is a super important topic and concept, so let's frame this up a little bit. First of all, anybody who's listening to this or watching this is a Claris person interested in AI, right? And what do all of us Claris people have in common? We're all FileMaker people. We've been building databases for, in a lot of cases, decades. I think you're going on like 70 years or something like that, right? In dog years, of course, naturally.

And the other part, and this is what I really want to zoom into here, is the overall perception. Not the people who are calling us up asking us to build AI solutions, but the people who aren't quite there yet: they think AI is a question-answer machine. The concept is almost like it's this source of knowledge, and I'm asking questions and retrieving knowledge from it, which is exactly what a database does, right? Kind of what Google does too, right?

Yeah, it's literally what Google does, and that might add to the blurry lines too, honestly, Matt, because all this data is the same data that Google has indexed for searching purposes. It's all the web corpus of data. So can you ask questions of it? Can you retrieve information from it? Sure, yes, of course you can. Is it a database? No. And like a hard no: it is absolutely not a database. Here are some really important characteristics that make it not a database. First of all, if you said, "hey, here's all the passwords to my FileMaker database," and you typed that into ChatGPT, and then someone else came back later on and said, "hey, what are Matt's passwords that he put into chat?" there is absolutely zero chance they can retrieve that information. And the reason is that the models actually answering the question stopped learning; they stopped training when they were published, so to speak, when they were released into the wild. So when they talk about "don't put your sensitive data in ChatGPT because you don't want it to be part of the model," they're absolutely not saying that information is going to exist in the exact state you put it in, in the model, ever in the future. What they're actually talking about is training data. And the thing that's really hard, I think, for people to understand is that these aren't databases. In fact, what they are is these amazing learning machines. They're not question-answer machines; that's not their intent at all. They don't retrieve information, even though they sometimes can. Early on I actually did a test: I said, who was on the cover of People magazine in, like, July 1960? It doesn't know. Well, I would almost bet that it could know something like that, right? Because that information is in its training data. So when you ask a question of it... yeah, well, if we zero in even further, it's kind of interesting what's actually happening when you ask a question. So, what a
Transformer model is, and that's what these language models are: they are probabilistic completion algorithms, super incredible probabilistic completion mechanisms. It's very much the same thing as when you're typing into Gmail or something and it knows the end of your sentence; it offers the most probable thing. And in the Gmail case, or at least for sure now in the Google AI space and the Microsoft AI space, they're looking at your emails and saying, this is most likely what you would say. Even in search: this is most likely what you're asking, based on your search history or something like that. That is a probabilistic completion algorithm. So when you're asking, "who was on the cover of People magazine in 1960?" you're not actually asking a question. You're typing a sentence that happens to be interrogative, and then the answer is really not an answer; it's just the most probabilistic completion of that question. So sometimes you're going to get the right answer, and sometimes you're going to get the wrong answer. The whole point, and this is where I want to be aggressive, is that it is not a source of truth that you should care about.

Let me propose a thought experiment to you as a 70-plus, in dog years, FileMaker developer. 41 actual real years as a database developer. Very impressive, hats off, my friend. So let me ask you this: in your 41 years, how many times has one of your customers asked you to integrate Google searching into one of your FileMaker layouts? How many times? Let me think now... carry the three... zero. Exactly. Or even question answering, like asking a question of a repository of knowledge? Zero. Because the database is the whole point: Find mode is literally how you query your own data, right? You're looking for what you have, and what you have is a source of truth. Bingo.

So we should not care. If you're doing it right, if you're thinking through "how can I leverage AI for my customers or in my own databases" correctly, you should not care about the information stored inside ChatGPT, or any of these models; I want to try not to make AI seem like it's just ChatGPT. We don't care. Instead, and we've talked about this extensively in previous episodes, like the one on context where we talked about retrieval and inserting your own truth: we want to be responsible for the truth as developers. As FileMaker developers who want to integrate AI, one of the main things you're taking responsibility for is truth; you are the one in control of truth. That way, when people ask a question, you go ask that question of your FileMaker database, or maybe documents stored in your FileMaker database, get the answer, and then go to the language model. You're not asking the language model the question anymore. All you're asking the language model to do is, potentially, analyze this, summarize it, talk like a human. And honestly, imagine if I had told you five years ago that a new version of FileMaker was coming out with an add-on or a function that could talk like a human. Or not even out loud: you could take disparate pieces of information, like a found set, and have it make a paragraph on demand. If we had told people that was a new feature coming in, like, FileMaker 17, people's heads would have exploded. Yeah. That's what this is; that's how we should be leveraging what this is.

And the other reason this is so critically important is that, as developers, we need to take ownership of this information. People don't understand the concept of how this works: how the models are trained, where the truth comes from. And that is why there's so much focus on hallucination, too.

Quick anecdote: remember one of the first videos you ever posted on YouTube? You did a really, really good ChatGPT one; I was so proud to see it up there. Maybe we can find the link and put it in the show notes if people want to check it out. In it, we sort of disparaged this idea of zero-shotting, which means you send one prompt and then think you're getting the final response back. That's not how you should be using these language models either. But you said, "hey, what's this calculation?" I don't know if it was a script or a calculation... I think it was a calculation; I hadn't really done much with scripting at that time. And the result wasn't perfect, so you said, "no, actually, it should be blah blah blah," and then you got a pristine, good calculation that you could do something with. Yeah, I did one on SQL code too; it writes ridiculously good ExecuteSQL, really, really good SQL. And one of the comments, and I mean this with love in my heart, I don't even remember who the commenter was, but it was notable: "way to go, Matt, now you just taught ChatGPT not to make that mistake again." That is where I think people think it's a database. I thought that at the time, by the way; I thought I was updating it. Right, and just to be clear, you were updating it in that one conversation, in the token window of that one conversation. But the second you start another conversation, or another user starts another conversation, or even you in your own account start another conversation, it has no knowledge of what you said. You can store that memory somewhere and then retrieve it in other conversations if you want; we can do that as developers.
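As an aside, that "store it yourself and re-inject it" pattern the hosts describe can be sketched in a few lines. This is a hypothetical Python sketch, not any real API: the memory store, the function names, and the word-count token estimate are all illustrative stand-ins. The point it demonstrates is that the model never changes; a correction such as "use Case, not If" survives into a new conversation only because the developer saves it and prepends it to the next prompt, within a token budget.

```python
# Developer-managed "memory": the model itself is frozen, so anything we
# want remembered across conversations must be stored by us and re-injected
# into each new prompt. All names here are hypothetical stand-ins.

MEMORY_STORE = []  # in a real solution this could be a FileMaker table

def remember(fact: str) -> None:
    """Persist a correction or preference the user gave us."""
    MEMORY_STORE.append(fact)

def estimate_tokens(text: str) -> int:
    # Very rough stand-in for a real tokenizer (one word ~ one token).
    return len(text.split())

def build_prompt(question: str, token_budget: int = 8000) -> str:
    """Start a fresh conversation: prepend stored memories, newest first,
    stopping once the token budget would be exceeded."""
    parts = [question]
    used = estimate_tokens(question)
    for fact in reversed(MEMORY_STORE):
        cost = estimate_tokens(fact)
        if used + cost > token_budget:
            break
        parts.insert(0, f"Instruction to honor: {fact}")
        used += cost
    return "\n".join(parts)

remember("Always use the Case function instead of If in calculations.")
prompt = build_prompt("Write a calculation that grades a score field.")
```

With a small `token_budget`, older memories simply fall out of the prompt, which is exactly the "it forgot my instruction" behavior discussed above, just made explicit and controllable by the developer.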
But the point is, the model doesn't change. Following up on that anecdote, about the error it made in the calculation: when I write conditionals, I only ever use Case, I never use If. So I said, "don't ever use If, just use Case," and it did for the next bunch of them. Then, later in the same conversation, it started using If again. I'm like, what did I just tell you? And that's because we exceeded that window, right? The token window. You exceeded the window, and at the time it was probably only around 8,000 tokens. Now we have hundreds of thousands of tokens, so it's less of a concern, and later in today's episode we'll even talk about a million tokens, or two million tokens, and what that means.

But the idea is that we can affect the knowledge in a model, and there are two ways to do that. One of them is something called fine-tuning, and I'm here to report that I've gone down the rabbit hole on fine-tuning; I've learned everything there is to learn about it. We've tried it; my team has done this. Luckily, we had very patient customers in the early years, like four years ago, and we tried to change the knowledge in the base models. The amount of data you need to actually affect the knowledge inside the model, and it depends on the use case, but just to give people an idea: the amount of data you would need to affect the knowledge inside a copy of the model (it's not even the real GPT-4, it's your own copy of GPT-4) is more data than almost all of the FileMaker databases we're talking about. You would need pristine, formatted prompts and completions in the hundreds of thousands, if not millions, if you're lucky, to try to affect the truth inside these models. Yeah, fine-tuning is a nothing burger. Plus it's expensive: high compute. If you go with Azure, for example, you have to pay 57 cents an hour just to have the model sitting there without anybody even asking anything of it. It's ridiculous. It is not meant for us; it's really not even meant for a lot of enterprise use cases, to be honest.

Instead, this retrieval, this RAG thing we keep talking about, is the way: you just go get the truth, just for the one little moment someone is asking a question. That way we can search our own FileMaker databases, or API sources, or documents that we're storing in our FileMaker databases. That's how we manage this stuff. And if you read between the lines, that means: who cares about hallucinations? Who cares when you ask it a question and it comes back and it's not correct? You give it the right answer. That's why I describe retrieval augmented generation, RAG, as asking the model to take an open-book exam. The book is you, supplying it with the right answer. I think that's critically important.

It also has a bit to do with something we'll likely talk about, this Claris AI learning initiative, where we want to take the deficiencies inside the base model and, as a community, add so much of our knowledge into it, through a RAG process, that it then comes back with correct knowledge. Just to set the table stakes on latent knowledge within the model: I've seen some people comment that they ask it FileMaker questions and it's 100% wrong. You and I did a very informal study with your proficiency exam a couple of episodes ago, which I probably want to redo now. Yeah, we have to do it on Omni, right? Actually, I want to do more than that; I want to make it really open book: upload the FileMaker tech manual, or make it reference the FileMaker website for answers, or be really specific. What you just said right there is exactly what the entire Claris AI learning initiative is all about. As a community, obviously we're going to put the help system in there; Claris is going to donate their help system, all their knowledge base. But we want people who run forums, and people who have their own training data and their own documentation, to throw it all in there. Everyone will get cited and have traffic sent back to their own websites, but that knowledge will then change the latent intelligence of the model. I can tell you that GPT-4 got a 78% on your proficiency exam, but what I can tell you more scientifically is that back in 2022 I was working with someone at Claris on some skunkworks stuff, and we did what's called semantic scoring: we took around a hundred questions plus the correct answers, asked the model what it thought each answer was, and then did a semantic comparison against the correct answer. That came back with a 0.64, which is like a 64% on a hundred-question exam. So much room for improvement, and that's what the initiative is all about. Anyway, that's how it pertains to FileMaker folks. You know, before we talked about it, I didn't think I had that much to contribute, but I have hundreds of videos of all the classes I've taught, literally, that I can just give to you. You've got tons too. Yeah, and just to give people an idea: the 2.5 million tokens' worth of data that I'm donating to this open-source project, and people can connect any source they want to this shared knowledge and it'll all be cited, is every video I've ever done: every lynda.com course, every LinkedIn Learning course, every DevCon session. The Claris folks are even talking about giving us access to DevCon presentations. I mean, imagine the knowledge set we can create globally, with all the individual
people and companies contributing to that. It would just make it vastly better. And that doesn't fall into the category of fine-tuning; that's something different? No, what we're doing is creating a RAG source for this. Technically, depending on how much data we get, we could attempt to fine-tune at some point down the road, but RAG is really all we need for what we're trying to do here.

Okay, so anyhow. I listen to this guy in the car... yeah, sorry, this is one of my favorite podcasts, and this clip is a perfect encapsulation of what we're saying. I wanted to give David Friedberg from the All-In podcast a chance to speak exactly to what we're talking about; he gives a really interesting example that I want you to comment on after he's done. Let me play what he has to say:

"My big takeaway on all this is that there's a real, deep misunderstanding of what these models really are and what they're doing. The assumption, I think naively, is that they take in a bunch of data and then they spit that data back out, in some cases maybe slightly transformed. But the truth is that the models don't actually hold the data they're trained on. They develop a bunch of synthesized predictors; they learn what the prediction could or should be from the data, and this is similar to how humans operate. Can you pull up the article, Nick, that I posted? This article is from last May, and you guys may remember this case; I can't remember if we talked about it on the show. Ed Sheeran was sued by the estate of the writer of the Marvin Gaye song 'Let's Get It On' for infringement in his song 'Thinking Out Loud,' and he actually prevailed in court, where he walked the jury through how he runs his creative process, how he came up with the song, and how he does his work. As you know, Ed Sheeran, and nearly every musician, every artist, listens to other people's music, looks at other people's art, and they synthesize that data, that knowledge, and that experience into their own creative process to output what in their mind is a novel creation. And I think AI works in a similar vein, in that it's not actually storing the images, it's not actually storing the music of third parties; it's learning from that process. Musicians learn on classical music, they listen to and play other pop artists' music, and they learn things they like, things they don't like, things that fit together well, certain intonations, syncopations, and that's how they develop their own tracks."

So, as the artist of the podcast, speak to that. Is that your process as well? It's really interesting. I've followed a lot of those stories about the lawsuits over songs like that, and listened to the examples of the A song and the B song, which are like, "oh look, it's in the same key, and it uses the same chord progression, and the timing is the same," blah blah blah. Well, there are only so many keys, you know, and if a song was written on the guitar, it's probably going to be in E or G or C. It's not going to be in A flat, because that's a really hard chord to play on the guitar; you have to barre it at, you know, the fourth fret. And chord progressions are also really limited, right? Just like, how many colors are there? I guess there are millions, but there are the basic ones: red, green, blue, yellow, purple. Indigo's not a color, they made that one up, because Galileo really liked the number seven even though there are only six. Let's not get too far off topic; I might have ADHD.

Okay, so another thing I've geeked out on a lot, especially living here in Greece: Pythagoras figured this out a really long time ago, 500-something BC. It turns out the Pythagorean school was on Samos, one of the Greek islands tucked in close to Turkey. Is it right over there? Right off your right shoulder, that way. Okay, gotcha. Yeah, I'm actually quite sure of that. So when you play a guitar string, or any kind of string, then put your finger right in the middle and play it again, you get an octave. From that we figured out this overtone sequence, and the notes implied in it are the notes of the scale: it goes root, octave, fifth (or like a twelfth), then a third, and I can't remember exactly what comes next. The other one that's really interesting is the dominant seventh. And within all that, when you place chords using those notes, there are really clear leading tones, really clear places where the music wants to go; it wants to complete, sort of like the completion algorithm we're talking about. Yeah. Also, from some advanced music training lately, working on a master's degree: it's very different how this is interpreted around the world, but in the Western European classical tradition, even in rock music, the palette that we have is really, really similar. Greek music, Middle Eastern music, a lot of African music, Indian music: totally different modes and themes. They still all use some of the same notes, but some of them use scales with more microtones and things like that, all of which is implied in that overtone series. So I guess what I'm trying to get to is: based on what the structure is, you're only going to get so much variety, if you will. That's fair. You're going to get some things that are really similar no matter what. There's even a really funny video where a guy plays the exact same chord sequence and sings like 15 different songs that all use it.

So then, if we brought this back into the FileMaker knowledge conversation we were having a moment ago... yeah? I don't think that story went anywhere, sorry about that, people. Well,
it was about the process. The process is: let's say we take all my training videos and all your training videos, and a person who's brand new to FileMaker watches them all because they want to build a very specialized, one-of-a-kind FileMaker database that no one has ever built before. Yep. Does my video contain the implicit instructions on how they should build that? No, of course it doesn't. Do yours? No, of course not. Nobody's does. I knew that was coming. The point is, the model is using our information to train itself, to learn the skills it needs, and then it applies them to some use case that you and I, when we were creating our training content, were completely unfamiliar with. Yep. In the way a human learns, like the gentleman in the video clip said a moment ago: a musician goes and listens to things, "oh, I like this, that's a style I like," and emulates some of it, compiling all this stuff together and then generating new stuff. In that same way, that is what these language models are. They are not databases. It's not, "hey, someone tell me how to create this exact script, because someone created the exact script before and I just want you to retrieve it and show it to me." No: these models could teach someone how to build an entire FileMaker database from start to finish, a database that's never been built before, because they're using knowledge they learned about techniques, how functions work, logic, how all these things are associated with each other, and then creating something. Data models, scripting, everything: all that knowledge is in there. And a 0.64 on a semantic scoring like that is actually a pretty good baseline to start with, so we should be excited that there's that much latent knowledge about FileMaker in any one of these models. That was GPT-3, by the way, not even 3.5; that was a long time ago. I would imagine 4 is even more intelligent and would score even higher. But the point is, if we layer our knowledge on top of it, and, like a human, it then learns FileMaker, it can become that guide on the side. Or, for practical use of language models integrated into our business applications, we just give it the business knowledge, we give it those answers, so it doesn't even have to figure out what the answer is; we're doing that for it.

So that was a bit of a soapbox on how these language models are not databases. It's really important, I believe. I'm wondering what other words could describe what they are: synthesizers, creators, maybe transformers? It is, literally; the T in GPT is for Transformer. So that's not bad, but it has more to do with the training technology: what's new in these language models is that they're Transformer models, right? Yeah, and like he said in the video, it doesn't spit out what has been; it's actually creating something kind of new. You just wrapped a bow on it. That is what these things are, and we should accept them as such, get really excited about it, take the responsibility of truth onto ourselves when we're building these business applications, and move forward from there with our understanding of AI.

We also mentioned that I tested this on GPT-3. Since then there's been 3.5, then 4, then 4 Turbo, and, oh my god, this week a ton of stuff has been announced. Before 4 Omni, because I want to linger on 4 Omni, I just have to say: it's shocking, as a FileMaker developer, the kind of stuff that can happen in a two-week period in our new world of
AI, yeah. Anyway, it was just a few years ago that we were on an 18-month release cycle within our ecosystem, right? Then they got really excited that they were going to do agile releases, like three times a year. Now, literally, massive AI organizations are doing that at an even faster clip.

A couple of highlights before we get into OpenAI, because we don't want to just say AI is OpenAI; there are a bunch of models out there, and Google has a bunch of them. We've alluded to long context windows before, and as they become more pervasive we'll probably show some examples and dedicate more discussion to them in a future episode. The Google I/O event also happened this week; we'll work backwards in time chronologically. Google I/O happened the day after OpenAI's event; OpenAI actually moved their announcement to right before it, which is kind of funny, like the late-night talk show wars or something. Google announced improvements to their Gemini 1.5 Pro model, which is generally available and has a one-million-token window. So what does that mean? I've got a little graphic here that we'll talk through, for the people who are checking us out on, what do the kids call it, I think it's YouTube. There we go, YouTube. Most people are watching on YouTube, but a lot of people listen on the podcast too; I listen on the podcast when I listen back. For those people, what we're looking at is a chart of token windows: Gemini 1.0 Pro has 32,000 tokens, GPT-4 Turbo has 128,000, Claude 3 has 200,000. That's the token window. And just to tie this back: when you told it to use Case instead of If and later on it forgot, that's probably because your conversation went past the, at the time, 8,000-token window. It no longer had the memory. That's why memory is so important.

Now, with Gemini 1.5 they announced a Flash model, which I've tested a bit, and it solved some problems that 1.5 Pro was unable to, so I think it's interesting. By the way, Flash was trained on data that was output from Gemini 1.5 Pro, so purely synthetic data. Flash also hearkens back to a really long time ago in your career, when you had that Flash-based chart tool for FileMaker. Hey, for the kids: the Oscars seated celebrities' butts in seats for 13 years using a Flash-and-FileMaker application. That's how important it was to humanity.

So looking at this chart: 32,000, 128,000, 200,000, two million. That is not Moore's law. And how much time has passed? Since Turbo, less than a year. That's insane. So what does that actually mean to people? Because tokens are a unit of measure we're not familiar with. It almost doesn't matter what the unit is: nothing else in technology is on a curve like this. The exponential growth here is bonkers.

And that's why, as I was telling somebody yesterday: when the internet came out, everyone said the internet was going to change everything, it was going to be absolutely pervasive, and they were right, but it took like a decade. Why did it take a decade? A lot of people are sitting there thinking AI is the same kind of thing, but these are massively different scenarios. For internet adoption to occur, cable companies, like the second least efficient organizations on the planet next to the DMV and the government, were in charge of physically laying cable across our huge, vast continent, in the US at least, to connect people to the internet. That took like five to eight years to get to broadband acceptance. There were 128k, 256k modems, whatever, and we needed broadband to do the really cool Web 2.0 stuff. So arguably that took years and years.

Not AI. Sorry, folks: not only is there no bottleneck, but if you listen to anything Nvidia is doing, and they obviously had the biggest stock increase in the history of the market, it's because they are AI's version of the cable companies, except they're an accelerator, a force multiplier for this growth. Their new chip is so cost effective that it actually costs less than the cables connecting the computers it replaces. It not only removes the bottleneck for AI growth and adoption, it accelerates the whole process.

And then, looking at the models themselves and their capabilities, if we just use this token measure: look at the jump. It was something like 9,000 percent growth leading up to Claude 3.0, from 2020 to 2023, before the two-million-token one. Now Gemini 1.5 Pro and Flash go up to two million tokens: one million is generally available, and for two million you can get on the waitlist. That equates to, instead of plain text: a two-hour video, 22 hours of audio (which is like one of our podcasts), 60k lines of code, or 1.4 million words of text in one API call, and it will be able to evaluate all of it. Imagine an API call with that much data. Well, we're doing it. We're doing a bunch of video evaluations where we're saying: what's going on in this video, find certain elements, describe it, make observations, blah blah blah. You would reference it,
right you wouldn't actually like upload a benex version of the video would you of two a Bas 64 version of the video really but you're that big you wouldn't okay I would think you like point to like you know a bucket on you know well yeah no you yeah you point it to like anv location or something like that and it consumes it yeah yeah that part is more practical than sending it an API call absolutely but the point is when we talk about video it's not just a transcription it could be but you could say make observations about what you're seeing like we're doing one next month that has cameras that four different individuals that are all engaging in the same incident they're doing and from all their different perspectives and doing that is one analysis just to kind of let you know what what kind of stuff is possible here right yeah so so anyways that's Google announced that I would say that's probably the most notable thing uh the other thing that came out and I don't want to belittle it there's there's a lot going on there the uh other things uh that came out were uh Microsoft uh had their build event and just at a high level they announced like basically everything in um like their entire operating system is like all AI co-pilot is just integrated you can just rightclick contextually anywhere long story short it's awesome super cool stuff and they essentially have like a desktop version of co-pilot that is exactly what the oh oh and the one one thing that Google did that was really cool ask your photos and which means you could just say like hey what's my license plate number and it would like know oh this is probably your car because you have a lot of pictures of it here's a picture of you standing next to it right and oh here's one time that you caught the license plate on it in the background of some photo where you were standing next to your car by the way we can do that as FileMaker developers uh by uh embedding images and using language models uh to describe 
You don't even embed the images themselves: use a language model to describe the images, embed that description, and then search against those embeddings. We can do the same thing in our FileMaker applications — I just wanted to note that. On the Microsoft Build event, there's a huge amount and people should check it out, but the thing I thought was so cool is this feature called Recall. I think most people are terrified of it. Like Time Machine on the Mac, where you can revert back to a certain point in time, it literally tracks every single thing you're doing on your machine, and then you can use that as a retrieval source for questions or whatever. By the way, I think we're going to see something similar from Apple, as we'll talk about. So that leads us, working backwards, to OpenAI's GPT-4o — which, Matt, the "o" stands for what? Omni. "GPT-4o" reads like "4.0," but there already is a GPT-4, so they were trying to be catchy, and Omni really refers to it being multimodal: its inputs can be video and audio and text, and its outputs the same. They're deliberately saying there used to be this mixture-of-experts-style approach, where you tell me what your input's going to be and I'll route you to the video model or the audio model; now they've trained one model that does all of these things. Not only that, but it's cheaper — half the price, I think, and twice as fast — which is a huge deal. That means for anybody who's already integrated OpenAI models into their FileMaker solutions, it just got faster and cheaper compared to GPT-4 Turbo, which was only a few months ago. It's insane. Exactly. One other thing: the videos at openai.com are absolutely worth watching to see the use cases. It's crazy. Yeah, they did a really good job, and most of what they were doing was about the iOS app
that's coming soon, and then the desktop app, which I think you're going to show in just a second. Yes, I am. And there'll be an upgraded iOS app, because there is one already. Yeah, and even in the desktop app right now, you have to be explicitly invited to get the screen-broadcast feature and the new voice mode where you can kind of interrupt it. A lot of the videos — some of which we're showing screenshots of on screen — were examples of them doing exactly that. Like this one here: who is this guy? Let's have him introduce himself; let's talk about him for a second, actually. Okay, so for the YouTube folks, who are we looking at, Matt? It's the guy Khan, from Star Trek. Yeah, no — different Khan. Salman Khan, or Sal Khan, is actually kind of a luminary in the AI space, and really in computing and education too. He started Khan Academy, he's got a great story, and everyone should be aware of the work he's doing. He was one of the early insiders when GPT-4 came out: they gave him early access, and he built Khanmigo, which is like the copilot he integrated into Khan Academy, and it really had a profound impact. We can all do this, and by the way, what Khan Academy is doing with Khanmigo is what we're proposing to open source and give out for free to the entire FileMaker community worldwide as part of this Claris AI initiative. Oh, so he's taking all his videos and stuff like that and putting it into it? Exactly. And so you can talk to it, and it doesn't just answer questions. The way these models are structured, you can say: hey, if a kid asks too many questions, or they seem like they're not getting it, or they even have an intonation that suggests they're frustrated, pause, go back to some of the basics, give them a quiz, try
to reel them back in, right? So that was just a textual conversation they could have. The big thing that was introduced — let me just scrub to this hypotenuse bit, let's play a quick little clip: "Which side do you think is the hypotenuse?" "Um, I'm not totally sure. I think it might be this one, but I really am not sure. Side AC?" "You're close, actually. Side AC is called the adjacent side…" Okay. So what was actually happening on screen is, instead of having a chat experience like Khanmigo — or what we're hoping a Claris assistant will do — now you can actually talk to it. The talking part, voice input and voice output, isn't new, but the new version of it is really the big announcement in GPT-4o. In the app we're seeing, the delay — I've talked about it before, it used to be like three, four, five seconds — is now two or three hundred milliseconds, just about the same as a human talking. The other part is that you can interrupt it, which sounds rude, but that's actually a natural part of conversation. Yeah, yeah, I got it, I got it. I thought we'd probably rant on that, but it doesn't matter. So this is a big deal. Well, first of all, regarding Sal, I want to really recommend this book that just came out — this is another thing that happened this week — Brave New Words. Not Brave New World, the dystopian novel; this is Brave New Words, hopefully not dystopian, and its subtitle is How AI Will Revolutionize Education (and Why That's a Good Thing). Honestly, I strongly share this same vision of education moving forward, and to me what that means is a "guide on the side" for training purposes instead of a "sage on the stage." The sage on the stage is when you and I are sitting up in front of a room, lecturing to people and spewing our knowledge out. The guide on the side is a tailored tutor that
knows exactly where I'm at in my learning journey, knows how to spoon-feed me material at my level, can encourage me to keep moving forward on that journey, and is available 24 hours a day, completely dedicated to my learning process. That is, in essence, what's covered in this book. I strongly recommend checking it out, or listening to it — I checked on Audible and it's not available yet, but it just came out; my pre-order arrived like two days ago, so it's just rolling out. Sometimes Audible comes a couple of days later; it's a big enough book, it's for sure going to be there. Yes, agreed. So let's actually, for our YouTube folks, show what the desktop experience looks like and why this might be relevant to FileMaker folks. Let's take a quick gander at your desktop. You know we've always got to keep this relevant to FileMaker. So how could somebody use this for FileMaker? First of all, just to set the table: this version of the desktop app does not have the screen-broadcast observation part — I want to be clear about that. What it has is the ability to take a snapshot; we'll show that in a second. The super cool thing that just watches your screen as you move around and makes observations is yet to come; they're rolling that out slowly. But the same principle applies here. Go ahead, Matt, explain it to people. I'll just take an example script. Here's a script I wrote that I use in the classes I teach — a really, really simple way to call SendGrid to send an email. I found some other approaches and tried to boil it down to something really simple, because APIs can sometimes be tricky. So we put it into the desktop app — and let's talk about how you put it into the desktop app as well. So first
off, we'll show it in a second too. So yeah, I clicked on a little friendly icon — there it is, in the bottom-left corner — and you can upload a file or take a screenshot. If you choose a screenshot, you choose the application, and it takes a screenshot of that window. That's what you did to include the script. That's what I did, and we had a little bit of a lag, but then it reads the whole script and gives me a bunch of advice on how I can refactor it and make it shorter. And just to be clear, we didn't even give it any instruction — you just hit send, and this is what it thought it should do. And I didn't use it within my FileMaker pre-trained setup or anything; this was a one-shot example. Maybe a negative-one-shot example, because we literally didn't even tell it what you were hoping it would do; this is just what it thought it should do. I'll tell you what I really wish, though: I really wish this were live video, watching all the time as I'm working. Well, that's what the new version is going to be. It's got screen-broadcast ability, so literally you're coding, and then you just go: hey, I'm getting this error, what does this error mean here? And you don't even have to explain it — it knows what you're talking about contextually. And it'll look something like this: [listening] "Who's smarter, Christopher or Albert Einstein?" "No, no, no — I don't want to talk about Carl, I want to talk about whether Christopher is smarter than Albert Einstein." This was not planned, clearly. That was just one of its awkward moments. One quick thing I want to show that's kind of funny — I'll tell you what voice you're not using.
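For anyone curious what Matt's SendGrid script is actually doing under the hood, here is a minimal sketch of the request a FileMaker Insert from URL step would make. The JSON shape follows SendGrid's v3 mail/send API as I understand it; the API key and addresses are placeholders, and nothing is actually sent here — the sketch only builds the headers and payload.

```python
import json

def build_sendgrid_request(api_key, from_email, to_email, subject, body):
    # Mirrors what a FileMaker "Insert from URL" step posts to SendGrid's
    # v3 endpoint: POST https://api.sendgrid.com/v3/mail/send
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "personalizations": [{"to": [{"email": to_email}]}],
        "from": {"email": from_email},
        "subject": subject,
        "content": [{"type": "text/plain", "value": body}],
    }
    return headers, json.dumps(payload)

headers, request_body = build_sendgrid_request(
    "SG.placeholder-key", "me@example.com", "you@example.com",
    "Hello from FileMaker", "Sent via the SendGrid v3 API.")
print(request_body)
```

In FileMaker itself you'd assemble the same JSON with JSONSetElement and pass the Authorization header through the cURL options of Insert from URL — which is exactly the kind of boilerplate the assistant was offering to refactor.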
Oh — I guess I won't be showing my funny thing. So I guess I can tell you what voice you're not using, and that is this voice. One of the updates that came out inadvertently is about Scarlett Johansson, who was, unofficially, the voice of Sky. Honestly, man, it was Scarlett Johansson, and the reason is a great Spike Jonze movie from 2013 called Her, which I rewatched the day before the OpenAI event just to get into the zone. She was the voice of the AI, and really, if you rewatch that movie, all the stuff in it that was supposed to be futuristic is totally possible now — and the GPT-4o announcements were a huge part of that. So, somewhat tongue-in-cheek, they made one of the voices sound like Scarlett Johansson. That was the one I would listen to all the time; I find her voice very pleasing. She is very upset about it, so they took it out, and now Sky — oh, so it's not there, it's something else. Yeah. So what are the voice choices now? I think there were four when I looked earlier — Breeze, Cove, or Ember, something like that. What I would actually like to see is for them to let us use our own cloned voices in there as well, so I can make mine sound like you, you can make yours sound like me, whatever. A quick thing I want to point out that's interesting: if you go to the "Hello GPT-4o" page to see what they announced, there's a section they didn't even cover in their presentation — an exploration of capabilities — and you can click through and see a bunch of interesting stuff. Oh wow, write poems, my favorite thing. 3D object synthesis: they gave it a flat object and it turned it into a 3D one. Like you talked about Flash earlier — this was like Flash in
the '90s, when the big thing was 3D spinning logos. So you can create these 3D outputs as well. But the one I thought was so interesting is lecture summaries. In this example they uploaded a 45-minute video of a lecture and asked it to summarize it, which is table stakes for AI integration — a great use case. Yeah, we do this for every single standup of every single Zoom call we do, and we actually have it identify Jira tickets and updates, who's assigned to what, what the blockers are, dates, all that kind of stuff. It's really like Fathom — they actually just changed their model, but great call — if you used Fathom for Zoom, it did this for every single call: transcribed the whole thing and gave you a summary and searchability. Yeah, and you have control over what type of summaries you want; that's where your prompting comes into play. Now, here's why I want to bring this up — not that capability, but the fact that this is a 45-minute-long video. That means it's roughly 750,000 tokens of input. We just saw that GPT-4 was, what, 128,000 tokens. So what I think I'm seeing here is that the token window in GPT-4o has potentially, quietly, also been expanded. So I would suggest people try their own experiments where they upload videos — no one's even said you could upload videos into ChatGPT before — so I would actually try it. I actually uploaded our previous podcast in here moments ago. And now we're showing the refactored FileMaker script. You brought something up earlier that you thought was kind of interesting: it introduced a best practice worth mentioning. Oh yeah — so I had a thought. We started out this whole thing
with you saying that LLMs are not databases. But I'm asking ChatGPT — I have a question: are LLMs databases? Let's see what it says. "No, language models like me are not databases. Here's a brief explanation of the differences…" Okay, so it turns out you're right — unless it's lying about itself, I don't know. It probably just looked this up from some storage thing. An ironic hallucination. Too bad we couldn't have heard Scarlett Johansson say that instead, but those days are behind us. I bet there's a way. So: we've talked about how they're not databases, they're language models — I think that's really important for the people who watch and listen. We also talked about all the stuff that came out in the last week. You and I do podcasts every two weeks — I think we should say that — and there'll be a bonus episode: you're heading off to Berlin. We have one more topic, but you're heading to Berlin, and you'll be interviewing some folks who'll get to be on the bonus episode. Then you and I will be back at the end of the first week of June, and — we can't reveal it, but we're going to have a special guest talking about another huge announcement that will have come out in this time period. So we've talked about all these announcements; now we're going to bring things back into this space. Just a few days after we record that episode, there'll be another announcement coming out. What announcement is that going to be, man? I'm going to let you spill it, I'm not going to. Oh, I'm sorry — I was just trying to do witty banter. June 10th will be WWDC. One of the things you're going to see in the show notes is this article called "Predicting Apple's AI Play," and I just want to say this is something I logged and wrote over time — I've been taking notes on this for a couple of years now. We're not going to get into the minutiae on it; we would
encourage people to read it, but here are a couple of highlights people should know. Apple has acquired as many as 32 AI startups in 2023 alone, which tops the acquisition lists of tech companies globally — big, big ones. You can read a little about what some of these are; most of them are models that can run on the edge, on a device. Then there were rumors about Ajax, their own GPT, and those rumors quickly went away, and we started hearing about how they're negotiating with OpenAI and even Google to leverage their models. I don't know if that's such great news if it ends up being the case, because they're basically saying: we couldn't get ours working, or didn't want to, or whatever. I think it's mostly their privacy policy. We've talked before about the privacy policy being kind of a problem for them, because they can't train on your data — your data isn't in the cloud, it's only on your device — unless you could train on the device. Some of these other things we won't really get into, but that seems like the big theme. They've dropped a ton of open-source models and papers as well: one of them is this Ferret one they dropped back in October, and a whole bunch of others. A lot of these are for image manipulation, so part of the prediction is that we'll see a lot of on-device features for changing images. And frankly, there's also the MLX platform, which a lot of people don't even know about: in December they released MLX, a machine learning framework inspired by PyTorch, and it's available now — you can go out and build on top of it if you want. All the rest are about running models on very small, low-compute devices. So I think, in general — and I encourage people to read this, there is a ton going on in here — they're leveraging flash memory, and there were some
little leaked-code hints about how they were using OpenAI to summarize stuff for Siri. Siri has sucked for a long time. Yeah, some leaked code — it really has sucked for a long time, and Apple hasn't changed a thing, and that usually tells us a big change is coming, right? They're not just satisfied that Siri is now the worst conversational AI on the planet. I think Siri's going to get really good. Because of the walled garden and its access to the information on the device, I believe you're going to be able to do things like: hey, send a message, see which of my friends are available on Saturday night, and then also check whether we can get into our favorite restaurant — that kind of stuff, based on chats we've had and knowledge on the device. I want way more basic stuff than that, you know? Like, I want to be able to ask where the nearest HomeGoods store is, while I'm in Crete, and not have it tell me about one in Turkey that I can't get to from here. Yeah, and it does that right now — it's really, really bad. The other thing I think we're going to see, much like at the Microsoft Build event, is that at the operating-system level there will be features running locally that only have access to local data, which is going to make those experiences richer. Also, I feel pretty strongly that Xcode, which is one of their few development platforms, is going to have Copilot-like assistance inside it. A prediction, but that one's super easy. And even iWork, because in February 2024 Apple reserved an iWork-related AI domain.
So we could expect that maybe they'll touch all of those products. One product that will not have a copilot — a product technically owned by Apple, whether Apple's aware of that or not — is the platform that will not have any copilot assistance at all. And that is why we as a community, just to hit this point one more time, want to take it upon ourselves to create our own copilot, because Claris will not do that for us. They are going to participate, don't get me wrong; they're not resisting — it's just not on their roadmap. But I think all the Apple products will have this. So anyways, you can go to medium.com — the link is in the show notes — or go to the iSolutions site and click on Articles, and this will be the top one that pops up. WWDC is going to be on June 10th at 10 a.m. Pacific; you can watch it online, the link is also in the article, and you can see my predictions for what's going to happen. I think that's probably something we'll revisit on a future episode. So yeah, we did the prediction episode, and we'll do a "what actually happened" episode. Exactly. So again, every two weeks we come out with one, but we'll have a bonus episode while you're in Berlin. Do you want to talk a little about why you're going to be in Berlin? There's a FileMaker conference called dotfmp; this is the first time I'll have gone to it. There are several conferences in Europe, most of them in English, so hey, people in America: get your passports and come to Europe. Are any of them streamed? I think they're all recorded — I'm not exactly sure what the specifics are for the dotfmp one — but the sessions are recorded and you can pay a fee to watch them later. And if you go to the conference, I think watching
them is included. So that one's next month. Then in October there's Rome FileMaker Week — go to that one just to be in Rome, because Rome is totally amazing, and it's also a great conference. I'm giving a talk there on "Jason," because I love JSON a lot. What about "J-son" — would you ever think about talking about that? Sure, if you want to call it that. I did a bunch of research and found that the guy who invented it calls it "Jason," so I switched; I'd been calling it "J-son" all these years. And then in November — Malmö, I think, in Sweden, another amazing city; Europe's great — is the EngageU European conference. I think they're all around 100 people, maybe a little more. I think Claris is very supportive; Claris shows up and is present at them. Ryan McCann is actually going to the Asia-Pacific one — I just saw that on LinkedIn, which is interesting. I've been to the Asia-Pacific DevCon a couple of times; they've had me come and present there, which is a real honor for me. I won't be able to make it this year — I have some conflicts with the professional football schedule that prohibit me from traveling that time of year; that's just how it works. But the interesting thing is you're going to be talking to people — is it the Berlin conference where you'll be doing interviews and putting them up on the next podcast? All the conferences I go to, I'll do interviews, yeah. Awesome. I'll try to do video this time. Yeah, last time I did audio only, just recording from my phone, and people were like, hey, where's the video? I'm like, oh yeah, I should probably do that; people seem to love the video. And then as a reminder — oh, actually, I will say that we're recording this on Friday, May 24th, a little peek behind the curtain, and next Thursday, May 30th or something
like that, I'm doing a webinar on basic prompting. You can sign up for free — go to claris.com, or the Claris Community, and look for "basic prompting principles." If there's anybody interested in starting from scratch and getting some tips, I'm going to focus specifically on how to bring your own data into the prompt and how you can do interesting things like transform it. These are the same techniques you can use whether you're making API calls or working with things like ChatGPT. I want to talk about that a lot more as we podcast: give more practical tips and examples, talk about real-world cases of things you guys do. It's fascinating. Oh, absolutely, we'll definitely get to that. I did an Engage session that had like 25 different examples of real-world stuff we've done for our clients, and I've almost doubled that in the new version of the presentation. So I'd love to talk about some of those, maybe even show a couple, especially when it comes to the long-context-window stuff and the multimodal stuff — just amazing, really incredible things you can do with that. So that's what's coming soon, and then on the week of the fourth we'll have a special guest talking about something very special, and we definitely look forward to that conversation. And please go back and listen to some of the previous episodes — they're not all date-topical; we've talked about some of the basic concepts of prompting and context and a lot of things we've alluded to here. So please, like and subscribe, right? You did it — yeah, I beat you to the punch on this one. I talk about how I don't want to say it — everybody's got their own version, like "pound that like button," whatever it is. Yeah, exactly — they're all clever,
like they're The Brady Bunch or something. But anyways, we really appreciate everybody listening and tuning in. We endeavor to keep this going every couple of weeks, with all sorts of really important and topical things related to Claris and AI. So, Matt, my friend, this is my favorite way to start a day, and I would assume one of your least favorite ways to end one. It's a very good way for me to end a day — and now I get to go to dinner at some lovely restaurant. Exactly. So thank you for spending time with me, and thanks, everybody, for listening and checking us out. We promise to keep this rolling. So, from Chris and Matt: thanks so much for tuning in to Claris Talk AI. All right, thanks everyone.
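As a closing footnote on the "bring your own data into the prompt" technique Chris's webinar covers: the core move is just assembling your found-set records into the prompt text before making the API call, so the model answers from your data rather than its training. A hypothetical Python sketch — the record fields and prompt wording here are invented for illustration, not from the webinar:

```python
def build_prompt(question, records):
    # Flatten each record into one labeled line so the model can cite fields.
    context_lines = [
        " | ".join(f"{field}: {value}" for field, value in record.items())
        for record in records
    ]
    # The model only "knows" what lands in the context window:
    # instructions first, then the data, then the user's question.
    return (
        "Answer using only the records below.\n\n"
        "Records:\n" + "\n".join(context_lines) + "\n\n"
        f"Question: {question}"
    )

invoices = [
    {"id": "INV-101", "client": "Acme", "total": "1200", "status": "unpaid"},
    {"id": "INV-102", "client": "Globex", "total": "800", "status": "paid"},
]
prompt = build_prompt("Which invoices are unpaid?", invoices)
print(prompt)
```

The same assembly works identically in a FileMaker calculation with List and JSONSetElement; only the string-building syntax changes.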