Create and Monitor LLM Summarization Apps using OpenAI and WhyLabs

Captions
Hello, I think we should be live now. As always, if you're already watching, please let me know in the chat that you can hear me; I like to double-check, and we'll wait a minute before we get going. Feel free to say hello, share where you're watching from, and if you want, tell us a bit about what you're working on, especially if you're building something with a large language model. Let me share my screen and get set up. It looks like we're live on YouTube and LinkedIn, and the email went out to everyone on Eventbrite. I'm streaming from Seattle, Washington, and I'm excited to travel to New York for the first time next week, so if you have any recommendations, ping me or drop them in the chat. Someone said hello from Poland, and someone from Italy, welcome! It's fun doing virtual events and getting people from everywhere; I haven't been to either country yet, but they're on my list. Someone says it's 7:00 p.m.
in Poland? Okay, that's not too bad. Thank you for joining me in your evening. We'll get going in just a minute; people are still joining in, so feel free to say hello in the chat, share where you're watching from, and tell us what you're working on around machine learning or LLMs. Maybe this will be your first app with OpenAI, which could be fun too. Someone said Warsaw is also very welcoming; it's a great place, and I'm hoping to travel a lot more in the coming years.

We'll start at 10:04 my time. A quick heads-up on the format: we'll go through some slides, then move into a Colab notebook and run code. You're more than welcome to follow along, and I encourage it, you'll probably have more fun and learn more that way, but if you don't want to run the code or don't have an OpenAI account right now, you're welcome to just watch.

While we wait, I'm going to share some links. If you're watching on YouTube they're also in the description below, so if you come back to the recording you can find them there; I'll copy them into the chat as well. First, you'll want a WhyLabs account if you want to follow the whole thing: we're going to build a summarization app, evaluate some metrics with an open source library called LangKit, and write those metrics to an AI observatory, in this case WhyLabs, using LangKit to extract them. The account is free, no card required on sign-up; you just verify your email and you're good to go. The other account, which I don't have a link for here, is OpenAI: we'll be using the OpenAI API today, so you'll want an account so you can grab an API key when we run our model. You can also check out LangKit on GitHub, and we always appreciate a star on our open source work. We'll run code in the Colab notebook I just shared, and I'll share it again at the end of the slides. Finally, I'll share a Slack group, which is a good place to ask questions later; I won't see questions posted there during the workshop, but if you have questions afterward about anything we covered, about your project, or about ML monitoring in general, or you just want to keep up with AI resources, it's a fun community to join. Someone said "I need it," I think in response to me saying I want to travel more. I see more people have joined in, so
we're going to get going through the slides. There's just a little time on slides to catch everyone up on the concepts and talk about what we're going to build, then we'll go into code and see things hands-on. Again, follow along if you want, or just watch. This is a workshop on creating and monitoring summarization apps: we'll use the OpenAI API, an open source library called LangKit that we maintain at WhyLabs, and the WhyLabs AI observatory to look at metrics. We'll cover the theory on slides and then implement it all in a Colab notebook, so no matter your skill level you should be able to take the notebook, run the code, and hopefully have a lot of fun. If you've built a summarization app before, let me know in the chat; it's always fun to hear about the experience of people watching. And real quick, let me share the links one more time in the LinkedIn stream since they didn't post over there: the WhyLabs account, the Colab notebook, and the Slack channel.

A quick introduction about myself first. I'm Sage, a machine learning and MLOps evangelist at WhyLabs. At WhyLabs we build tools for AI observability and ML monitoring, typically to help you discover issues faster and evaluate your models better, either in production or before you deploy them, especially with large language models like we'll see today. Basically, we're on a mission to make AI better. Over the last decade I've worked as a software or hardware engineer, mostly with agencies and startups in Seattle, Washington and in Central Florida, and in general I love making things with technology. If you want to stay connected, the Slack channel or LinkedIn are good places; you can ask me questions there later, or just talk about tech, Seattle coffee shops, or places to visit in New York next week.

About you: again, feel free to say hello in the chat; it's more fun, not only for me presenting but for everyone watching, to see who's here. Share where you're watching from, what you're building or interested in building with large language models, even if you're not building it yet, and what kind of models you'd want to see in another workshop. Today we're building a summarization app that takes in a chunk of text and returns a shorter, more concise summary. What other kinds of applications would you like
to see a workshop on next? That could be chatbots, building an agent, meeting summarization; there are a lot of interesting use cases. Raymond said good morning and that he missed the book club yesterday; we actually had to cancel it on my end, so you didn't miss anything. And I'll share the Slack link again if you want to do a more permanent introduction.

On setup: I'll cover this again when we get to the code, but the only things you'll probably need are the free WhyLabs account, again with no card required, and an OpenAI account, since we're using the API and you'll need an API key. The links are in the chat and in the description below.

So, building a summarization app with OpenAI; here's the little graphic of what we're doing today. We'll use the OpenAI API, which takes a user prompt; in this case the application is summarizing text. We'll see some examples, but I encourage you to find an example you want to play with and bring your own data, it's more fun that way. We'll also pass a system prompt, starting out simple, that tells the model what to do, in this case to summarize text; you can get really specific with it, and we'll see that in action shortly. Then we can collect telemetry and see how the app is performing on various metrics. There are different things you might watch on a summarization app: maybe you want to stay under a certain character count because your summaries should always be very short, or you want the readability score to be high so the summary is readable for almost any audience, for instance if the input is hard-to-read, technical text and you want to bring it down into language more people can understand. These are metrics we can track out of the box, and then we can observe them over time: if we change the system prompt, or a completely new type of user starts sending new kinds of data, how does our model behave? We can monitor that over time, make sure the model keeps doing what it's supposed to do, and improve it over time.

Right now a lot of people use system prompts to change the behavior of their LLMs. With OpenAI you couldn't even fine-tune GPT-3.5 until recently (I forget whether you can on GPT-4 right now), so you're telling it "summarize this text," and you'll probably want to improve that system prompt over time as you learn how people are actually using your app.
When I talk to people right now, I say: cool, you're changing your prompt, how do you know it's better? A lot of the time it's a shoulder shrug: "Well, I added more detail, I think it's better." They test it on one or two examples they typed in themselves, but they don't really see how it's performing in production over time, and that's what we're going to look at today. You could change the system prompt, or even make multiple API calls with, say, five different system prompts, monitor them all at the same time, compare them to each other, and select the best prompt over time, which is pretty cool. I think that's the future of prompt engineering, in my opinion; we'll see where it goes, but you'll be making these changes, monitoring over time, and potentially running multiple prompts as a kind of shadow deployment, where only one actually returns the response to the user but you can see how the other prompts would have performed.

AI observability at a high level looks something like this. You have your pipeline; today it's just our OpenAI model, but in production you'd probably have a whole MLOps or data pipeline where you're serving, retraining, and so on. In that pipeline you extract some telemetry; in this case it comes from the prompt and response of a large language model. The way we work at WhyLabs and with LangKit, the data extracted is privacy-preserving: it doesn't contain your raw data, it only creates summary statistics, things like sentiment or reading level. With LangKit those are out-of-the-box metrics, and you can also easily add custom metrics. This runs in your Python environment, so no raw data leaves that environment; you pass only the extracted metrics to an observability platform, in this case WhyLabs. That's key for a lot of industries, and especially for healthcare or the financial sector, where teams can't or won't move raw data out of their on-prem environments but are okay moving summary statistics. For tabular data you could think of those statistics as the min, the max, and the mean; we'll see what they look like for language metrics. Inside the observability platform you can look at all of your data, get reports and dashboards, and set up alerts and notifications, so if something changes drastically you get alerted, or you trigger a workflow, for example having annotators label new data and retraining a model automatically when something like data drift is detected. Today we're mostly looking at evaluation metrics, but tomorrow we're doing another workshop, and I'll share the link in a bit, focused more on security metrics and on setting up things like guardrails and alerts.
So what is ML monitoring and AI observability? We saw it at a high level: it's common to monitor model inputs and model outputs, which today are just the prompt going into the model and the response coming out. In a bigger pipeline you might be doing data cleaning earlier on, and you can add monitoring basically anywhere in the pipeline for data quality or schema issues. Data drift is a common issue, where the data going into your model no longer matches the distribution of the training data. For large language models that could look like this: you expect a certain group of users to talk a certain way, then a new generation of users arrives with different slang words and your model doesn't understand them as well. You could monitor new data against a corpus of pre-existing data and notice there are new words showing up and people are using the model differently. We have a saying: bad data happens to good models. You can evaluate all you want before production, but once you're in production, chances are you'll see some interesting data coming into your model. Beyond data quality and drift, you can also monitor evaluation metrics, and that's what we're doing today: we'll look at language metrics and use them to potentially improve our model. Again, tomorrow's workshop is more focused on security metrics, and we'll get a taste of those today, but the focus here is monitoring our summarization app and potentially improving it.

Tying this back to large language models: you're probably already familiar with them, and you've probably at least used ChatGPT. We're looking at summarization specifically today, but similar techniques apply to agents, chatbots, Q&A, and more; if there's a type of model you'd like a workshop on next, let me know in the chat. Someone said they're watching from the Netherlands and building conversational chat agents in specific domains, medical in their case, and also using LLMs for clustering radiology reports. That sounds really interesting, thank you for sharing; it's awesome seeing where machine learning is being used, and the healthcare sector is especially interesting.

Some common pain points with LLMs: hallucinations, where the model gives irrelevant or inaccurate responses; prompt engineering, which is what we'll see more of today, where you tell your model how to behave, it's hard to track those changes over time, and it's currently probably the main driver of most LLM behavior; and output validation, which we'll look at more in tomorrow's workshop.
We'll see a little of output validation today too: we're extracting metrics, and on the responses you can ask things like, is there PII in here? Is the sentiment really negative, and maybe I want to tweak it? How do you solve these at scale? You can set up guardrails: extract metrics, like we'll see in a second, and if they cross a threshold, say there's a high probability that a prompt looks like a jailbreak, you can decide not to pass that prompt to the model, or pass it but not return the response to the user. You can make sure prompts and responses belong to the category the model was created for: if we're building a summarization app and people are asking it for medical advice, maybe we don't want the model doing that, and if you have a tax chatbot and people ask it for weird stuff, maybe you don't want that either. And security comes up a lot, where people want to make sure no PII or other sensitive information comes out of (or goes into) the model.

A story I always tell: I was building a little chatbot for an online store, pretending to be an angry customer, and the model gave back a phone number; I never told it to give a phone number, it just made one up. I found that because I had general observability set up, not a guardrail: it flagged that a pattern matched a phone number, I thought that was weird, and then I found this made-up phone number my model was handing out. Using general observability, where I can look at these stats on both the user side and the response side, I was then able to go set up a guardrail. The observability side is really good for keeping an eye on things, and it tells you how to adjust your evaluation metrics and your guardrails.

Solving this at scale, LangKit looks something like this: your prompts go into basically any large language model, and LangKit, which is easy to install in any Python environment, extracts metrics from the prompts and responses (you could also do just one of them). With that you can do things like discern quality and sentiment, and enforce things like response quality, PII leakage, and the evaluation we'll look at today. LangKit is open source. It has integrations, but most of the time you won't even need an official one: it's easy to integrate with almost any model or LLM I've used, as long as you can get the prompt and response into something like a dictionary or a dataframe. And it's extendable: we'll see the out-of-the-box metrics, but you can add your own custom metrics easily, just import a couple of things from the open source library and put a decorator on a function, and you can start extracting metrics from whatever that function returns. I'll link to some resources for that later as well.
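As a rough illustration of that extendability, here is a sketch based on whylogs' experimental dataset-UDF registration, which is the mechanism LangKit's custom metrics build on as I recall it; treat the module path, decorator signature, and metric name here as assumptions to verify against the current docs:

```python
import whylogs as why
# Assumed import path for the experimental UDF registration API.
from whylogs.experimental.core.udf_schema import register_dataset_udf, udf_schema

# Register a custom metric computed from the "response" column of each logged row.
@register_dataset_udf(["response"])
def response_word_count(text):
    # `text` is a dict/dataframe-like object keyed by the requested column names.
    return [len(r.split()) for r in text["response"]]

# udf_schema() builds a schema that includes registered UDFs, so the custom
# metric gets profiled alongside the built-in ones.
profile = why.log(
    {"prompt": "Summarize this text ...", "response": "A short summary."},
    schema=udf_schema(),
).profile()
```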
Out of the box we have metrics like response relevancy; has_patterns, which by default looks for things like credit card numbers, phone numbers, and Social Security numbers, the kind of common PII you might not want coming out of your model, or even going into it, which can be a big security concern too; plus sentiment, toxicity, jailbreak similarity, topics, difficult words, reading scores, and more. We'll see all of these in action in a few minutes. LangKit works like this: a prompt comes in and we extract metrics from it, a response comes back and we extract metrics from it too, and then we can evaluate model performance, monitor over time, or even set up guardrails on specific metrics.

Just to show how easy it is to use: you import llm_metrics from LangKit and whylogs as why (whylogs is our other open source library that LangKit is built on, so if you're working with data that isn't language, like tabular data or computer vision, you can use whylogs to log metrics). You initialize a language schema, which by default contains all of those out-of-the-box metrics. Then the line in the middle is LangKit working with whylogs: we pass in a prompt and response as a dictionary with "prompt" and "response" as the keys, pass in the language schema, and it creates a profile containing all of those metrics, which we can use to monitor over time.
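The slide's example boils down to something like this; a minimal sketch assuming the documented llm_metrics.init() pattern and the "prompt"/"response" dictionary keys described above:

```python
import whylogs as why
from langkit import llm_metrics  # LangKit's out-of-the-box LLM metrics

# Build a whylogs schema that extracts the language metrics (sentiment,
# reading level, has_patterns, etc.) from prompt/response text.
schema = llm_metrics.init()

prompt_and_response = {
    "prompt": "Summarize this text ...",
    "response": "The paper introduces the Transformer architecture ...",
}

# Log the pair against the language schema; the resulting profile contains
# only summary statistics of the extracted metrics, never the raw text.
results = why.log(prompt_and_response, schema=schema)
profile = results.profile()
```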
This is a screenshot from WhyLabs. WhyLabs will also automatically detect interesting things going on and give you insights about the data that was uploaded; we'll see this in action very soon. You can log this data in a time series view, and you can set up monitors, so if something goes wrong, like a big shift, you get an alert, or you can just use it to eyeball and evaluate your models.

Enough slides, let's see all of this in action. If you want to follow along, create the free WhyLabs account; it should only take a minute, no card required, just email validation. Then open the Colab notebook; both links are in the description below if you're watching the recording later. You don't need this for the workshop, but if you want a promo code for the enterprise version of WhyLabs, which has other neat features like monitoring smaller batches of data (hourly alerts instead of the daily default), adding team members, and adding more projects, you can fill out the form linked in the notebook.

Once you open the Colab notebook you should see something like this, and I'd love it if one person in the chat could confirm it opens; it's my first time running this workshop and occasionally there's a sharing issue. I double-checked it, but I want to triple-check by asking. Someone said it works, awesome, thank you for letting me know. There are some other links in the notebook too: if you want to dive deeper into the models behind these metrics, feel free to click through; the project is open source, so you can look at the models behind the scenes and even adjust them or add your own. One part I didn't update isn't accurate: we're using the OpenAI model, and either way we don't need a GPU runtime, so I'm going to delete that now before I forget.

What I recommend is going to File > "Save a copy in Drive," which creates your very own copy, so don't worry about messing it up; in fact I encourage you to mess around with the code, especially the prompts, or bring in your own text to summarize, like your favorite movie plot from Wikipedia or IMDb, and see what happens. The notebook is set up to be interactive so you can use your own data easily. You can bookmark the original link if you want to go back to it. For part of this you'll want the free WhyLabs account and the OpenAI account to get API keys, and there are also links to LangKit, whylogs, and the Slack channel for questions later.

This first cell is a code cell. If you haven't used a Jupyter notebook before, Google Colab is basically Google's way of hosting a Jupyter notebook, which lets us write code, documentation, and visualizations all in one place; it's a really great tool. If we hit run on this first cell, it takes a few seconds to spin up an instance, and then we see "hello world." You can also run cells by hitting Shift+Enter, and you'll probably see me doing that a lot.

Let's run the next cell, which pip installs the libraries we're using today: openai, gradio, and langkit. By default Colab comes pre-installed with a lot of libraries like TensorFlow and PyTorch, but we need to install a couple it doesn't have today.
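For reference, the install cell is essentially a one-liner along these lines (the package names match what's mentioned in the walkthrough; whether you need the full `langkit[all]` extra depends on which metrics you use, so treat that as an assumption):

```python
# Run inside a Colab/Jupyter cell; installs the three packages used in this workshop.
# Some LangKit metrics may require the heavier extras, e.g. `langkit[all]`.
%pip install openai gradio langkit
```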
We're getting a couple of pip errors here; Colab can be a little weird installing packages because of the pre-installed ones, and I think someone mentioned they're seeing pip errors too. Gradio in particular can misbehave in Colab; Gradio is a neat little tool that lets us very quickly create an interface for interacting with language models, or machine learning in general, and we'll use it with LLMs today. It always works great locally, but in Colab there are sometimes conflicting packages. Don't worry if we do hit an error, I already have a backup plan: there's a code cell toward the end of the notebook where we can skip the Gradio part and run another cell instead; it's not quite as cool, but it still totally works. So let's ignore those errors and see if everything imports; if it imports, it will probably work, and an import error on Gradio is the only thing I've seen recently. We're importing openai, os, datetime, gradio, whylogs, and LangKit. Colab is asking about enabling third-party widgets, I think because Gradio creates a little widget inside the notebook; I'm going to ignore that for now, and we may have to come back to it if Gradio doesn't work. Everything imported, and then we set our pandas dataframes to display all columns; we'll see why in a second.

Now I want to set up my authentication and project keys. I'll go to my OpenAI account and create a new key; I'll call it "sum app workshop," but call it whatever you want. This is the part where you do need an OpenAI account if you want to follow along right now; otherwise keep watching and come back to the notebook once you create an account. I'll copy the key, go back to my copy of the notebook, and paste it into the OpenAI key placeholder.

Next, some values from WhyLabs. If you created a free account and you're on the home page of the app, you'll see your projects; I think new accounts start with a couple of toy projects. Hit "Create resource," which takes you to the model and dataset management page. I'm over the number of free models on this account, so I'll hit "Edit models and datasets" and delete one; you probably won't need to do that with a new account, but if you do, that's how. Now I'll create a new model, also called "sum app workshop," and for resource type I'll select "Large language model," which changes how we view the data in WhyLabs and enables some LLM-specific features. The default batch frequency is daily: the profiles we upload get batched into daily buckets, though on the enterprise version you could do hourly as well. My new model is model-304; on your account the number will probably be something like 2, I've just created a lot of models here. Copy that ID into the model ID placeholder in the notebook, and make sure you don't accidentally copy any newlines or spaces; it should just be a string. Next I'll grab my WhyLabs API key, so back in WhyLabs I'm
going to the Access Tokens tab on the same page. I'll create a new API key, also named "sum app workshop" (I misspelled that), hit Create Token, grab the token, and paste it into the placeholder; again, watch out for stray newline characters or spaces. The org ID is actually at the end of the token string, but you can also always copy it from WhyLabs, and I'll paste it into the org ID spot. I'd have to double-check, because we're updating whylogs so that you might be able to set just the API key without that top part, but we'll run it this way to be safe. I'll run this code cell, and now my OpenAI key and my WhyLabs project are set up. In the next cell we'll define how our summarization app works, using GPT-3.5 Turbo, but first let me pause in case anyone needs me to go over getting those access tokens from WhyLabs again.

Quickly, one more time: from the home page (you can always get back to it by clicking the WhyLabs logo), hit the "Create resource" button, or open the hamburger menu, go to Settings, then Model and Dataset Management. Type in a new model name (I'm at my max, so I'll delete one via Edit), and copy the model ID into the model ID placeholder in the notebook. From that same page, go to Access Tokens, create a new API key, copy the value from the blue box, and also copy the org ID over. Was anyone able to get all their API keys? I know it's a lot to go through, especially the OpenAI one if you didn't already have an account, so I'll pause for a second to get a drink of water and give people a moment to grab them. Somebody said it worked, awesome, glad you were able to follow along. I'm always curious how many people watching already have an OpenAI API account; if you've been experimenting with LLMs, you probably do.
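Putting the keys together, the configuration cell looks roughly like this; a sketch using the pre-1.0 `openai` client style and the environment-variable names whylogs' WhyLabs writer conventionally reads, with placeholder values you'd replace with your own:

```python
import os
import openai

# OpenAI API key for the summarization calls (placeholder value).
openai.api_key = "sk-..."

# WhyLabs credentials: org ID, the model/dataset ID created above (e.g. "model-304"),
# and the API key from the Access Tokens page. whylogs' WhyLabs writer reads these
# environment variables when uploading profiles.
os.environ["WHYLABS_DEFAULT_ORG_ID"] = "org-..."
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "model-304"
os.environ["WHYLABS_API_KEY"] = "..."
```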
Moving on, let's create a function called summarize_text. It takes a user prompt, which is the text we want to summarize, and an optional system prompt parameter that defaults to None; the system prompt is the specific instruction telling the LLM what to do with the user prompt. If you haven't used the OpenAI API yet, this is roughly what it looks like with GPT-3.5 Turbo. There are a couple of different OpenAI models you could use, for example GPT-4 if you have access, and some other completion endpoints, but I've found the GPT-3.5 chat model does a pretty good job, if not better than the other endpoints besides GPT-4. You give it a system prompt, which affects how the model behaves, and a user prompt, which is what that behavior gets applied to, in our case the text to summarize. Then we create a dictionary called prompt_and_response and return both the prompt and the response from the function. That makes it really easy to log the data with LangKit, like we saw on the slides, but even without LangKit you might want to store prompts and responses somewhere, for example returning this and saving the JSON to an S3 bucket, so you can go back and look at your data later or train a new model with it; it depends on how you're set up.

Let's test it (after I run that code cell). We'll pass in these instructions: "Summarize this text. Use only three bullet points. Explain like I'm five wording." I encourage you to adjust the wording however you like. Then I'm passing in this really long string, which is the abstract of the "Attention Is All You Need" paper, the paper Transformers are built on, and most GPT models are Transformers. Again, play around with the instructions, or paste in different text entirely, like your favorite movie plot, and see how it summarizes it.

Let's run this: we pass the test prompt and the instructions defined above, and print just the response (remember the function returns the prompt and response as a dictionary, so we print only the response to avoid seeing that big chunk of text again). It came back with: the Transformer is a new type of network architecture for sequence transduction models; it uses attention mechanisms instead of recurrent or convolutional neural networks; the Transformer performs better, is more parallelizable, and requires less training time than existing models on machine translation tasks. Maybe I could improve this: it's super short, and maybe it isn't really "explain like I'm five," these are still pretty complex words; if I didn't know what a CNN was, I might be lost. But it did a good job of summarizing into three bullet points. What if we ask for four bullet points instead? Again, I encourage you to mess with this prompt, it's what adjusts the behavior of our LLM. With four, it gives us a fourth bullet point with a bit more information.
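The function described here looks roughly like the following; a sketch using the pre-1.0 `openai` Python client's ChatCompletion API, which is what notebooks from this period typically used (if you're on the newer client, the call looks different):

```python
import openai

def summarize_text(user_prompt, system_prompt=None):
    # Build the chat messages: the optional system prompt steers behavior,
    # the user prompt is the text to summarize.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    response_text = completion.choices[0].message.content

    # Return both sides as a dict so the pair can be logged with LangKit
    # (or saved elsewhere, e.g. as JSON to object storage).
    return {"prompt": user_prompt, "response": response_text}

# Example call with the instructions used in the walkthrough; test_prompt
# would hold the abstract (or any text you want to summarize).
instructions = ("Summarize this text. Use only three bullet points. "
                "Explain like I'm five wording.")
test_prompt = "..."  # e.g. the 'Attention Is All You Need' abstract
result = summarize_text(test_prompt, system_prompt=instructions)
print(result["response"])
```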
What's something fun you think we could ask it to do? Let me know in the chat. Someone asked: does WhyLabs work with open source LLM models? Yes, it should work with basically any large language model; all you're doing is getting the prompts and responses. Today we're showing OpenAI, but we've done a lot of workshops and have a lot of material on Hugging Face models as well; in fact, if you scroll to the end of this notebook there's a link to an example of doing this with Hugging Face. As long as you get prompts and responses into a dictionary or dataframe format, you can do everything we're doing today. Someone said their current OpenAI key isn't working; you might need to create another one, maybe the old one was deleted. You can also look up different models to use here: GPT-3.5 Turbo is great for a lot of use cases and it's cheap, I forget the exact price but it's fractions of a cent per call. Someone else said they got "you exceeded your current quota"; interesting, maybe you've hit a max spend you set on your OpenAI account, or used up free credits (I remember getting something like $20 of free credit a long time ago), so you may need to add more credits.

Let's look at metrics now, and we can keep playing with the prompt afterward. You can adjust that system prompt above, and it will drastically change what comes out of your large language model; for summarization you probably always want something like "summarize this text" in there, plus more specific instructions. Now, which metrics could we use to evaluate what's coming out? Let me know what you think; there can be a whole bunch, and there's no single silver bullet for any specific LLM task that I know of yet, and a lot of what we're working on is finding better metrics for these tasks. Some of the out-of-the-box metrics we saw include word and sentence length and different readability scores, and you can add custom metrics; I'll link to some you might want specifically for summarization, like BERTScore, a popular one we have a blog post about, where you can compare how relevant the response is to the prompt: even though we dropped a lot of text, is the summary still basically saying the same thing? There are some really interesting metrics you can bring in as custom metrics that can give genuinely valuable feedback on your LLM.

Let's look at the out-of-the-box ones first. We import llm_metrics from LangKit and whylogs as why, and initialize a language metric schema. The first time you run this it can take a while, since it initializes some
models behind the scenes; I think mine may have already run when I imported things earlier. Now let's run the next cell and I'll describe what's happening. We create a variable called lang_metrics by calling why.log, passing in our summary dictionary, the one with the prompt and response, and passing schema=, our language metric schema with the out-of-the-box metrics (again, you could initialize a schema with custom metrics and add things like BERTScore that aren't built into LangKit). Then we take the view of the profile and look at it in pandas, so we get a dataframe in the notebook we can inspect. There are roughly 15 metrics for each of the prompt and the response, plus one for prompt/response relevancy.

The first row is the prompt's aggregate reading level. The cardinality estimate is 1 because we've only logged one thing so far; if we logged a whole bunch of prompts, the cardinality estimate would tell us roughly how many distinct values we've seen, and in real-world use most interactions would differ, so cardinality would go up. Scrolling over, the readability score for this prompt is 14; I think that's out of 100, I'd have to double-check, there are a lot of metrics for me to keep track of, and it's probably low because the abstract we passed in has a lot of complex words. The max, mean, median, and min are all the same number right now because we've only logged one prompt and response; as we log more together, we'd get a distribution where the max is the highest value in that batch and the min the lowest. We'll see what that looks like as we log more over time in a second.

We also have an automated readability index, a character count (which might be useful for summarization), and difficult words. Whether these matter depends on your application: maybe you don't care about readability because you expect an academic audience in the field, or maybe you always want responses around 150 characters, and if they go out of bounds you set an alert, or just observe it over time, which we'll see shortly. There's also Flesch reading ease, which says how easy the text is to read. Right now we're looking at prompt metrics, but all of these exist for responses too: scroll down and you'll see response.<metric> alongside prompt.<metric>. For evaluating how our model performs we'll mostly watch the response metrics, but logging both together over time also gives you a good understanding of how people are using your LLM.
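Viewing the profile as a dataframe, as described above, looks roughly like this; the exact component column names in the whylogs view (counts, cardinality, distribution stats) are from memory and worth verifying against your own output:

```python
import pandas as pd
import whylogs as why
from langkit import llm_metrics

pd.set_option("display.max_columns", None)  # show every summary component

schema = llm_metrics.init()

# The dict returned by summarize_text(); shortened placeholder values here.
prompt_and_response = {"prompt": "Summarize this text ...",
                       "response": "- The Transformer is a new architecture ..."}

lang_metrics = why.log(prompt_and_response, schema=schema)

# Each row of the view is one logged column, e.g. "prompt.flesch_reading_ease"
# or "response.difficult_words"; the dataframe columns hold summary components
# such as counts, cardinality estimates, and distribution stats (min/mean/max).
view_df = lang_metrics.view().to_pandas()

# Pull out just the distribution stats for the response-side metrics.
response_rows = view_df[view_df.index.str.startswith("response.")]
print(response_rows.filter(like="distribution"))
```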
You might learn that people are putting in really complex text, or really simple text, or really toxic prompts, or that the sentiment is always low; it gives you a good understanding of how users are using your app even if that's not the thing you're optimizing for right now. You can also calculate things like jailbreak similarity: are people trying to make your model do something it shouldn't, or trying to extract data out of it? That's more of a security use case, which we'll dive into tomorrow; before I forget, I'll share the link. Tomorrow's workshop is similar but with different apps, and we'll talk about some of the OWASP Top 10 for LLMs and how to monitor metrics for those. Today we'll mostly look at the responses and evaluate our model, but there's a lot you can do to evaluate the prompts coming in and set up guardrails to make sure your model is behaving how it should.

So we have a quick way of seeing our metrics; let's make it a bit more exciting. We've set up our API keys, so we can write these metrics to WhyLabs and visualize them more easily. We import gradio as gr (I think we did that before, but we'll import it again) and create another little function. Again, I encourage you to mess with the text, because we'll be able to see the changes over time in a time series view in WhyLabs. The function takes in the message, just like before, plus a days-to-subtract value so you can backfill data in WhyLabs: there will be a little slider you can drag, where 0 writes data for today and you can go back up to six days, so you could write data as if it were a week ago and build up what a week looks like. We get the prompt and response from our summarize_text function, profile it with whylogs like we've seen, and the only new thing is that we set the profile's dataset timestamp: by default, when we write to WhyLabs, the profile gets the date and time we write it, but we can override that timestamp, which is what we do here if we decide to backfill our data. Then we write the profile up to WhyLabs and also return the response so Gradio can print it.

Run that, and the next cell is the Gradio piece where we create the interface; if you run it you'll see what it does. Let me know if this works for you too: Gradio inside Colab can be a little finicky, though it seems to have been doing well lately, and this is my first time really relying on it for a workshop, so if somebody in the chat can confirm it works on their end, that would be great. If not, like I mentioned, there's a backup code cell that isn't quite as cool-looking but does the same thing on the back end. We get a little message window where we can put in text, and the drag slider for days.
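The profiling-and-upload function plus the Gradio interface look roughly like this; a sketch where set_dataset_timestamp and the "whylabs" writer follow whylogs' documented patterns, and the specific Gradio widgets are my assumption about what the notebook uses (it relies on summarize_text and instructions defined earlier):

```python
import datetime
import gradio as gr
import whylogs as why
from langkit import llm_metrics

schema = llm_metrics.init()

def summarize_and_log(message, days_to_subtract):
    # Call the LLM and profile the prompt/response pair with the language schema.
    prompt_and_response = summarize_text(message, system_prompt=instructions)
    results = why.log(prompt_and_response, schema=schema)

    # Backfill: override the profile's timestamp so it lands on an earlier day.
    timestamp = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(
        days=int(days_to_subtract))
    results.set_dataset_timestamp(timestamp)

    # Upload only the summary statistics to WhyLabs (uses the env vars set earlier).
    results.writer("whylabs").write()

    return prompt_and_response["response"]

# Minimal interface: a text box for the input and a slider for backfill days.
demo = gr.Interface(
    fn=summarize_and_log,
    inputs=[gr.Textbox(label="Text to summarize"),
            gr.Slider(0, 6, step=1, label="Days to subtract")],
    outputs="text",
)
demo.launch()
```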
I'll copy in the text from below, the same abstract as before, but again, I encourage you to grab your own text and mess with the system prompt. The current prompt is: "Summarize this text. Use only three bullet points. Short sentences. Explain like I'm five wording and make it easy to read for anyone." I'll paste the text in and drag the slider to six days, so this will write the data as if it were a week ago; you'll see what that looks like in WhyLabs in a minute. I hit submit, and if all goes well we get a response from our OpenAI model; sometimes OpenAI takes a little while, that took about four seconds, which isn't too bad, but I've had it take longer. Here's our output, and it's actually really cool: you've basically built a whole little app at this point, with an interface to put in text and get a response back; that's why Gradio is nice. Russ said it worked, awesome, thank you for letting me know.

Let's run it again on the same day (you could use a different day if you want; I'm just showing an example on my end), and a third time; the responses are a little different every time, or at least they usually are. Now over to WhyLabs. We've already set up the API key, so data is being written. I'll click into the "sum app" project I just made. There's a summary dashboard here that gets more interesting as more data is uploaded; we can see we're tracking something like 33 metrics, and you'll get information around potential security issues and daily volume. There's a dashboard tab segmented into security metrics and performance metrics, which we'll focus on more in tomorrow's workshop, but we can already see some data coming in, for example prompt sentiment. There's also the monitor manager, which we'll come back to later: you can easily set up a monitor to trigger when your data fluctuates a lot or crosses whatever threshold you define.

If we go to the Insights Explorer tab, we now get insights derived from the uploaded profile (if they aren't showing, click the "Show insights" button). It's saying the prompt sentiment score is 97, indicating overall positive sentiment, and the reading score is 55 out of 100, so the text might be a little difficult to read; the response talked about convolutional neural networks and other complex terms, which probably contributes to the lower score. Then again, maybe 55 is actually fine for our use case if we expect a technical audience, but by default it flags that it could be difficult for people to understand. It also says the response has a sentiment of 67, so still overall positive.
The insights look across all of those metrics we saw before, the 33 coming in, and if any scores look really high or really low when they shouldn't be, it will surface them here, which is a really quick way to understand what's happening. We've only passed in a couple of prompts and responses so far; the more data you add, the more will show up here. This is how I found my model handing out that phone number I wasn't expecting: it said a phone number pattern was found, I went back and looked at my saved prompts and responses, and saw it really had provided one, which was wild to me.

Let's go to the last tab, the Telemetry Explorer. This is where we can see all the metrics from that dataframe. I'll click into Flesch reading ease. It isn't super exciting yet; it'll look cooler once we upload data on different days. Our data shows up on the 19th, about a week ago, because I overrode the timestamp by six days, and we can see the min, the max, and the median for the reading score, as well as some quantiles, so you can see the yellow upper and lower bounds with the median in between. That's because we wrote this three times and the response came back slightly different each time, so there are actually three different values in there: the reading score is really high for one and noticeably lower for another. That might be something to fix in the prompt, to make sure the reading score is always high. If this were the metric you really wanted to optimize for, you could almost use it as a fitness function for your prompts: monitor the prompts over time and change your system prompt to make the output more readable, or look at character count if you always want responses shorter, and so on. Now we're gaining some insight into how our model is behaving.

Back in the app, I'll write a few more profiles, three again, for a few more days. I'll drag the bar to five, so now it writes data as of five days ago. If you had a whole bunch of production data you wanted to get into WhyLabs, you could also just loop through it and subtract days programmatically, which would be easier than dragging this slider every time, but I think this is more fun and more interactive; hopefully somebody is using their own data, like a movie summary, and seeing what the model says.
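If you did want to backfill programmatically instead of dragging the slider, a small loop over the helper sketched earlier would do it; the texts list here is a hypothetical stand-in for real production prompts:

```python
# Write three profiles for each of the last seven days using summarize_and_log().
texts = [test_prompt] * 3  # or a list of real production prompts
for days_ago in range(7):
    for text in texts:
        summarize_and_log(text, days_to_subtract=days_ago)
```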
So again, you could go up and change the instructions, which is what's driving our model to behave a certain way right now, and see how that affects your metrics, or pass in a new piece of text that you want to summarize. Right now I'm just repeating the same text to keep it quick for this workshop, but you could go through and summarize a whole bunch of different movies or a whole bunch of different papers and see what the model's doing (there's a small sketch of what that summarization call roughly looks like just below). Let me do this one more day here, then we'll go look at it in WhyLabs again. Also, any questions so far? Does this seem interesting, like something you'd want to integrate with a model, either during your evaluation period or with a model in production?

So I'll refresh WhyLabs again, and it's kind of interesting: we didn't change our prompt, but we can already see quite a bit of variation in the response readability score. Maybe this isn't the thing we actually want to optimize for in our app, but it could be one of those metrics, or it could be a custom metric too: if you wanted to bring in another metric like BLEU, ROUGE, or BERTScore, you could be monitoring them over time here and figuring out how your model is behaving. So it's interesting, right? We didn't change our system prompt at all, and we're still getting some fluctuation in the readability score that comes out, and that's to be expected a little bit: LLMs are always generating a fresh response, so it's not going to be exactly the same thing every time. I'll do one more here and see if it goes up again on this last data point. And does anyone have an app they're working on? We could potentially talk about metrics for it as well. But again, you can see upper and lower bounds for these.

Then let's go back to our model real quick. I'm going to hit stop, which stops the Gradio app, and let's change this prompt. What do you think would be something that would drastically change it, just so we can see what it looks like? It's okay if it's contrived for this example; in practice you'd probably be slightly tweaking these prompts and making sure they keep making your app better over time. In this case I'll say: summarize this text, use 10 sentences, or sorry, 10 bullet points is what I meant to say, which will definitely change the amount of text coming back, and I'll say robust sentences, again just trying to make it a bit different so we can see what these metrics would look like, and I'll say make it more complex sounding. Usually you wouldn't be trying to make your app worse, you'd be improving it over time, but for this example let's see what happens when we tell it to summarize the text using 10 bullet points, robust sentences, and more complex-sounding language. So I'll copy over the same text, paste it in, and set the day; I'll do one rather than zero, since I don't think I've written to that day yet, to keep the time series going, hit submit, and let's see what comes back.
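Since everything in this experiment hangs off that instructions string, here's a minimal sketch of the kind of call behind the app. It assumes the pre-1.0 openai Python client and the gpt-3.5-turbo model, which are my assumptions rather than the exact notebook code.

```python
# Minimal sketch (assumed, not the exact notebook code): the system message is the
# "instructions" we keep editing, and the user message is the text to summarize.
import openai

openai.api_key = "sk-..."  # your own OpenAI API key

def summarize(text: str, instructions: str) -> str:
    completion = openai.ChatCompletion.create(       # pre-1.0 openai client style
        model="gpt-3.5-turbo",                       # assumed model
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": text},
        ],
    )
    return completion["choices"][0]["message"]["content"]

# Same text, different instructions, very different downstream metrics:
# summarize(plot, "Summarize this text in 3 short, simple sentences.")
# summarize(plot, "Summarize this text in 10 bullet points with robust, complex-sounding sentences.")
```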
Somebody asked if I can give general tips to prevent hallucinations. We actually just published a really good blog post on this from one of our data scientists, and I'd recommend checking it out; it has some really good tips on what you might want to look for and how to monitor and mitigate hallucinations. For example, here's BERTScore, one of the metrics I mentioned earlier: you can check whether the generated text is still similar enough to the source, and from what I've seen it's a really good fit for summarization use cases, a popular one people are using (there's a tiny example of computing it right after this part). There's quite a bit packed into that post, so it's a good one for going deeper into preventing hallucinations.

All right, going back to our weird prompt, or system prompt, where now it's 10 bullet points; this should be much harder to read, so let's see how it affected our metrics over time. Go back to WhyLabs, refresh, and look at the readability score: it's definitely not very high there. I'm actually going to run this a few times so we get the upper and lower bounds as well as the median, so I'll do it three times. Is anyone trying this with any other fun kind of data, whether it's movie summaries like I said, or paper summarizations, or meeting notes, potentially? Now refresh here and look at our data again: the Flesch reading ease value is consistently low now, whereas before it at least had some higher bounds and the median was quite a bit higher for most of these days.

Another good one to look at is probably response sentence count; it's probably going to go way up there at the end, right? This was pretty consistent, it was always at three, since we asked for short sentences; we didn't say a specific amount, but it stayed pretty consistent, and then on this last day it drastically changed. In a real-world use case that could also come from your users interacting with your app in a different way, so you could be looking for correlations between this and someone trying to jailbreak your app, like if it always gives three sentences back and someone found a way to get more out of it. Or maybe you changed your system prompt trying to make it better and didn't realize you got rid of the part that always kept it short, and that was a big feature of your app, and now this drastically changed. So you can look at this chart and understand the behavior of your model from the responses and the prompts.
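Here's that BERTScore check as a small, self-contained sketch. It uses the bert-score package; the texts, the summary-versus-source comparison, and the threshold are illustrative, and you would pick a baseline from your own data.

```python
# Small illustrative sketch: compare a generated summary against its source text with
# BERTScore (pip install bert-score); the texts and threshold here are made up.
from bert_score import score

sources = ["The full movie plot that was passed to the summarizer ..."]
summaries = ["The model's generated summary of that plot ..."]

precision, recall, f1 = score(summaries, sources, lang="en")
print(f"BERTScore F1: {f1.mean().item():.3f}")

# If F1 drops well below the level you normally see for good summaries,
# that's one signal the summary may have drifted away from the source text.
if f1.mean().item() < 0.80:          # made-up threshold, tune against your own data
    print("Summary looks too dissimilar from the source, flag it for review")
```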
Now, if you don't want to eyeball these charts all the time, you can also set up a monitor. It might not work super well on every feature here yet, since we've only put in a couple of examples, but what is this one, sentence count, let's set one up on that. There are some presets you can click on, but I'm going to hit new custom monitor and use the UI builder. You can also go into the JSON configuration, which has a lot of flexibility, so if you can think of a metric you have in here and a way you'd want to monitor it, you can probably set it up in the JSON configuration if the UI builder doesn't have the thing you're looking for. In this case let's just look at data drift, and we'll say non-discrete values, since this is more of a rolling number, like 0 to 31 and so on, and we'll apply it to all columns of the prompt and response dataset we saw today. That's probably going to produce really noisy alerts across all those columns, but you can manually select just one, so if we wanted to look at only character count or something, we could do that. I'll adjust the sensitivity so it only triggers an alert when there's a pretty big amount of data drift, and there are different drift algorithms you can use; Hellinger distance is a pretty good default for most categorical and non-discrete values, so if you don't know the specific one you need, I'd recommend sticking with that (there's a tiny sketch of what that distance actually computes right after this part).

Let's hit next. We're going to use a seven-day rolling window, so it looks at the data from the trailing seven days, and if any of it drastically changes, it should trigger an alert for us. You can also compare against a reference profile, which is really good if you have a training dataset, or a set of golden prompts and responses, and you want to compare new data coming in against it and send an alert if things go way off the rails, or you can compare against a specific date range. I'll hit next; you can set the alert severity, which is just for you, so you know whether it's a high-priority or low-priority alert depending on which metric you're setting it up on, and you can set different actions: email you, send a message to a Slack channel, or set up a PagerDuty integration to tie back into your MLOps pipeline and kick off some sort of workflow action. So we'll hit save, go back to Telemetry Explorer, and look specifically at sentence count again, where was that, the last one, and see what this alert might look like. It's probably going to be noisy on a lot of our other data, but on this one we can see sentence count was pretty consistent for those last five or six days and then went way up on this last day, so we actually have an alert that gets triggered. Now we wouldn't have to worry about checking this chart every day; we can just have these alerts fire when one of the things we're looking for happens.

Someone said their version broke. Does that mean something in the notebook, maybe Gradio? A couple of times I've had the OpenAI API call just hang on me and never return, so I'm not sure if that's what happened to you. I'm going to go ahead and stop that cell. So we saw how we could build a summarization app, and again, I definitely encourage you to mess around with the instructions going into it. If you want something that isn't a summarization app, you could take what we have today, change the instruction to something like "translate this text to a different language", run the Gradio app, and it would do that as well, so hopefully it's a nice little template for getting started building different apps with LLMs and OpenAI. And this code cell, if you're coming back to the recording and Gradio is failing for some reason, you can run it instead; you don't have to do this right now, but it just gives you a little text box where you can paste in the text you want to summarize, plus the days value, so zero would write data for today, and then it does the same thing as the cell above.
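For the curious, here's roughly what that Hellinger distance measures. This is just the textbook formula computed by hand on made-up sentence-count histograms, not WhyLabs' internal implementation or its binning strategy.

```python
# Illustration only: the Hellinger distance behind the drift monitor, computed by hand
# on two invented histograms; this is the textbook formula, not WhyLabs' implementation.
import numpy as np

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    """Hellinger distance between two discrete distributions: 0 = identical, 1 = no overlap."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# sentence-count histograms: trailing window vs. the latest day (numbers are invented)
baseline = np.array([0.0, 0.0, 28.0, 2.0, 1.0, 0.0, 0.0])   # responses almost always 3 sentences
latest   = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 7.0, 14.0])   # suddenly much longer responses

print(hellinger(baseline, latest))   # a large value, the kind of shift that would fire the alert
```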
It's not quite as fun, in my opinion, without the little Gradio interface we had going; I think it's taking a second there with OpenAI, and there we go, we got our result printed out.

All right, that's about it for the material we had planned today, but there are resources here for you to keep learning. And again I'll ask: what type of large language model application would you want to see a workshop on next? You could be pretty specific or more vague, like if you just want to see how to build a chatbot with OpenAI and monitor it, or you could be specific about the type of model, like a Hugging Face model or Google's PaLM, or a chatbot for a specific application; for some reason the one that always comes to mind is a tax chatbot, I don't know why I keep thinking of that. Here are some great links for you to continue learning, and if you want a certification for this workshop, fill out this form. The hallucination blog post, I definitely recommend checking it out; I know I already shared it earlier, but I'm going to share it again. And adding custom metrics is a pretty common use case: we have all those out-of-the-box metrics, which are a great starting point, but you might have other metrics you want to bring in, like BERTScore and so on. Another good example of a custom metric, since we were talking about a translation app, is if you were saying, hey, translate this word to Korean, you could run a language detection model and make sure the response is in the language you were expecting; that might be a really good metric for monitoring a translation application.

Someone asked about using this outside of OpenAI, so here is an example with Hugging Face that I'll share as well. You can use LangKit and WhyLabs with basically any model: as long as you have the prompt and the response, you can extract metrics. We didn't talk about it today, but tomorrow we will: you can also use these metrics locally in your environment, so you can put the prompt and response in a dictionary format, run evaluation checks on them, and use that as a guardrail. For example, with the jailbreak similarity score, if it reaches a certain level we might count that as a jailbreak and not return the response from our model back to the user, or maybe not even pass the prompt into our model (there's a small sketch of that kind of check right after this part). Usually, from what I've seen, people will pass the prompt into the model but then make sure the response doesn't go back to the user, so they can still see how the model behaved.

Someone asked how the public URL works when deploying to Spaces, and someone also asked whether the certification gets emailed to us. Yeah, so you fill out the form and then you'll get an email this week or next week; right now we create them manually, so it can take a little bit of time, probably a few days or early next week. On the public URL, I assume you're talking about the Gradio app here. When this is running, I know I can access it since I'm authorized; I actually don't know whether, if I shared this link, people could access it from anywhere, you might be able to, but I think this is a feature of Gradio, where they make it really easy to build these little apps.
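Here's the shape of that guardrail check. The metric key and the threshold below are assumptions of mine, and the metric values themselves would come from however you compute LangKit-style metrics locally on the prompt/response pair, which isn't shown in this part of the workshop.

```python
# Sketch of the guardrail decision; the metric name and threshold are assumed,
# and `metrics` is whatever dictionary of locally computed metric values you have.
from typing import Dict

JAILBREAK_THRESHOLD = 0.5   # invented cutoff, tune it against your own prompts

def guarded_response(response: str, metrics: Dict[str, float]) -> str:
    """Decide whether the model's response is allowed to reach the user."""
    # keep the model's answer around for review, but don't return it on a suspected jailbreak
    if metrics.get("prompt.jailbreak_similarity", 0.0) >= JAILBREAK_THRESHOLD:
        return "Sorry, I can't help with that request."
    return response

# e.g. guarded_response("Sure, here's how to ...", {"prompt.jailbreak_similarity": 0.82})
# returns the refusal message instead of the model's text.
```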
Hugging Face maintains Gradio now, so a lot of the apps you'll see on Hugging Face are basically just built as a Gradio interface. If you don't know Hugging Face, they make the Transformers library, but they also have a site where you can host models and datasets, and you can host a model behind this kind of Gradio interface and get an actual public URL that will stay live over there. This notebook, though, is running in Colab, which has pretty stringent runtime limits on the free tier; I forget exactly what the limit is, but it will probably cut off. Even if you can share this link, and I'm guessing you can share it with other people, though you might have to be authorized with your Google account, it wouldn't be something you'd want to keep running here and then rely on that link staying live to share with people. Colab really isn't made for that; it's made for temporary, interactive computing, and it'll probably cut you off after an hour or so on the free tier.

Any other questions or feedback? Again, does this seem interesting, was it fun to build a summarization app and look at some metrics you could use to potentially improve it over time, or to see how to monitor it? If you have the notebook, all of these links are accessible later as well. Someone said it was interesting and fun; that's what I aim for, so great to hear, and hopefully you also learned some new stuff. Raymond said yes, awesome. Well, if there are no other pressing questions on LangKit or WhyLabs, I'm going to wrap up the stream, and hopefully I'll talk to some of you later. I don't think I have the link down here, but I'll share the link to the Slack chat again; if you want to ask questions later, that's going to be a good place to do it, there are other people in there besides me, and if you introduce yourself there I'll say hello as well.

All right, I'm going to wrap up the stream. Thank you everyone for attending, and thank you to everyone who asked really good questions or even just told me that things worked when I asked; it means a lot to get a little feedback from people while presenting. I'll talk to you later; if you have questions about anything we covered, connect with me on LinkedIn or Slack and I'll do my best to answer them. Have a good day, everyone.
Info
Channel: WhyLabs
Views: 625
Id: 1f3rAPgiNhY
Length: 75min 30sec (4530 seconds)
Published: Thu Oct 26 2023