Is Open WebUI the Ultimate Ollama Frontend Choice?

Video Statistics and Information

Captions
Hi, I'm Matt. One of the unique aspects of Ollama is its dual nature as both an end-user tool and a developer tool with an API endpoint, which allows developers to program against it. In fact, the GitHub repo homepage lists numerous community integrations, including desktop and web tools: over 30, 40, or even 50 entries. With so many options, it can be overwhelming to choose the right one for your needs. To help with this decision-making process, I started creating a chart and document on my blog at technovangelist.com that highlights the differences between these various tools. However, as I began exploring the first few integrations, I discovered that many of them had significant issues. It felt like I was just highlighting the bad stuff rather than trying to be helpful, and so I decided to create a video series that looks at each of these tools. My intention is to provide helpful insights into each app, highlighting what makes them compelling and pointing out what might be missing from their offerings, from my perspective. Perhaps this will help you find the perfect tool for your needs, or even inspire you to build your own application that combines the best features of multiple tools. Let's take a closer look at some of these desktop and web apps and see what they have to offer. It's important to remember that while I was a founding member of the Ollama team, I am no longer working for Ollama, so these are simply my opinions and not reflective of anything that anyone on the Ollama team believes.

Okay, so let's look at one of these tools. With so many, it's hard to know where to start, so I decided to choose the application that I think is among the favorites for most people working with Ollama. It's the application I see mentioned most often in the Ollama Discord (which, by the way, you can find at discord.gg/ollama), and it's called Open WebUI. It used to go by something like Ollama WebUI, but I think the author wants to be less tied to Ollama specifically.

There are a lot of things to really like about this tool. It's visually very attractive, and it has a few themes, so you can tweak it to be the way you prefer, whether that's light mode or dark mode (and I wish dark mode would die, because it hurts my eyes). It gives me good suggestions about some things I could try, although I wish those suggestions would change occasionally. It also has a very opinionated way of working with prompts, which is interesting; I initially thought this was a little distracting, but I think I came around to love it.

Well, let's take a look. When you first bring up the tool, you're asked to log in. I hate it when tools do this. If something's running locally, don't make me sign in. If you want to provide this layer as an option for those who want to host it, great, but make it an option. In this UI we have a few different sections. Front and center is the chat window; at the bottom we can enter a message and press enter, and then see the results in the main window. Right up at the top we can choose a model. Now, this is a pretty critical step, so I wish it wasn't so visually hidden. On the left we see a big button for a new chat, and then, curiously, sections for model files and prompts; we'll come back to those later. Then there's documents, and below that is a list of chats you've had, along with a search box for searching those chats.

Let's ask a question: why is the sky blue? This is my go-to question, and my model is set to Phi-3, which is a tiny model that is performing beautifully for so many use cases. Pretty quickly we get a pretty amazing response. But I'm already getting ahead of myself a little bit. Let's go back to the web page and try to figure out how to get this thing installed. We can go to ollama.com and then click on the GitHub link. This is the repo readme; scroll all the way down to the end, well, almost all the way to the end, and we get to the top of the community integrations list. Look at the web and desktop tools, and here is Open WebUI at the top. This takes us to the GitHub repo for Open WebUI.

Looking at the readme, it looks like an incredible tool. I see local RAG integration, an intuitive interface, code syntax highlighting, theme customization, Markdown and LaTeX support, web browsing capability, and so much more. Looks really, really cool, and this list of features just goes on and on and on. Let's scroll up closer to the top and click on the releases link. This is interesting, because we can see that it's not just one release: there was a release just in the last couple of days, and as we scroll through here we can see that there are releases happening on a pretty regular basis. And they're not small releases, they're not just documentation fixes; there are significant new features being added, which is incredible.

So let's look at the installation process. Near the top is how to install, and right below that is a quick start for Docker. Now, not everyone loves Docker. Some people don't like it for, I don't know, whatever reason; they might have misunderstood that licensing kerfuffle that happened a few months ago, and another one a few years ago, thinking that they need to pay for Docker, when most individual users don't need to pay anything. But if you're working with Docker and Ollama is running locally, then you can use this command. -d says to run this as a daemon process, with the UI running in the background. -p publishes port 8080 in the container to port 3000 on your host; if you'd prefer to access it on a different port, replace 3000 with anything else. The add-host parameter says to use the Ollama on the local machine. -v creates a volume named open-webui and mounts it to /app/backend/data in the container. The name flag sets the name of the container, and restart always means that if it stops for any reason, it's coming back up. Finally, we're specifying that the container registry and image we should use is on ghcr.io.
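Putting those flags together, the quick-start command looks roughly like this. This is a sketch reconstructed from the flags just described; the exact image tag and volume name may differ, so check the Open WebUI repo readme for the current command:

```shell
# Run Open WebUI detached, with Ollama already running on the host.
#   -d                publish nothing to the foreground; run in the background
#   -p 3000:8080      expose container port 8080 on host port 3000
#   --add-host        lets the container reach the Ollama server on the host
#   -v                persists chats and settings under /app/backend/data
#   --restart always  bring the container back up if it ever stops
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

If you want a different host port, change only the first number in the -p pair; the 8080 side is what the app listens on inside the container.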
If you don't want to use Docker, there are instructions you can follow over at this link. But once you have it running, you should be able to visit localhost and the port, which for me is 3000. Here we have that sign-in page. The first time we get here, we need to create a new account, so click sign up, enter your name, an email, and a password, and click create. Now you're dropped into the UI.

The first thing you probably want to do is set the model to use. Come up here to this dropdown and type in the name of the model you want. I have a lot of models available to me because I've downloaded and created so many; here I've chosen Llama 3, and now you can ask a question.

Let's go to settings, which you can find up top here, or down in the bottom corner at your name. Under General we can set a theme and language, but then there's a system prompt. I'm not really sure why anyone would want to set what I assume to be a global system prompt. Does this get overridden on most models, since the system prompt is set in the model? There's no way to find out without digging around some docs. Then it looks like we can set some more default parameters, which you really want to be different for each model.

Next is connections. We can set an OpenAI API URL as well as an API key, or update the URL to use for Ollama, and I can also add additional hosts. I'm not sure what happens when I have multiple hosts, because I can't choose the host for a model when I set the model to use, so that's a little strange.

In the models section we can set the host to use to manage models. It's not clear if this is different from the connections screen, and even though I set a second host, I only see the one default host here. To the right is a button to update all models. Now, unless you have a small number of models, don't click this button. It updates models in the most inefficient way possible, pulling every model in your list; it hung on my machine. Then you can pull a new model if you know the name of it, though this is a much more awkward way than the CLI. You can also delete models from here. Now, it's interesting that they offer a way to upload what they call models as source GGUF files, but those are just the model weights and not the full Ollama model, so they're a bit useless until you make a model from them. And then you can do some management of models that you access through LiteLLM.

In the interface section we have some functionality options and some default prompt suggestions. I'm not sure what the value of this is; I could imagine some default system prompts would be a lot better served here. In audio we have the ability to set a speech-to-text engine, either using Whisper locally or using OpenAI, but we have no control over which Whisper model to use or which language you speak. Then there's text-to-speech, and again there's local or OpenAI; the local one is all but useless. For images, we can use this to access AUTOMATIC1111, which is a bit strange, but the author seems to want to support more than just Ollama, so that seems like it could be useful to some. And that's all that's really interesting here.

So far everything feels pretty good about Open WebUI. Things work roughly the way you would expect, and it generally seems to perform well. But here's where it starts getting a bit more interesting. Let's come up to the place where we choose a model. There's this dropdown, but there's also this plus sign. If I click it, I can add a second model, and a third, and a fourth, and so on. So what happens now? Well, ask a question and you shall see: why is the sky blue? We wait for it to respond and we can read the answer. Take a look at the top: I set it to use Phi-3 and Llama 3, and so now at the bottom I see it showing "2 of 2". Click the arrow and it shows me the output of the other model. So now you have a way of configuring multiple models, and it's going to ask the same question to each. There's no magic here; it's going to take longer to ask more models, but it becomes a bit easier to find the right model for you.
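As an aside, the model management covered earlier is much quicker from the Ollama CLI than from the UI's pull and update-all screens. A rough sketch, assuming the ollama binary is installed and the server is running (the model names here are just examples from my own library):

```shell
# Pull or update just the one model you want, instead of
# re-pulling everything in your list
ollama pull phi3

# See what's installed locally
ollama list

# Remove a model you no longer need
ollama rm llama2
```

A pull only fetches layers that have changed, so updating a single model this way is far cheaper than blindly re-pulling every model you have.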
There's also a thumbs up and thumbs down button here. The readme seems to refer to some ability to use this to make something better, but there's no info on where that data is going. I couldn't find any way to get the info out; I looked at the SQLite DB that the app creates, and nothing seems to have been added. You can also edit the answer. Again, maybe this is to help come up with a series of good questions and answers to help fine-tune later, but the feature doesn't seem to be complete.

Okay, let's go up to prompts in the top left corner and click the plus sign to add a new prompt. Here you can add a phrase you want to use as the beginning of your prompt. Let's try summarize: enter "summarize" for the title, and it will default to using the same thing for the command. Then enter the prompt, "write a short summary of the following text", and click save and update. Now go to a new chat, type the slash key, press enter to select summarize, go grab some text from somewhere, paste it in, and press enter, and we have a summary. But we had to go through a few extra steps there, so go back to that prompt and add the word clipboard in double curly braces. Save and update, and let's go back to the chat. The text is still in our clipboard, so type slash summarize and enter. It has automatically taken what's in my clipboard and added it to the end of the prompt. Well, that's awesome. But maybe you want to summarize it differently depending on the age of the person you're going to share this with. Add "for someone who is", then "age" in square brackets, then "years old". Go to the chat and bring that up again. The text gets pasted in, and age is highlighted, waiting for you to enter an age. You can add as many variables in square brackets as you like, and as many prompts as you like.

Now, on that prompts page there's also a link to discover a prompt, but it just takes you to the Open WebUI page. You can scroll down to find a list of prompts folks have shared, but it really should link you right to the right place. Even so, I didn't find that all that useful. I found the prompts tend to be super generic, and it's better to come up with your own. That said, the whole slash-command way to bring up prompts is amazing, and it may be my favorite feature of the app.

Now let's go back to the chat. Press the pound key, or you may call it the hash key, and type https://technovangelist.com. Now you can ask any question about the content on that page. It doesn't seem to follow any links, though; it's just what's on that page. So I can ask about my Patreon at patreon.com/technovangelist, but asking about my employment status with Ollama results in a very generic answer.

Back up on the top left is a section called documents. Click on it, and then the plus sign, to add a document. You can add one document or a bunch. Unfortunately, you won't get any feedback until the process is done, but once the doc shows up you can add tags to it. Now you can come back to the chat and type the hash key again, and you'll be prompted to choose a document or a collection, which is the tags associated with your docs. It seems to do a pretty good job here. I wish there was some feedback, but maybe that'll come soon. There are some document settings, but they don't seem to make any impact: I change them, and the next time I go back to this page they've reverted to the defaults. Maybe this is something else that needs to get fixed. We saw how often releases are coming out, so who knows, maybe there's a new fix tomorrow.

The last thing we haven't looked at is the model files section. I have no idea why this exists. They should just make a better experience for working with models on the Ollama site, because it feels like this is just trying to do the same thing, only more awkwardly. I wish this would make it easier to take any model, apply new parameters and a system prompt, give the model a new name, and save it back to the Ollama library. The CLI already does that, but some folks prefer a UI, so it would be a nice addition.

That's covered everything, so what do you think? Overall, it's a nice tool, but some of the weird aspects are hard to get past. The whole user thing is a pain; though I'm sure they worked really hard on that, I wish you could just disable that whole piece entirely. What they're doing with the model files section doesn't make any sense to me, but having a better workflow with models would be great. And that button to update models is way too easy to press, so either make it harder to press or update it to compare the hashes first and then do the pull. The slash command to pull up prompts is pretty genius; I love it. But that's pretty much the only differentiating feature from the CLI. Is that enough to use it more regularly? I don't know. I know there are a lot of folks who are scared of the command line and will be okay with it being a one-to-one version of the CLI tool, but that's not me, so I don't know.

I thought this would be a short video when I started putting it together, and I didn't expect to like Open WebUI as much as I did. I hope this will be the experience with all the other tools on the community integrations list. But what do you think? Do you use Open WebUI? Is there anything I missed? Do you think there's a better tool out there? Share that with me in the comments below; I love what folks share in the comments. Also, be sure to subscribe to my newsletter, which you can find at technovangelist.com/newsletter. Thanks so much for being here. Goodbye.

This is a cool bottle from Hidrate, and when I haven't drunk any water for too long, it glows and reminds me to drink water.
Info
Channel: Matt Williams
Views: 33,355
Keywords: ollama, ai, llm, localai, local llm, artificial intelligence, machine learning, local ai chatbot, how to run llms locally, local llm model, open webui
Id: 16fWf0VVeIo
Length: 16min 42sec (1002 seconds)
Published: Fri Apr 26 2024