Hey, friends, imagine a world where chaotic collections of file types and data are all organized with a single search. Vector search is the key that unlocks this possibility and takes your users' experience to the next level. Liam is here today to help me and you unlock the potential of vector search with Azure Cognitive Search, today on Azure Friday. Hey friends, I'm Scott Hanselman and it's Azure Friday. Liam is here today to talk to me about vector search in Azure Cognitive Search. How are you? I'm doing great, Scott. Thanks so much for having me on your show. No, I appreciate you hanging out, because every time you come here, something gets better. That means I'm going to learn something, and I'm going to say, I can implement that right after the show. And I always do. So I'm excited to see what you have for me today. Oh, me too. I'm just so excited. On the 18th of July we introduced a public preview of vector search, and it has just been amazing. As you recall, at the Build conference we talked a lot about chatbots and allowing organizations to use their own data, and Cognitive Search, the product that I work on, is really a technology that helps enable this. But we went even further by adding this vector search capability. So I'm really excited to talk about what this is, and even some of the new scenarios of things people can do beyond text. So if I understand correctly, vector search is a machine learning technique, and everyone's saying AI right now, but it's all machine learning underneath, right? It takes the meaning and contents of unstructured data and represents it as numbers. And then we can ask questions and it's impossibly amazing, but there's real science beneath it. It's true.
And in fact, I have a little screen here I'd love to share with you that I like to use to think about this, because what vectors are is just a set of numbers in a vector space that describes an object. And that object could be text, could be images, could be molecules. What machine learning can do is take that object and describe it in a vector space so that it has a very clear meaning. And what's really cool about that, Scott, is once you have massive numbers of text vectors, you can start asking interesting questions like, "Hey, who is the CEO of Microsoft?" That question gets vectorized, and what vector search, or vector databases, can do is go through that massive number of vectors to find the most relevant thing. That is so critical in virtually every case, whether it's an enterprise chatbot or just any type of document search: you want to get the most relevant information based on what that user is asking. And this is what is really being unlocked with vector search. Now, let me ask you, and this is a very simple example, but you know how you might have a database and you say select star from people, with first name and last name columns, but someone else might say surname? If you were doing a formal query, you wouldn't find surname, because there's not a column for that. The vagueness of that is too much; you have to be exact. But something like a vector search could allow me to ask, what's Liam's surname? And it might actually answer the question, even though the data is not structured that way. Is that a fair analogy? That's exactly right, Scott. One of the really great parts of using these vectors, in particular, is Azure OpenAI.
One of the most popular models we have is called ada-002, one of the models available to take text and actually create these embeddings from it. But just as you said, it's so amazing at being able to understand different semantics. If I put in Zurich, it knows Switzerland is very similar; that is something it is able to do, finding those matches. But that's not to say the traditional way we've been using search, where different forms of words like run, ran, or running matter, isn't still very important. That's why within Azure Cognitive Search we actually also have something called hybrid search, which allows you to use the best of both. You can do a search which uses a score from vectors, which is really great for finding results, but sometimes, just as you said, it's a keyword or a phrase or some of the other things you want to do in traditional search. So we can use both, which gives even better relevance regardless of how you're looking. So let me ask you this. I imagine a slider bar of search awesomeness. We started with just keyword search, a full-text search index, a "contains this string" kind of search. And then I remember when fuzzy search came out, fuzzy text algorithms, and I was like, oh, fuzzy search: I misspelled their last name, but I found them anyway. That was the most amazing thing I could think of. Now it feels like some people out there in the computer science community want to just jump and throw everything into a giant, very large language model. But it feels like vector search is going to be a much cheaper and still effective way to get those magical AI/ML experiences without the magical price tag.
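The Zurich/Switzerland closeness Liam describes is, at bottom, a similarity measure between vectors, usually cosine similarity. Here is a toy sketch with hand-made three-dimensional vectors; a real system would get 1536-dimension vectors from an embedding model such as ada-002, and the numbers below are invented purely for illustration:

```python
import math

# Toy 3-dimensional vectors standing in for real embeddings.
# The values are hand-made so that "zurich" and "switzerland" point in
# nearly the same direction while "banana" points somewhere else entirely.
vectors = {
    "zurich":      [0.9, 0.8, 0.1],
    "switzerland": [0.8, 0.9, 0.2],
    "banana":      [0.1, 0.0, 0.9],
}

def cosine_similarity(a, b):
    """Angle-based similarity: ~1.0 means same direction, ~0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(word):
    """Return the other vocabulary item whose vector is most similar."""
    return max((w for w in vectors if w != word),
               key=lambda w: cosine_similarity(vectors[word], vectors[w]))
```

With these toy values, `nearest("zurich")` comes back as `"switzerland"` even though the two strings share no characters; that is the semantic matching a keyword index cannot do.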
Yeah, you're right on that too, Scott, because if you look at it, traditional search covers more than what you said; think about phonetics. Think about my last name: it's spelt with a C-A-V, Cavanagh. Sometimes I go into a hotel and they spell it with a K and put a U in, and that is completely wrong, right? Traditional search does great there, but vector search, maybe not so well. So being able to use both of these together actually performs better. And interestingly enough, Scott, what we find is that some people just jump to pure vector search because it's really great and it's what people are really talking about, but in a lot of cases vector search will actually perform worse than traditional textual search. That's why we felt it was so important to offer both, so you can combine them and get the best result, whether it's a phrase, phonetic search, or something where you need the semantics. I love it. So I'm hearing that on my slider bar of awesomeness I've got more stops I can select, more tools in my toolbox, and hopefully you have a demo that's going to make me want to plug this in today. I do, and I'd love to show that to you, if you don't mind, Scott. Please. So what I'm going to do is show you how this all works through a really quick code snippet. I want to give you a couple of examples so you can see it through code and get an idea of what it's doing. Then I'm going to show you another example which actually leverages vectors to search images. So what I'm going to do here, Scott, is show you a little bit of Python code, and we'll provide links to this; we actually have all this code so people can dig in deeper.
Basically, in this Python code we're importing our Cognitive Search SDK and some OpenAI pieces. I'm loading my configuration, which has all the information for how to connect to my Cognitive Search service, and the information for how to connect to my Azure OpenAI instance so that I can do the vectorization. Let me just run that and go through it. As I was mentioning to you earlier, Scott, in this case the vast majority of our customers actually use text-embedding-ada-002, which comes from Azure OpenAI, to take text and convert it to those vectors in a very meaningful way. So all this function does is take text and spit out numbers. In fact,
let me just give you a little example here where you can actually see what it's doing. I'm going to pass in this text, and you can see how it just creates these numbers. This means nothing to you and me, and I hope it doesn't mean anything to you either, but basically this is the description of that text in the vector space, which is great. So now let me clear this out. Now what I'm going to do is say I have all this text that I want to be able to search through, whether I want to put it into my chatbot application or just do document search. I have things about Satya Nadella, Bill Gates, Shakespeare, a little bit about myself, and just some random text here. Some of the big heavies in history. That's great. Well,
we'll see if we can tweak it to make me seem a little bit more important. But that's a good point. The other thing that's great here is that you'll notice we actually have other metadata, like categories. With vector search, sometimes you want it to be about more than just the most relevant vectors. Imagine I'm going to a jewelry store: my wife has a necklace, and I want to say, hey, I want to find some earrings that look like it. Well, I can put in that image to find similar items, but then using filtering I can say: no, I want earrings. So being able to combine this with traditional search is pretty cool. So I'm going to take this content and, for every single document, create a vector embedding for it. And just like that, it went to Azure OpenAI and created those vectors. If we look at one of them, you can see here Satya Nadella and the vector description of it, and I'm going to load it into my Cognitive Search index. We have an ID, the category, the content, and the vector, and it just gets stored in there, right? We upload those five documents and I'm ready to go. So now I can start searching against it. If you remember, up here I had some text, and in the piece about Bill Gates, you'll notice it talks about his childhood friend Paul Allen. So I'm going to throw in a word for someone who's mentioned but doesn't have his own entry. He's not an entity in this list of historical heavies; Paul Allen doesn't exist here. Exactly, he's more casually mentioned. Okay. So I can put that search query into Cognitive Search.
And what I'm going to do is take that query and, just like before, vectorize it, and then say, hey, Cognitive Search, please try to find me the most relevant passage based on that. And you can see here, even though there's no reference to the word buddies, just as you said, the fact that it's semantically similar allows me to find it. And I can even do cool things: now I'll throw in a French version, childhood friends in French, and same thing. I can put in, you know, amigos, whatever, right? Let's put that in. You can see, for all of these, this model is so impressive at being able to find those matches. Scott: So let me ask you this, though. I noticed that you're generating embeddings for the query, and this is my ignorance speaking, but I'm thinking about this in the context of an index or a composite index. This is a bit of an analogy stretch, but could you generate this and store this, or is it generating it every time? Is there overhead there? It feels like you're almost generating an index and then querying the index. That's a great question, Scott, because the only thing that we're generating at this point is the vector for the search query the user is performing. All the content has already been processed; it's already ready to go. That's one of the advantages of our search. So it doesn't matter if it's five examples or millions, we can go through that very quickly, because you never want to have to take all of your content and pass it to Azure OpenAI at query time; that would get really expensive very quickly.
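The query flow just described (embed the query once, then rank the precomputed document vectors against it), together with the metadata filtering from the jewelry example, can be sketched end to end. The vectors below are tiny hand-made stand-ins; real ada-002 embeddings would have 1536 dimensions, and the query vector for "buddies" is invented for illustration:

```python
import math

# A tiny in-memory "index": each document stores content, metadata, and a
# precomputed vector (hand-made stand-ins for real embeddings).
index = [
    {"id": "1", "category": "people",
     "content": "Bill Gates grew up with his childhood friend Paul Allen.",
     "vector": [0.9, 0.1, 0.2]},
    {"id": "2", "category": "people",
     "content": "Satya Nadella is the CEO of Microsoft.",
     "vector": [0.1, 0.9, 0.3]},
    {"id": "3", "category": "plays",
     "content": "Shakespeare wrote Hamlet.",
     "vector": [0.2, 0.2, 0.9]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def vector_search(query_vector, k=1, category=None):
    """Optionally pre-filter on metadata, then rank survivors by similarity.
    This is brute force; a real service uses approximate-nearest-neighbor
    structures so it stays fast at millions of vectors."""
    candidates = [d for d in index
                  if category is None or d["category"] == category]
    ranked = sorted(candidates,
                    key=lambda d: cosine(query_vector, d["vector"]),
                    reverse=True)
    return ranked[:k]

# Pretend the query "buddies" embedded to a vector near document 1:
top = vector_search([0.85, 0.15, 0.25])[0]
```

Here `top` is the Bill Gates document even though its text never contains the word "buddies", and passing `category="plays"` restricts the ranking to that slice of the index, mirroring the "similar, but only earrings" filter.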
So to use my, you know, 30-year-old SQL understanding: you can have column indexes, you can have composite indexes, and here our vector embeddings, our vector index of sorts, is already generated. And then if I make a change to that Bill Gates entry and add another friend, when I do that update I would need to update the vector as well? That is correct. In this case we're storing both the text as well as the vector, and I'll show you why that's useful in a minute. But you're right, if I update that text, then I also need to re-vectorize that content, because it will change those numbers slightly. Interesting. And is that something I would need to do at the coding level, since you did those generations yourself earlier in the notebook? Or can I set the system up to detect that I made a change and regenerate that vector? And can that be done asynchronously, from an ACID perspective? It might cost me a little bit. So with Cognitive Search you actually have multiple options. You can use a push model where, like in this case, I just sent the vector to Cognitive Search and it stored it and made it searchable. We also have something called the indexer, which is a no-code option that just crawls data sources, to say, get the text out of PDFs, process it, and put it into search. This is an option where you can plug vectorization right into the indexer so it happens automatically for you. So whenever a document changes, it would just pick it up, read it in, and apply the vectorization to your content. I'm just thinking, my brain is exploding with all the cool stuff I could do. I've got, as an example, over 900 episodes of my podcast, and we have over 800 episodes of Azure Friday.
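The re-vectorize-on-change behavior described above is often implemented with a content hash, so you only pay for an embedding call when the text actually changed. A minimal sketch, where `fake_embed` is a deterministic stand-in for the real (and billable) model call:

```python
import hashlib

def fake_embed(text: str) -> list[float]:
    """Stand-in for a real embedding-model call; deterministic for testing."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:4]]

class VectorCache:
    """Re-embed a document only when its content hash has changed."""

    def __init__(self):
        self._store = {}      # doc_id -> (content_hash, vector)
        self.embed_calls = 0  # counts how many (costly) embeddings we made

    def get_vector(self, doc_id: str, content: str) -> list[float]:
        content_hash = hashlib.sha256(content.encode()).hexdigest()
        cached = self._store.get(doc_id)
        if cached and cached[0] == content_hash:
            return cached[1]          # unchanged: reuse the stored vector
        self.embed_calls += 1         # new or changed: pay for one embedding
        vector = fake_embed(content)
        self._store[doc_id] = (content_hash, vector)
        return vector
```

Editing the Bill Gates text triggers exactly one new embedding; re-uploading it unchanged triggers none. Whether the Cognitive Search indexer uses exactly this mechanism internally isn't stated here; this only illustrates the change-detection idea.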
You could take all of this content, 800 episodes of Azure Friday, take the transcripts of them, generate vectors on those transcripts, and then someone goes, you know, Hanselman did a talk with Liam, what's his name, a very nice Canadian fella, and they find it based on that kind of vague search, and then they jump into this episode. Yeah, I would actually suggest we could go even better than that, because not only can you use the text, but think of all the videos, all the images that get displayed as well. Wouldn't it be great if you could say, hmm, I saw this video and Liam showed me this image of a vector search? Imagine it could actually go through the images or the audio or other things. Those are the things we can do, so we shouldn't just think about text; we can go even further beyond that. Right, I want to see the episode where Scott wore his Microsoft Bob shirt. That's right. Exactly. You could definitely do that. So you point out images, because we can generate vectors from images; we have the text as well; you showed multiple languages. What if I had an opaque PDF, or a made-up file type, like .scott, that's my own thing? Could I generate vectors to describe that? Or maybe I work for a construction company and I have a description of what a house looks like, but it's in some custom format. How would I search inside of a file that you don't understand? Yeah, and one of the many things that's exciting lately is just the sheer number of new models being created in the world. For example, I was just reading about some really interesting ones in the health care space that can actually look at a molecule and understand it. So it converts that molecule,
and you can ask really interesting questions, like what would happen if I combined these molecules together? That is a model trained on that very specific area. In the energy space, they have their own specific document formats, so models can be created for that too. It's all about mapping it to a set of numbers; as long as a model can be trained to understand the content and map it to vectors, that's an option now. Very cool. All right, so how do I do it? What's the next tab? How do I get there? Cool. So let me show you one more thing, and then I'll show you how to get going. I'm going to throw in this other example, because I think it's really important to understand the power of using not just vectors. I'll search for the words Azure search here, and perform that. Now, you'll notice a couple of things came back. This result talks about Azure and about search, so from the vector space that was pretty close. But if we look here, this one is also an answer, and it might be more what I wanted. So by using what we have, which is called hybrid search, which takes into account the more traditional way of searching using text as well as vectors, we can actually go beyond that. We can do those things you were mentioning, such as fuzziness or phonetics. All I need to do is add the text query alongside the vector in my search, and now all of a sudden we get a much better result. And that's why it's important to remember that it's not just about vector search; it's about using all of these great techniques we have available to find the most relevant information.
Now, this isn't intended as a gotcha, but I think it's worth noting that the scores went from being confidence numbers to being the same exact score for each of those two records right there. Is that to be expected? That has to do with something we have called reciprocal rank fusion, which basically combines the scores: it looks at the ranks and reorders the results. We are looking at adding alternate options, because just as you said, a lot of people want to have scores in a more normalized way; that's just how it works right now. You're right that right now it's showing the same score, but under the covers it did find that this one was more relevant. Interesting. This is super cool. All right, I'm in. Cool. So let me show you one final thing on the side, and then I'll show you where to go. This is a more final application where I'm searching for images; there's no text in here. I know you were talking to Adina from the vision team about the Florence model; this is using the Florence model. It's multimodal, it understands text and images, so I can go through and say, okay, now I want a red dress. And then I say, no, I want a fancy red dress, and all of these come up. I'm sorry, did you want to ask about the alley? I'm looking at the young lady in the middle with the alleyway, like, does it understand a dress in an alleyway versus a dress on the street, walking down the street? Everyone else has got a nice background. There we go; another one shows up. Maybe a puffy jacket doesn't quite qualify as a dress, but now we have that alleyway. Yeah, it really does understand so many things, and these things are just really fascinating. I can throw this in in French, and it understands that.
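Reciprocal rank fusion, the merging step Liam mentions, can be sketched in a few lines: each document earns 1/(k + rank) from every result list it appears in, and the sums decide the final order. The smoothing constant k = 60 is a common choice in the RRF literature; whether the service uses exactly that value isn't stated here, and the result lists below are made up for illustration:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked result lists into one.

    rankings: list of lists of document ids, best result first.
    Each appearance of a doc at 0-based position r adds 1 / (k + r + 1),
    so documents ranked highly by multiple searches float to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_results = ["doc_b", "doc_a", "doc_c"]  # ranked by text relevance
vector_results  = ["doc_a", "doc_c", "doc_d"]  # ranked by vector similarity
fused = reciprocal_rank_fusion([keyword_results, vector_results])
```

Here `doc_a` wins because it ranks well in both lists, while `doc_b`, first in only one list, drops below `doc_c`. It also shows why two adjacent results can display the same fused score even though one search ranked them differently under the covers.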
So I'm really hoping that your viewers will be inspired by this and start taking a look at our vector search, which, as I mentioned, we announced in public preview on the 18th of July. Well, what I love about stuff like this, that again you can implement right away, is that this is how our viewers get promoted: you go in, you implement it, you put in Azure Cognitive Search, you set this up, you don't tell your boss how you did it, and then your brilliance gets you to the next level. Exactly. Absolutely. Okay, so this is in preview right now, and I can implement it immediately? You can absolutely use it today. As a preview, or GA? Right now it's public preview. Public preview. Fantastic. Well, this has been so informative. Thanks so much for spending time with me today. Oh, thank you, Scott. I am learning how to unleash the power of vector search with Azure Cognitive Search, today on Azure Friday. Hey, thanks for watching this episode of Azure Friday. Now I need you to like it, comment on it, tell your friends, retweet it. Watch more Azure Friday.