Gemma is a popular open large language model from Google, inspired by the Gemini models. Four open versions of this model are available: two base models and two instruction-tuned models. In this video, I'll show you how to use Gemma with LangChain and Ollama. First, we'll take a look at Ollama. Next, we'll learn how to use an Ollama model with LangChain. Finally, we'll cover how to use an Ollama chat model. Before coding, let me explain the platforms
we'll use. LangChain is a framework that allows you to
build apps powered by large language models. You can think of this framework as the hub that connects the pieces of a generative AI app. Ollama, on the other hand, is a tool that
allows you to run large models locally. Okay, we've seen two important tools for generative
AI. To use Ollama locally, you need to install
this tool on your computer. It is simple: go to the Ollama website and download the version for your operating system. After downloading it, all you need to do is install it. That's it. Once it's installed, you can use it from your terminal. I'm going to use VS Code to write my code. For Ollama, let's open our terminal and check the installed version:

ollama --version

There you go. To download a model from Ollama, you can use
the pull command. You can find the models on the Ollama website; to see them, click Models. There you go. For this video, we're going to use the Gemma 2B model. To download this model, let's write:

ollama pull gemma:2b

I already downloaded this model. To see the downloaded models, you can use the list command:

ollama list

There you go. The Gemma model is here. To run this model, all you need to do is use the run command. Let me show this:

ollama run gemma:2b

Yeah, now we can talk to the model. Let me write, Hi! There you go. You can see the answer here. Let me type "What is 2+2?" There you go. Nice, the Gemma model works very well locally. To quit, let's write /bye. It's simple, right? That's it.
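By the way, while Ollama is running, it also serves a local REST API on port 11434, and that's what LangChain talks to under the hood. Here's a minimal sketch of calling that API directly from Python, assuming the default endpoint and the documented /api/generate route; note that the requests library is an extra dependency we don't otherwise install in this video:

import requests

# Ask the local Ollama server for a single, non-streamed completion.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gemma:2b", "prompt": "What is 2+2?", "stream": False},
)
print(response.json()["response"])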
Let's go ahead and take a look at how to work with Gemma using LangChain. First things first, we're going to create a virtual
environment. To do this, we can use conda. Let me show this: conda create, then -n ai to name the environment, then python==3.11 to pin the Python version, and -y to accept the confirmation prompt:

conda create -n ai python==3.11 -y

Let me run this command. Our virtual environment starts loading. It's done. Let's activate this environment. To do this:

conda activate ai

Okay, our environment is ready to use.
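As a quick sanity check, once the environment is active you can confirm which Python you're on. A minimal sketch:

import sys

# Should report a 3.11.x interpreter if the ai environment is active.
print(sys.version)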
What we're going to do now is install the libraries we'll use. To do this, let's create a file named requirements.txt. Let me click the new file button and type requirements.txt. Okay, our file is ready. All we need to do is write the library names here:

langchain
langchain-core
langchain-community

What we're going to do now is install these libraries with pip. Let me write pip install with the -r flag, which installs everything listed in the requirements file:

pip install -r requirements.txt

Let me run this command. Yeah, our libraries are ready to use. Let's go ahead and create a notebook file. Let me click the new file button and write gemma-ollama.ipynb. Okay, our notebook is ready.
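If you want to double-check that the install worked, you can print the installed package versions. A minimal sketch; the exact version numbers will depend on when you run pip install:

from importlib.metadata import version

# Confirm the three packages from requirements.txt are installed.
for pkg in ["langchain", "langchain-core", "langchain-community"]:
    print(pkg, version(pkg))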
To use Ollama with LangChain, we need to import it from langchain-community. Let's write:

from langchain_community.llms import Ollama

Next, let's initialize an object from this class:

llm = Ollama(model="gemma:2b")

Let me select the Python environment and click ai. Awesome, our model is ready. Now, let's generate some text with this model. Let's write:

llm.invoke("Tell me a joke")

There you go. It is simple, right?
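As a side note, the Ollama class also accepts generation settings when you create it. For example, here's a minimal sketch that lowers the temperature for less random answers; we don't use this in the video, it's just an option to know about:

# Lower temperature makes the model's answers more deterministic.
llm = Ollama(model="gemma:2b", temperature=0.1)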
If you want, you can use the print function to see the output better. Let me copy the invoke command, write print, and then paste the command inside. There you go. You can see the joke here. Let's generate another text. To do this, let's write:

print(llm.invoke("What is 2+2?"))

There you go. The answer is 4. You can generate text like this.
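LangChain models also support streaming through the stream method, which is handy for long answers. A minimal sketch with the same llm object; the prompt is just an example:

# Print the answer token by token instead of waiting for the full text.
for chunk in llm.stream("Write a haiku about the ocean"):
    print(chunk, end="", flush=True)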
You can also use chat models with Ollama. To show this, let me import the ChatOllama class:

from langchain_community.chat_models import ChatOllama

Now, let's initialize a chat model:

llm = ChatOllama(model="gemma:2b")

If you want, you can leverage the instruction-tuned version of Gemma here.
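One difference from the plain LLM class: a chat model works with messages rather than bare strings. As a minimal sketch, you can pass message objects from langchain_core; the system prompt here is just an example:

from langchain_core.messages import HumanMessage, SystemMessage

# Chat models take a list of messages; the reply is a message with a .content field.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
]
print(llm.invoke(messages).content)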
What we're going to do now is create a prompt
template. For this, let me import ChatPromptTemplate:

from langchain_core.prompts import ChatPromptTemplate

After that, let's create a prompt using this class:

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")

Awesome, our template is ready.
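If you're curious what the template actually produces, you can invoke it on its own before wiring it to the model. A minimal sketch; the topic value is just an example:

# Filling the template yields the chat messages that would be sent to the model.
print(prompt.invoke({"topic": "cats"}).to_messages())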
Now, let's use the StrOutputParser class to get the output as plain text. To do this, first, we need to import this class:

from langchain_core.output_parsers import StrOutputParser

After that, we're going to create a chain by piping the prompt into the model and the model into the parser:

chain = prompt | llm | StrOutputParser()

Great, our chain is ready. What we're going to do now is call the invoke method. Let me write:

print(chain.invoke({"topic": "Space travel"}))

There you go.
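Because the chain is a LangChain runnable, it also supports batch out of the box, so you can run several inputs in one call. A minimal sketch with example topics:

# Run the same chain over several inputs at once.
jokes = chain.batch([{"topic": "cats"}, {"topic": "coffee"}])
for joke in jokes:
    print(joke)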
As you can see, we generated text with the ChatOllama class, and to do that, we used a chain. Yeah, that's it. In this video, we've seen how to use Gemma with Ollama and LangChain. The link to this notebook is in the description. Hope you enjoyed it. Thanks for watching. Don't forget to subscribe, like the video,
and leave a comment. See you in the next video. Bye for now.