Ok, so Streamlit announced on their blog that
they are holding a hackathon. The app should use a large language model
and be built with at least one of the technologies they listed. Additionally, it should address a real-world
problem. So, I thought, 'Why not create a Streamlit
app and submit my entry?' Over a weekend, I spent roughly 5 to 6 hours
creating the app. In this video, I will give you a behind-the-scenes
look at how I created it. The main goal here is to inspire you to create
your own app, which you could add to your portfolio or simply make for fun. And with that said, let's dive in. First, let me show you the app. As you know, there's so much educational content
available on YouTube. But if you've ever wondered how well you understood
the content of a recently watched video, you can use QuizTube. You just need to input the YouTube link to
the video and your OpenAI API key. With that in place, QuizTube creates a quiz
for you about that video. So, you can test your knowledge. If you get everything correct, you'll see
a balloon animation, and if you have some incorrect answers, those will be listed down
below. Okay, and in the sidebar, I've included some
additional information about the app and my social links. To show you how it works, I've created the
following Jupyter Notebook for you. Basically, I've broken down the app into four
steps. The first step is to retrieve the YouTube
video ID from the inputted link. I needed that video ID for the second step. So, instead of writing a complex function
to extract the ID, I simply used the pytube module, which offers that functionality. Here are a bunch of differently formatted
YouTube links, but when I use the extract method, I get only the video IDs.
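Roughly, that step looks like this (a minimal sketch using pytube's extract helper; the example links are just placeholders, not the ones from my notebook):

```python
# Step 1 sketch: pull the video ID out of differently formatted YouTube links.
from pytube import extract

urls = [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://youtu.be/dQw4w9WgXcQ",
    "https://www.youtube.com/embed/dQw4w9WgXcQ",
]

for url in urls:
    # extract.video_id() returns just the 11-character ID, e.g. "dQw4w9WgXcQ"
    print(extract.video_id(url))
```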
Now, I already knew that I wanted to use OpenAI to create the quiz. But as you know, that would mean you need
to first get the transcription of a given video and then feed that text to OpenAI. I considered downloading the video, converting
it to audio, and then converting that audio to text. But then I stumbled upon another Python library,
which essentially extracts the captions from the video. Of course, there's a limitation: the video
should have captions available. But I didn't want to make the app too complicated,
so I thought this was a quick and easy solution. For this to work, you just need the video
ID, which we already extracted. With that, I can easily obtain the captions
of the video. The format looks like this: we have the text,
the starting time, and duration. However, I'm only interested in the text,
so I concatenated the text, and this is what I got back. This is now the whole transcription.
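In code, that step looks something like this (a sketch assuming a caption library such as youtube-transcript-api, since the exact package isn't named here; the video ID is a placeholder):

```python
# Step 2 sketch: fetch the captions and join them into one transcript string.
# Each caption entry has 'text', 'start', and 'duration' keys; we only keep the text.
from youtube_transcript_api import YouTubeTranscriptApi

video_id = "dQw4w9WgXcQ"  # placeholder ID from step one
entries = YouTubeTranscriptApi.get_transcript(video_id)

transcript = " ".join(entry["text"] for entry in entries)
print(transcript[:200])  # the first part of the full transcription
```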
Okay, with that in place, we can now feed the text to the large language model. For that, I used LangChain, specifically
their prompt template. I had to tinker around with my template text
a bit. Initially, I wanted to get back the quiz in
a dictionary format. But that didn't work out as expected. I actually had better results when I stored
the quiz in a list of lists, as specified in the template. So, I just ran the chain with the transcript,
and here's what I got back.
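For reference, the chain setup itself looks roughly like this (a sketch using the classic PromptTemplate plus LLMChain pattern; the prompt wording and API key are placeholders, not my exact template):

```python
# Step 3 sketch: turn the transcript into a quiz with LangChain and OpenAI.
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain

template = """
Create a multiple-choice quiz based on the transcript below.
Return it as a Python list of lists, where each inner list is
[question, option_a, option_b, option_c, correct_option].

Transcript:
{transcript}
"""

prompt = PromptTemplate(input_variables=["transcript"], template=template)
llm = ChatOpenAI(openai_api_key="sk-...", temperature=0)  # placeholder key
chain = LLMChain(llm=llm, prompt=prompt)

# The result is a plain string that merely *looks* like a list of lists.
quiz_string = chain.run(transcript="...the concatenated captions...")
```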
Now, this output looks like a list of lists, but it's actually just a simple string. But with a built-in Python module, you can
transform this string into an actual list. So, when I print the data again, you can see
it's cleaned up and that the type is a list.
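That built-in module is most likely ast with its literal_eval function; here's a small sketch of the idea (the quiz string is a made-up example):

```python
# Step 4 sketch: turn the string that looks like a list into an actual list.
import ast

quiz_string = '[["What does the app quiz you on?", "A video", "A book", "A song", "A video"]]'

quiz_data = ast.literal_eval(quiz_string)  # safely evaluates the Python literal
print(type(quiz_data))   # <class 'list'>
print(quiz_data[0][0])   # "What does the app quiz you on?"
```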
Alright, and that's basically it. Once I had that working, I created the Streamlit app. On my main page, you'll notice at the top
that I'm importing some helpers. These are the exact functions I showed you
in the notebook, but with some additional error handling. So, I have a file to create the quiz using
LangChain and OpenAI. Then, I have some utilities to transform the
quiz string into a list, and another utility file for the YouTube steps — specifically,
getting the video ID and the transcript from a given YouTube URL. Lastly, I also have a fun helper file for
my toast messages. This is actually a pretty new feature of Streamlit. Let me show you how I use it. When I reload the page, you'll see a small
toast message pop up in the bottom corner, which disappears after a few seconds. I asked ChatGPT to generate 20 of those messages,
which I load randomly just to add a bit of flavor to the app.
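The helper is just a few lines; here's a sketch with made-up messages (the real app loads its own twenty):

```python
# Toast helper sketch: pick one of the pre-generated messages at random.
import random
import streamlit as st

TOAST_MESSAGES = [
    "Welcome back, quiz master!",
    "Ready to test what you just watched?",
    "Another video, another quiz!",
]

def show_random_toast():
    # st.toast pops a small message in the corner that fades after a few seconds.
    st.toast(random.choice(TOAST_MESSAGES), icon="🎉")
```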
In my main file, I first set some basic page configurations before loading the toast messages, as I just showed you. After that, I've got my whole code for the
sidebar, which is mainly markdown. Then comes the title of the page and some
more markdown text to explain the app's purpose, followed by input fields to gather the YouTube
video link and the OpenAI API key.
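Put together, the main page is laid out roughly like this (a condensed sketch; the labels and markdown text are mine, not the app's exact wording):

```python
# Main page sketch: page config, sidebar, title, intro text, and input fields.
import streamlit as st

st.set_page_config(page_title="QuizTube", page_icon="🧠")

# Sidebar with some extra information about the app and social links.
with st.sidebar:
    st.markdown("## About\nQuizTube turns any YouTube video into a quiz.")

# Title and a short explanation of what the app does.
st.title("QuizTube")
st.markdown("Paste a YouTube link, add your OpenAI API key, and test your knowledge.")

# Input fields for the video link and the OpenAI API key.
video_url = st.text_input("YouTube video link")
openai_api_key = st.text_input("OpenAI API key", type="password")
```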
With that information in hand, I can call the different helper functions I've created. And to be honest, ChatGPT wrote most of the
part below for me. What I'm doing here is loading my quiz data
into session state variables, then displaying the quiz to the user. Of course, I also need to keep track of the
user's answers to calculate the results. But that's essentially it.
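The pattern behind that part looks roughly like this (a sketch; the session state keys, widgets, and sample question are assumptions, not the real code):

```python
# Quiz display sketch: keep the quiz and the user's answers in session state.
import streamlit as st

SAMPLE_QUIZ = [
    ["What does the app quiz you on?", "A YouTube video", "A podcast", "A book", "A YouTube video"],
]

if "quiz_data" not in st.session_state:
    st.session_state.quiz_data = SAMPLE_QUIZ  # in the real app: the parsed LLM output

# Show each question and record the chosen option.
user_answers = []
for i, item in enumerate(st.session_state.quiz_data):
    question, options = item[0], item[1:-1]
    user_answers.append(st.radio(question, options, key=f"question_{i}"))

if st.button("Submit answers"):
    score = sum(answer == item[-1]
                for answer, item in zip(user_answers, st.session_state.quiz_data))
    st.write(f"You got {score} out of {len(st.session_state.quiz_data)} correct.")
    if score == len(st.session_state.quiz_data):
        st.balloons()  # the balloon animation for a perfect score
```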
To make the app more user-friendly, I included a short tutorial. So, in my app, I have an expander element. When I open it, there's a quick tutorial hosted on YouTube.
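That part is just an expander with an embedded video (the label and URL below are placeholders):

```python
# Tutorial expander sketch: a collapsible section with the how-to video.
import streamlit as st

with st.expander("How to use this app"):
    st.video("https://www.youtube.com/watch?v=dQw4w9WgXcQ")  # placeholder tutorial URL
```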
I'll admit I was too lazy to do the voiceover
for that tutorial myself. So, I turned to ElevenLabs, where you can
generate free AI voiceovers. All in all, ChatGPT saved me quite a bit of
work on this project. I basically just had to figure out the steps
to generate the quiz, read a bit of documentation for the different packages, and then pose
my questions to ChatGPT to get the code. I hope this video inspires you to create your
own project. By the way, you can check out my source code
if you'd like to reuse it for your app. You can find the link in the description below. As always, thanks for watching, and I'll see
you in the next video.