This is an emergency video because OpenAI held their first-ever developer conference, so this had better be something good, right?
Well, it is chock full of goodies, so hold on to your papers, Fellow Scholars, because the first item is GPT-4 Turbo. This is a version of their amazing AI that now does three things better. And there is so much more, my goodness. First, the context length has become much longer: from 8k-32k tokens up to 128k tokens, a 4x jump over the previous maximum. Okay, but what does this mean? Well, it means that we can cram a roughly 300-page textbook in there and then talk to the book. Or add internal company rules, legal information, mathematics, video game ideas, anything you wish, up to about 300 pages, and it will be able to handle all of it. So good!
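To make this a bit more concrete before we get to the second improvement, here is a minimal sketch of what "talking to a book" could look like through the API. Take it as an illustration only: the model name, the file, and the question are placeholders of mine, and the exact client interface may differ from what is shown here.

```python
# A sketch of "talking to a book" with a long-context model.
# Assumes the openai Python package (v1-style client) and an API key in
# the OPENAI_API_KEY environment variable; "textbook.txt" and the model
# name are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

# Load the whole document; with a 128k-token context window,
# roughly 300 pages of text can fit into a single request.
with open("textbook.txt", encoding="utf-8") as f:
    book = f.read()

response = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer questions using only the provided book."},
        {"role": "user", "content": book + "\n\nQuestion: What does chapter 3 say about momentum?"},
    ],
)
print(response.choices[0].message.content)
```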
Second, it now follows instructions better. And third, we no longer have to rely on answers that change every time we ask the same thing, because they just introduced a seed parameter for consistent outputs.
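For the programmers among you, Fellow Scholars, here is a tiny sketch of how that could be used. Again, the model name is a placeholder of mine, and the consistency is best-effort rather than a hard guarantee.

```python
# A sketch of the seed parameter: the same seed, prompt, and settings
# should give much more consistent answers across runs.
from openai import OpenAI

client = OpenAI()

for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4-turbo",   # placeholder model name
        seed=42,               # fixed seed for reproducible outputs
        temperature=0,
        messages=[{"role": "user", "content": "Give me three uses for a 128k context window."}],
    )
    print(response.choices[0].message.content)  # should print (nearly) the same text twice
```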
And as a bonus, the knowledge cutoff has been updated from September 2021 to April 2023, so it knows about more recent things, plus there is a promise that it will never fall so far out of date again. And all these changes are now part of
ChatGPT. It is now using GPT-4 Turbo. Yummy. And it gets better. Dear Fellow Scholars, this is
Two Minute Papers with Dr. Károly Zsolnai-Fehér. GPT now speaks in a more natural manner.
Also, when voice input came in, for me, it was an immediate game changer. It is powered by Whisper, and we saw in an earlier episode that Whisper can take a full, several-hour-long podcast and transcribe it nearly instantly with minimal errors, close to as good as a human professional. And now, Whisper is on version 3, and it got even better. Wow.
And check this out, this is actually insane. GPT-4 Turbo is now way better, and at the same time, it got cheaper. In fact, 2-3x cheaper than regular GPT-4. That is insane. Why is that so remarkable? Well, we just noted in a previous episode that it can give you great usability tips for your already existing products, and that it costs about a cent and will soon cost even less than a cent. On the very day I made that video, they announced that they are making it 2-3x cheaper. That is insane.
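To see where the "2-3x" figure comes from, here is a quick back-of-the-envelope check. The per-1,000-token prices below are the ones I recall from the keynote, so please treat them as assumptions rather than gospel.

```python
# A back-of-the-envelope check of the "2-3x cheaper" claim, using the
# per-1K-token prices recalled from the keynote (assumptions, not gospel):
# GPT-4 at 3 cents in / 6 cents out, GPT-4 Turbo at 1 cent in / 3 cents out.
gpt4 = {"input": 3.0, "output": 6.0}        # cents per 1K tokens
gpt4_turbo = {"input": 1.0, "output": 3.0}  # cents per 1K tokens

print(gpt4["input"] / gpt4_turbo["input"])    # 3.0 -> input tokens are 3x cheaper
print(gpt4["output"] / gpt4_turbo["output"])  # 2.0 -> output tokens are 2x cheaper
```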
Speed improvements are coming too. However, in the meantime, we already see that GPT-4 Turbo is incredibly quick, much quicker than any previous version. If this is supposed to be the slowest this model will ever be, wow, sign me up right now! And there is so much more coming. The annoying
model picker is also gone. My early experiments seem to indicate that it will lean on
its own knowledge base when it can, and it only browses the internet for new
information when necessary. Great improvement. And now it is time. Time for the other
bombshell feature, which is not some new GPT, but GPTs. Yes, that’s right. GPTs.
Now you can build your own ChatGPT with your instructions and publish it yourself.
This will lead to lesson planners for tens of millions of middle-school students. Full
graphic designs from a single text prompt. Wow. And look, here is where your GPTs will
live. And from here, you can invoke Zapier, a system that has access to 6000 applications
and can control them. It can look at your calendar and summarize your day for you,
find conflicts in your calendar, and reach out to your friend and tell them that you have
some Scholarly duties for today and can’t go. So, how hard is it to build one of these?
This requires a great deal of engineering and programming knowledge, right? Well, not at all!
This is in the spirit of ChatGPT and can be done by anyone through natural language. Yes, we just
add the instructions for a startup helper bot, what it is for, and what it should do. It gets a little identity, like the video game characters from this previous paper. It also gives itself a name, and with the DALL-E 3 system built in, it can also create a profile picture for itself. And then, it asks us some intelligent questions about how it should behave.
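The GPT builder does all of this for you in the chat interface, but for the curious, here is roughly what an equivalent profile-picture call could look like through the image API. The model name and the prompt are placeholders of mine.

```python
# A rough sketch of generating a profile picture through the image API;
# the GPT builder handles this automatically in its UI. Model name and
# prompt are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

image = client.images.generate(
    model="dall-e-3",   # placeholder model name
    prompt="A friendly, minimalist logo for a startup helper bot",
    size="1024x1024",
    n=1,
)
print(image.data[0].url)  # a link to the generated picture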
Just think about it. A year ago, it required a gigantic company to release such a chatbot,
and now, anyone can do it. That is incredible. And, now what? Well, we can upload some
transcripts from previous lectures of Sam Altman, and then, the bot is ready for some questions.
Remember, this has the knowledge of GPT-4, which is an incredible body of knowledge, plus everything in the uploaded lectures. And… let's see… oh yes. The answers certainly sound
like Sam Altman. Now press publish, and there we go. Perhaps a Two Minute Papers
bot would also be fun to interact with. Later this year, an entire GPT Store will also launch. This might become the App Store of AI apps. Incredible, perhaps a historic moment in AI research. I am so happy to be able to share this moment with you, Fellow Scholars. And if I understood correctly, you will also be able to earn money with it. So you can create your own little doggy hotline and perhaps even make a living out of it. And all this is open to all of us. At the end of the
keynote, there was a request, through voice, to give a $500 credit to all of the attendees of the conference. And then, this happened. So, where is this whole GPT story going?
Well, consider this. ChatGPT has 100 million weekly active users. 92% of Fortune 500 companies already use it. It is now part of our lives. It is everywhere. From a paper to this in just a couple of years. Insanity. Now, there are also lots of recent
startups that are thin wrappers, thin layers over GPT. For instance, a single programmer could build this doggy hotline with a few software tools. And now, the pace of innovation is so quick that essentially you can create these wrappers right inside OpenAI's own platform. Just think about it. Now, one more thing to consider. A paper called
InstructGPT appeared in January 2022. This was perhaps a predecessor of ChatGPT, and of course, there is a Two Minute Papers video on that. It could do some things, but nothing too crazy by today's standards. And wow, here we are, approximately 1.5 years later, and look at all these improvements. GPT learned to see, hear, and speak. GPT-4 Turbo appeared, and DALL-E 3 dropped. I have to be honest with you, Fellow Scholars: this does not feel like 1.5 years to me. This feels like a decade of research. This was just a paper not so long ago, and now, 1.5 years later,
it is absolutely everywhere. People are already using it to create an AI commentator for League
of Legends matches, an online battle arena game, or you can even chuck a paper in there and talk to
it. Wow. So here, please make sure to invoke the First Law of Papers. The First Law of Papers says
that research is a process. Do not look at where we are, look at where we will be two more papers
down the line. Now, we were not the first to report on this, but I hope that the additional context was worth the little wait. If you feel that it was, consider subscribing and hitting the bell
icon. Thanks so much! What a time to be alive!