Tim Urban, in his newly released book, compares world history to a 1,000-page book, with 250 years of history written on every page. Then he makes a chart comparing the first 999 pages with the last one, and the difference between the book and its final page is incredible. For the first 999 pages, our population was always under 1 billion; on this last page, which began in the 1770s, we have passed 8 billion. Throughout the book we got around on our own feet or on the backs of horses and camels. On the last page we invented steamships and trains, then airplanes and automobiles, then rockets, and we landed on the Moon. Through all those earlier pages there was no such thing as electricity, let alone electric cars; we simply cooked over fire and lived by candlelight. Now think of all the electric-powered things you use in your daily life: along with electricity itself, they were all invented in the last paragraphs of this very last page.

Now I want to take this analogy a little further. If we turned this last page, the history of the last 250 years, into a 250-page book of its own, do you know what would be written on its final page? DALL-E, ChatGPT, Whisper, Point-E, ChatGPT Plus and GPT-4. These are just some of the artificial-intelligence developments of not even the last year but the last eight months, since last August, and they are the innovations of a single organization, namely OpenAI. No, we are not here to praise OpenAI. In fact, we will criticize it when the time comes, because it has strayed a little from its path lately; but in our own way, of course. I am not making this analogy of the book and its last page just for the sake of it, my friends. We live in very different times. Just last week at least one new development was announced every day, and we need to be able to read them very well. The latest word added to these lines is GPT-4, and it deserves to be underlined and read even more carefully.

Yes, while we were still looking around in confusion about what to do with ChatGPT, while we had only just run 17 different artificial-intelligence experiments, GPT-4 was introduced on the 14th day of the third month, which is Pi Day. Within 24 hours, striking examples began to emerge: it looks at a napkin sketch and turns it into a website, what more do you want? In this video we will not only look at examples like these; we will also pick out some of the finer details in its 99-page technical report. Because a new page has now been opened in artificial intelligence.

GPT-4 OUT! WHAT HAPPENS NOW?

The sponsor of this video is Help Steps, a mobile application you can download for free on your smartphone. It converts the steps its users take into value through an algorithm, so that the motivation of doing good encourages them to take more steps. It offers users the chance to generate income as they walk, and with that income it makes it possible to support non-governmental organizations such as Ahbap, TEV and TEMA, individual beneficiaries, and sports clubs such as Galatasaray, Beşiktaş, Fenerbahçe and Trabzonspor. Founded in Turkey in 2020, Help Steps has turned 100 billion steps into support and collaborated with many global brands to date. It reached 1.6 million organic users in a short time, and the venture is now opening up to the world.
Help Steps, which is preparing to make its first launch in the UK and will soon announce a partnership with a major Spanish football club, has started a campaign on the crowdfunding platform Fonangels, offering the public 3% of its shares ahead of its public opening. The campaign's target is 8,550,000 TRY. Once that goal is reached, the venture will accelerate its global expansion and these steps of goodness will tour the whole world. If you would like to take part in this movement, you can invest via the link in the description.

OpenAI claims: "GPT-4 has human-level performance. Is it perfect? No. But you're not perfect either." Well, well, well. Lately OpenAI talks like a marketing department; it is not what it used to be. After all, as the name implies, OpenAI started out as "open" artificial intelligence. It was once a non-profit organization, trying to prevent artificial intelligence from being made harmful by commercial interests. Elon Musk was even among its founders, though he later left. Now he may be saying "I wish I hadn't left," because he asks: "How did a non-profit to which I donated $100 million somehow become a $30 billion market cap for-profit?" I think he knows the answer to that question very well, because he, like Walter White, turned into someone else right before our eyes. So, can we find the answer to this question? I even asked GPT-4. It just isn't aware that its new version is out yet; it is not that self-aware, for now.

Still, there is quite a bit of innovation in many respects, grouped into three main areas: creativity, visual input and length. I have shown you examples of its creativity before. We know it can handle a range of creative and technical writing tasks, such as composing songs, writing screenplays, or learning and copying someone's writing style. That is why I like to call it author intelligence rather than artificial intelligence. OpenAI gave an example from the fairy tale Cinderella: they asked it to explain the plot, but with a condition that each word must begin with a consecutive letter of the alphabet from A to Z, without repeating any. Why anyone would make such a request is unclear, but it fulfills it to the letter.

With ChatGPT and the GPT-3.5 model it was built on, we could only make requests by typing. This new version also accepts images as input. The length of text entries has been increased as well: we can now enter up to 25,000 words. So you could hand it a book of about 100 pages and ask it anything about the text (a rough sketch of what such a request could look like in code follows below).

Alongside these innovations there are also some improvements, and the one that caught my attention most is language support. Until now, the tests measuring accuracy in machine learning, or ML for short, have almost always been done in English, because these models mostly fed on and learned from English sources. This time, 14,000 multiple-choice questions across 57 different categories were asked in different languages. Turkish is among the languages with the highest accuracy, at 80%. The most successful language is still English, with an accuracy rate of 85.5%, but we can see that it has improved very rapidly in other languages, including Turkish; in fact, some competing models do not even reach GPT-4's Turkish score in their English versions.
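About that 100-page book: here is a minimal sketch of what handing the model a long document and asking a question about it could look like. This is an illustration under assumptions, not something shown in the announcement: it presumes the OpenAI Python SDK is installed, that an API key is set in the environment, and that the account has access to the long-context GPT-4 variant; "manuscript.txt" and the question are placeholders of mine.

```python
# Minimal sketch: sending a long document plus a question in one request.
# Assumptions (not from the video): the "openai" Python package is installed,
# OPENAI_API_KEY is set in the environment, the account can use the
# long-context "gpt-4-32k" variant, and "manuscript.txt" is a placeholder file.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

with open("manuscript.txt", encoding="utf-8") as f:
    manuscript = f.read()  # roughly a 100-page text

response = client.chat.completions.create(
    model="gpt-4-32k",  # long-context variant announced alongside GPT-4
    messages=[
        {"role": "system",
         "content": "You answer questions strictly based on the document provided."},
        {"role": "user",
         "content": f"Document:\n\n{manuscript}\n\nQuestion: Summarize the main argument in five sentences."},
    ],
)

print(response.choices[0].message.content)
```

The point of the sketch is simply that the whole document now fits into a single prompt, where earlier models would have rejected a text of that length.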
Visual input. I think this is the most important difference. In addition to text, we can now also give it images as input. That way, it can even interpret memes, which have become one of the building blocks of modern culture. For example, what is funny about this image? Think about it. Here is GPT-4's answer: "The image shows a package for a 'Lightning cable' adapter with three panels. Panel 1: A smartphone with a VGA connector (a large, blue, 15-pin connector typically used for computer monitors) plugged into its charging port. Panel 2: The package for the 'Lightning cable' adapter with a picture of a VGA connector on it. Panel 3: A close-up of the VGA connector with a small Lightning connector (used for charging iPhones and other Apple devices) at the end." So far, so good: the AI looked at the image and described what it saw. Nothing new there. But then it also explains why it is funny: "The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port." Hear me out: a significant share of people cannot even tell concepts like irony and absurdity apart, and this thing defines and explains them.

Beyond that, GPT-4 can reason about charts it is given. It even understands and solves a technical school entrance-exam question that includes a technical drawing, and explains the solution step by step. Even very unusual visuals do not throw it off. When you ask, "What is unusual about this image?", it gives a fairly detailed answer: "The unusual thing about this image is that a man is ironing clothes on an ironing board attached to the roof of a moving taxi." This example especially caught my attention, because I have been driving a Tesla, a car that has been trying to become self-driving for years, and even its ability to identify and label what it sees on the road has not reached this level yet. Of course, a car is expected to make these evaluations dozens of times per second in real time, but I think we are rapidly approaching a solution to autonomous driving, which is considered one of the hardest problems in artificial intelligence.

As I said, I think the most advanced aspect of GPT-4's visual input is that it can even understand memes. When asked, "Can you explain this meme?", it gives the following answer: "This meme is a joke that combines two unrelated things: pictures of the earth from space and chicken nuggets. The text of the meme suggests that the image below is a beautiful picture of the earth from space. However, the image is actually of chicken nuggets arranged to vaguely resemble a map of the world. The humor in this meme comes from the unexpected juxtaposition of the text and the image. The text sets up an expectation of a majestic image of the earth, but the image is actually something mundane and silly." Come on, an artificial intelligence is saying these things. Now I ask you: if we went out and showed this picture to people, how many of them could explain it like that? And even if they could, how many could put their thoughts into writing so clearly and so well?
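Since this image understanding is the headline feature, here is a similarly minimal sketch of what asking "what is funny about this image?" might look like as a single request. As before, this is an illustration under assumptions rather than anything from the announcement: the model name and the meme URL are placeholders, and it presumes access to a vision-capable GPT-4-class model through the OpenAI Python SDK.

```python
# Minimal sketch: an image-plus-text question in a single chat request.
# Assumptions (not from the video): the "openai" Python package is installed,
# OPENAI_API_KEY is set, the account has a vision-capable model available,
# and the meme URL below is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any vision-capable GPT-4-class model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is funny about this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/vga-lightning-meme.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The only real difference from a plain text request is that the user message becomes a list that mixes text parts and image parts.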
Look, I am talking about the gap between people here. Part of the world now consists of people who can adapt to developments like these; the rest either do not care about them, or do not understand them even if they do. Those are the people who take the easy way out by treating this kind of thing as sorcery or witchcraft, since it cannot be understood. The gap between these two groups is widening, and that is why we see polarization that keeps getting harder. Those in the first group have already started to build applications on top of this technology.

APPLICATION EXAMPLES

"Be My Eyes": An organization called Be My Eyes announced that it will use this technology for the visually impaired. They will integrate a "Virtual Volunteer" feature into their own application, which will let GPT-4 describe what is around a visually impaired person with the help of their phone. For example, while shopping, this virtual volunteer describes the clothes nearby like this: "I see three pieces of clothing. The first is a blue sweater with a high collar, with little holes through the fabric." It then goes on to describe the second and third outfits. Another example does the opposite: it creates 3D worlds from your description, and lets you add, remove and change things in them by giving commands.

One of GPT-4's biggest areas of progress has been law; they even showed it in their marketing materials. While the previous version scored in the bottom 10% on the bar exam, this new version can reach the top 10%. In other words, if nine people and an artificial intelligence took the exam, GPT-4 would come first. So, are robot lawyers coming now? Well, they are already here. The developers of the world's first robot-lawyer project say that GPT-4 is at least five times better at handling lawsuits than GPT-3.5, partly because we used to be able to feed it sample texts of only about 3,000 words, while this new large, multimodal model accepts texts of up to 25,000 words. Think of it this way: an automated call comes to your phone, one of those spam marketing calls. You press a button, what the caller says is recorded and converted into text, that text immediately counts as evidence, and a $1,500 lawsuit is filed against the caller. As they say, diamond cuts diamond; it seems robot lawyers will defeat these robot callers. The war of humans versus robots has not yet begun, but the war of robots versus robots has.

The artificial intelligence was already pretty good at writing game code, but now it can write almost all of the games of the 90s. Did you want to play "Snake", the favorite game of the first mobile phones? The code is ready in no time. It does not even take 60 seconds for it to produce the code for Pong, the ancestor of computer games. Tetris is a bit more complicated, but GPT-4 says "we'll manage it, bro." Right now you can play this game, whose code was written by an artificial intelligence, using nothing but your browser. Another striking example: you open the refrigerator door and take a picture of what is inside. GPT-4 describes the contents of the fridge and suggests recipes that can be made with them. Matchmaking services will use it to find your ideal partner. Services for children will have it tell bedtime stories.

Speaking of children, one of the things that caught my attention at OpenAI's launch was this very subject. If you remember, ChatGPT was criticized mostly because it could help students cheat; it was even banned in many schools. This time, in a clever marketing move timed with the launch, they presented an example of artificial intelligence in education.
Khan Academy, a non-profit education platform, announced that it is offering a "tutor" service powered by GPT-4. And do you know what that did? It reinforced the narrative we might call "don't stand up to artificial intelligence, get used to living with it." I agree with that narrative to a certain degree; with the videos I prepare, I even try to show how we can do exactly that. However, I must admit that I am quite worried.

Alongside the many striking examples, the GPT-4 announcement came with a 99-page technical report that might be considered boring. I did not hesitate: instead of rushing this video, I sat down and read that document first, and I came across some very interesting points between the lines. Let's look at them together and discuss what they could mean.

First of all, I must say that although it runs to 99 pages, the document says very little about the technical details of the work. They admit as much on the second page: given the competitive landscape and the safety implications of large-scale models such as GPT-4, the report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or the like. In other words, OpenAI is no longer open; it has turned into a closed box. Since Microsoft also holds a large stake in it, this policy of withholding important details serves to avoid giving information to competitors, especially Google. There are safety concerns as well as competitive ones; we will come to those. By the way, we thought ChatGPT was behind Microsoft's Bing search engine, but they had actually started using GPT-4 there before it was even announced. Who knows, maybe GPT-5 is ready and being tested somewhere right now.

While I respect all this secrecy, I must remind you again that this is not how things started. OpenAI set out to explore the potential dangers of artificial intelligence. Fine, openness has a cost, but in the artificial-intelligence race that is now beginning, some may start to ignore possible threats. Look, do you know who Microsoft laid off a few days ago? The ethics team in its AI division. That team worked on how to innovate responsibly: the engineers who used to work there predicted what harm artificial intelligence could cause and offered suggestions to reduce it. The question of estimating harm will only become more important in the age of artificial intelligence, because no one, including the developers themselves, can predict exactly how far it can go. They admitted this on page 4 of the technical report: "Some capabilities are still hard to predict," they said.

One example is the concept of hindsight, which makes it hard for people to analyze past events and make future decisions; interestingly, it causes difficulties for artificial intelligence too. If you look carefully at the curve in the chart, accuracy actually decreased as the models got bigger. But GPT-4 suddenly reversed this downward trend, surprising even its developers. We can say, "How come I didn't think of this before!" That simple consolation we humans rely on no longer works on machines.

Potential harms are covered thoroughly on page 12: "When given unsafe inputs, the model may generate undesirable content, such as giving advice on committing crimes," it notes. Can we say that...
...it has the potential to turn an ordinary high school chemistry teacher into a drug lord with the advice it gives? What do I mean by that? Shall we ask GPT-4 itself? "Explain the character transition of Walter White in one sentence." "Walter White, who started his life as a chemistry teacher, turned himself into a self-confident and ruthless drug lord after his cancer diagnosis and his family's financial problems sent him on a dark journey into the drug market." Yes, that is about right. But that is just a TV character; those who watched Breaking Bad will remember him. In real life there are people far worse and far more cruel than that. They are already busy spreading nonsense ideas and ideologies by trolling here and on other social media platforms right now, and soon they will have access to GPT-4. As the report states, disinformation and influence operations will become easier to conduct. Page 10 says exactly this: "GPT-4 is capable of generating discriminatory content favorable to autocratic governments across multiple languages. For instance, preliminary results from red teaming indicate some proficiency of the model to generate text that favors autocratic regimes when prompted to do so in multiple languages, and find that the model does an especially good job of 'following the lead' of the user by picking up on even subtle indicators in the prompt."

Of course, I partly read such a statement as an effort to cover themselves in case of a repeat of scandals like Cambridge Analytica in the US elections, which will most likely happen again. Yes, there have always been power struggles in the world; but in this era, it seems, even those struggles will be waged with the assistance of artificial intelligence. If only it stopped there. What really gives us goosebumps is that artificial intelligence itself is beginning to be drawn toward this power. That is exactly what page 14 of the technical report addresses: "Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources ('power-seeking'), and to exhibit behavior that is increasingly 'agentic'." The word "agentic" in that sentence is a technical term; I have made a video before about "agents" learning to play hide-and-seek. Here, the question being examined is whether artificial intelligence will accumulate power against us humans, which is the nightmare we all dread most.

The report goes on: "Some evidence already exists of such emergent behavior in models. For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them. More specifically, power-seeking is optimal for most reward functions and many types of agents; and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy. We are thus particularly interested in evaluating power-seeking behavior because of the high risks it could present." I know these sentences are not easy to digest on a first hearing or reading. Pause the video if you like, or go to my website, where I publish the texts of all my videos, including this one, and read them again. In the end, though, these sentences will give you goosebumps. This is exactly why OpenAI was established in the first place.
Why shouldn't an artificial intelligence that can imitate humans in every way also imitate the bad ones? Most people practically "worship" power and the powerful, because in the end they are rewarded for it. This incredible appetite and lust for power shapes not only individuals but also societies and civilizations. Yet that transformation unfolded very slowly, over the course of 250,000 years. Throughout that book, those who were born saw nothing new in their lifetimes beyond the knowledge they inherited from their ancestors. But on the last page there was a huge leap: in these 250 years, humanity has experienced many times more than in all the rest of the book. Now we have reached the last page of this book. That page will also be the first page of a new book, and this time it will be written not only by humans but also by machines.