history of the entire AI field, i guess

Captions
Artificial intelligence. This phrase not only describes something that transcends our technological imagination, but also bypasses the boundaries of one's actual scientific knowledge, otherwise known as a buzzword. And probably one of the greatest buzzwords of all time: the history of AI mirrors exactly how you would describe something overhyped, overrated, and overestimated. Big corporations continuously jump onto this bandwagon because of FOMO, and unsurprisingly, great disappointments were met because of unreasonable expectations and predictions. Yet for decades it has remained a field that some of the greatest minds have pondered upon. So how did artificial intelligence attract so many individuals to study it, all the way back when computers were just giant hole-punching machines? Well, let me share with you a tale of corporate greed, betrayal, and a bunch of geniuses who paved the path to the AI that we all love and hate today.

When I first came across philosophers such as Aristotle, Euclid, Descartes, Leibniz, and Hobbes, I thought they were just people who sat around arguing about "I think, therefore I am" and discussing their existential crises in a well-formalized manner. But what I missed in their discussions, spanning ancient Greece to the early 20th century, is the concept of formal reasoning, something many of us do not appreciate as much, but which is probably the foundation of building a true artificial intelligence. How so? Let me explain.

Artificial intelligence is based on the assumption that human thought can be mechanized, so naturally, logical reasoning and rational thinking are something we want to reproduce artificially, because they resemble the intelligence that we humans have. However, in order to do that, a physical symbol system needs to be in place first: first, it has to take physical patterns, like our thoughts; second, those patterns have to be freely combinable with each other; third, they have to be manipulable to create new sets of expressions. Hmm, that sounds like math, right? So yeah, math came into the picture. If you think about it, math itself can be read as a formalized language. You can't really disagree that one apple plus one apple equals two apples, right? And when you have a line that contains an equals sign, it transforms into a statement claiming that the left side is equal to the right side. Whether the equation itself is true or not is another story, but the ability to express statements and reason your way through them is what makes math suitable for formal reasoning.

In the mid-19th century, the study of mathematical logic, which now contains theories such as set theory, was presented by Boole and Frege. Then Russell and Whitehead took inspiration from their work and wrote one of the most important books in the history of mathematics and philosophy, the Principia Mathematica. Its presence made artificial intelligence seem like an approachable problem that could be tackled scientifically, not some belief system that cannot be physically verified. Though this further raised the question: can all of mathematical reasoning be formalized? An answer was developed soon after, called the Turing machine. The Turing machine is a theoretical device that, by doing something as simple as shuffling zeros and ones back and forth on a tape, can imitate any conceivable mathematical or even abstract symbol manipulation. This simple system demonstrates the possibility of a thinking machine because of its unlimited capacity for symbol manipulation.
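To make the idea concrete, here's a minimal sketch of a Turing machine in Python. The tape encoding, the transition-table format, and the little unary-increment example machine are my own illustrative choices, not something from the video:

```python
# A minimal Turing machine simulator (illustrative sketch).
# The tape encoding and the example program are hypothetical choices.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """transitions: (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right). Halts on state 'halt'."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example machine: append a '1' to a unary number (i.e., increment by one).
incr = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("halt",  "1",  0),  # write a 1 at the first blank, halt
}

print(run_turing_machine(incr, "111"))  # -> "1111"
```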
This theory definitely piqued the interest of many scientists, and it led to the theory of artificial neurons in machines, similar to how the neurons in our brain work, introduced by Pitts and McCulloch in 1943. Eight years later, SNARC, short for Stochastic Neural Analog Reinforcement Calculator, was born, created by a young and aspiring graduate student, Marvin Minsky. His work was later known as the first artificial neural net machine ever created. Minsky's machine was made out of vacuum tube circuits, and its goal was to solve a maze while simulating a rat. In order to simulate artificial neurons, he used vacuum tubes and connected them with synapses randomly, simulating 40 different electronic neurons. The memory of the machine was stored in the positions of 40 control knobs; when the machine was learning, the clutches would adjust the knobs, creating something like a short-term memory and changing the probability of the circuit passing through a given electronic neuron. So when correct paths were taken, they would be reinforced through positive feedback, and the machine would learn to use those paths more often the next time. Pretty cool, right?

With this incredible work under his belt, Minsky became one of the most influential and important figures in the realm of AI, and the 1956 Dartmouth workshop and its proceeding conference, which he organized with McCarthy, Shannon, and Rochester, marked the founding moment of the academic field called artificial intelligence. This conference is considered to be the birth of AI, where it gained its name, its mission, and its first success: at the Dartmouth conference, Newell and Simon introduced the first artificial intelligence program, called the Logic Theorist. It was the first program to perform automated reasoning, and at its peak it was able to prove 38 of the first 52 theorems from the Principia Mathematica. The Logic Theorist was the first step toward their hope of creating a machine that could think. Although the program fell short because of its inflexibility, the attempt surfaced some very important concepts and problems that the AI field would soon face. Concepts like reasoning as search, where, for example, you search through every possible route in a maze while backtracking whenever you meet a dead end; and heuristics, which prevent an infinitely large amount of backtracking by selecting the few routes out of many that might actually solve the maze.
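Here's a minimal sketch of reasoning as search, under my own made-up maze: a depth-first solver that backtracks at dead ends, plus a simple distance-to-goal heuristic that decides which routes to try first. None of this reconstructs any historical program:

```python
# Reasoning as search: depth-first search with backtracking through a maze.
# The maze layout and the "closest to the goal first" heuristic are illustrative.

MAZE = [
    "S.#.",
    ".#..",
    "....",
    "#..G",
]
ROWS, COLS = len(MAZE), len(MAZE[0])

def solve(pos, goal, visited, path):
    if pos == goal:
        return path
    visited.add(pos)
    r, c = pos
    moves = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    # Heuristic: try the moves closest to the goal (by grid distance) first,
    # so we explore fewer hopeless branches before having to backtrack.
    moves.sort(key=lambda m: abs(m[0] - goal[0]) + abs(m[1] - goal[1]))
    for nr, nc in moves:
        if 0 <= nr < ROWS and 0 <= nc < COLS \
                and MAZE[nr][nc] != "#" and (nr, nc) not in visited:
            result = solve((nr, nc), goal, visited, path + [(nr, nc)])
            if result:               # a path was found down this branch
                return result
    return None                      # dead end: backtrack to the caller

start, goal = (0, 0), (3, 3)
print(solve(start, goal, set(), [start]))
# -> [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1), (3, 2), (3, 3)]
```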
So while all these problems were looming around, this is where the real video starts.

Right after the announcement of the new academic field called AI, some really impressive work was made. Newell and Simon created the General Problem Solver, which took inspiration from its predecessor, the Logic Theorist, but aimed to be actually generalizable; Gelernter's Geometry Theorem Prover and Minsky's student Slagle's SAINT solved problems in geometry and algebra; Bobrow's STUDENT used natural language processing to solve English high-school-level algebra word problems; the Stanford Research Institute's STRIPS system controlled the behavior of the first general-purpose robot, Shakey; the first chatbot, ELIZA, could fool humans; and the WABOT project created a humanoid robot with a limb control system, hands with tactile sensors, a vision system with artificial eyes and ears, and a movable mouth. These amazing feats attracted ARPA, a US government agency now known as DARPA, to provide funding for AI research, up to a total of 20 million dollars over 10 years. The future looks bright, right? Well, here's the problem: these programs and machines worked well in theory, but used practically, they were no more than a party stunt.

In one of the more well-documented works, the chatbot ELIZA, people described the program as being able to carry out conversations so realistically that users were occasionally fooled, a rather ambiguous description that leaves the audience to interpret how well it actually performed. You see, what ELIZA was actually doing was simply giving back a predefined response, or rephrasing the question and repeating it back to the user. It had no idea what it was talking about, let alone any possibility of being intelligent; it was pretty much programmed to simulate a human conversation. This, at the same time, gave rise to a really important concept: the ELIZA effect. The ELIZA effect describes how humans tend to unconsciously assume that computers behave like humans. For example, when you swipe your card on a credit card machine and it displays a big "THANK YOU" on its screen, do you really think it's expressing gratitude to you? Of course not. It is pre-programmed to display a set of symbols which are meant to be understood by humans; the machine is just processing zeros and ones with the electricity generated at your local power plant. So in essence, the ELIZA effect is a phenomenon where the user perceives the computer system as having intrinsic qualities and abilities, and then overestimates the reasoning and judgment under the hood, when it was all but a simple if statement. By using a real person's name and framing it as a chatbot, or even an artificial intelligence, it is bound to mislead those who take these words at face value. What's even worse is that this is a common occurrence in the field of AI research, and will unfortunately be a repeating theme of this video.
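To see how thin the trick really is, here's a toy ELIZA-style responder: canned replies plus pronoun-swapping rephrasal. The patterns below are my own illustrative stand-ins, not Weizenbaum's actual script:

```python
import re

# A toy ELIZA-style responder (illustrative; not Weizenbaum's actual script).
# It only pattern-matches and rephrases; it understands nothing.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"(.*)",        "Please tell me more."),        # catch-all canned reply
]

def reflect(fragment):
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel stuck with my thesis"))
# -> "Why do you feel stuck with your thesis?"
```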
"It's cold. Is it snowing outside?" "No." "Is the temperature low?" "No." "Wake up, it's the middle of the summer in 1974. DARPA stopped funding four years ago and most of the researchers are gone. It's the AI winter."

Combinatorial explosion still posed a problem in most of the newly developed methods. Moravec's paradox, where, in contrast to reasoning, sensorimotor and perceptual skills require enormous computational resources to learn. Common sense knowledge and reasoning, which contain a pretty much infinite amount of tiny pieces of information that an AI needs to learn. But most importantly, the lack of computational power made it impossible to realize the ideas that many researchers had. Almost a decade went by with barely any progress made. And what's even worse is what happened with Minsky's schoolmate Rosenblatt, who had published a supervised learning algorithm back in 1958 called the perceptron. It's a binary classifier based on Minsky's neural network research (sketched below), and it was later, in 1969, personally criticized by Minsky in his book Perceptrons. Minsky claimed that Rosenblatt's account of what perceptrons could do was completely exaggerated, and gravely criticized the concept of connectionism and the neural net, the very idea that he had come up with in the first place. It looked like a stab in the back to all the colleagues and students who looked up to his work, and the influence was so devastating that for the next 10 years not a single connectionist AI paper was published at all. And funnily, what later revived the field of AI research was none other than connectionism itself.
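For reference, here's roughly what Rosenblatt's perceptron boils down to: a binary classifier that nudges its weights whenever it misclassifies. The toy AND-gate data, learning rate, and epoch count are my own illustrative choices:

```python
# A minimal perceptron (Rosenblatt, 1958): a binary classifier trained by
# nudging weights on every mistake. Data and hyperparameters are illustrative.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred                  # -1, 0, or +1
            w[0] += lr * err * x[0]              # move the decision boundary
            w[1] += lr * err * x[1]              # toward the misclassified point
            b += lr * err
    return w, b

# A linearly separable toy problem: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, _ in data:
    print(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0)
# -> (0, 0) 0 / (0, 1) 0 / (1, 0) 0 / (1, 1) 1
```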
During these ten years of AI winter, a lot of next-generation researchers slowly picked up the pace too, and started a new storm of AI. In 1982, Hopfield introduced the Hopfield net, a form of neural network which proved to be able to learn and process information without memory loss. Later, in 1986, this concept would evolve through Rumelhart's work and become the recurrent neural network, a type of neural network that is nowadays often used in speech processing and recognition. Also in 1986, backpropagation was popularized by Hinton and Rumelhart, which is definitely a simpler name than "reverse mode of automatic differentiation," the name it initially had when it was proposed by Linnainmaa in 1970. Soon after, in 1987, the first image recognition system, LeNet-1, was created by LeCun by combining the idea of the convolutional neural network, introduced by Fukushima in 1980, with backpropagation on top of gradient descent. This made recognizing alphanumeric characters possible; it could even identify ZIP codes in handwriting. LeNet-1 was probably also one of the few visually well-documented research projects, since what remains of the majority of earlier AI research is little to nothing, with only the occasional few photos as physical evidence of the work.

So with these new AI technologies rising up left and right, the possibility of creating something amazing like LeCun's work, which turned out to be so useful that in the late 90s around 20% of check-reading bank machines used it, became a lot of people's drive and dream to create million-dollar AI software. And while people were out there tripping down the capitalist route of hyping up AI tech and its worth, some very interesting solutions were proposed to solve the common sense knowledge problem. Wait, wait, there's this one solution that is actually my favorite: solving it by creating a massive database of all the mundane facts that an average person knows and feeding it into a program. While I'm pretty sure we all know how that's going to turn out, technically it did work, in 2021.

Then in comes XCON, developed as the first expert system, which avoids the common sense knowledge problem by restricting its knowledge domain. XCON basically prevented wasting money on returned products: back in the day, computer peripherals like cables and printers were not sold together, and the salespeople were not exactly the tech people, so mistakes such as submitting orders with an incorrect cable or printer for a customer were extremely likely and costly. XCON simply acted as an intermediary before salespeople placed orders, pretty much just a system in place to check whether the specifications fit together. How it worked exactly will probably remain a myth forever (a toy guess at the idea is sketched below), but it definitely sounds over-engineered.
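Since the real internals are essentially lost, the following is a purely hypothetical guess at what an XCON-flavored, rule-based configuration checker might look like; every part name and rule below is made up for illustration:

```python
# A hypothetical rule-based configuration checker in the spirit of an expert
# system like XCON. All part names and rules are invented for illustration;
# the real system's internals are not publicly documented.

RULES = [
    # (condition over the order, message shown to the salesperson)
    (lambda o: "printer" in o and "printer_cable" not in o,
     "Order has a printer but no printer cable."),
    (lambda o: o.get("printer") == "LP-100" and o.get("printer_cable") == "serial",
     "LP-100 requires a parallel cable, not a serial one."),
    (lambda o: "terminal" in o and "terminal_cable" not in o,
     "Order has a terminal but no terminal cable."),
]

def check_order(order):
    """Run every rule against the order; return all violations found."""
    return [msg for rule, msg in RULES if rule(order)]

order = {"printer": "LP-100", "printer_cable": "serial", "terminal": "VT-52"}
for problem in check_order(order):
    print("WARNING:", problem)
# -> warns about the serial cable and the missing terminal cable
```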
This also points to one of the problems with closed-source research and privately funded projects, where all the valuable knowledge will mostly be lost, whether to relentless corporate greed or, according to DARPA, to national security reasons. However, the future still looked bright: "So a person could work this and never have to read a manual?" "Absolutely, absolutely." Yeah... definitely not.

The world in the 1990s was not ready for the change. Even after all these innovative developments, the practicality of AI was far behind the state of the technology. Firstly, the hype around expert systems spiraled out of control. They were kind of useful in a very specific context, but in general they required a lot of money to maintain, were really difficult to update, were incapable of learning, and only a few large corporations had the resources and the computers to run them. To be completely honest, "expert system" is really a phrase that overshoots what the thing can practically do, and it worked perfectly as a buzzword, one you could easily pitch to a company's stakeholders, letting their imaginations run wild and then flushing millions down the drain.

Secondly, a lack of computational power. Even though the technology of the late 1980s had improved from hole-punching machines to some nice electricity running through circuits, Moore's law still tells us that the computational power of the time was insufficient to make any major AI research progress. Apple and IBM were rising, though, and AI-specialized hardware became pretty much obsolete because of the mass production of commercial PCs that could do the same things.

Third of all, the qualification problem. This concept addresses one of the difficulties of developing a true artificial intelligence. Think of the qualification problem as the absurdly specific requirements needed for an AI to output a desired action in the real world. Let's say you want to cross a river. One of the ways is using a boat, but you will need to craft the boat, and in order to craft a boat you need wood, and the boat also needs to be waterproof, and you will need a paddle to move around. These will be the requirements, or, like, the qualifications, for crossing a river. But if there's no wood, another solution would be needed. We humans can probably think of an alternative right away, but for a computer there are an infinite number of requirements and qualifications for crossing a river, and it's pretty much impossible to determine what is relevant and what is needed. And even if we list out all the possibilities for the AI, it only solves one problem, namely crossing a river. What about crossing a road then? With two legs? Does the AI even have two legs? So even if we can use heuristics to solve the search problem and bypass combinatorial explosion in order to find a way across the river, the qualification problem still stands, and the AI can't even get started crossing, because there are infinitely many preconditions to consider first.

So yeah, just like the classic pattern of any overhyped trend, it all went downhill fast and hard. The bandwagon effect caused overly optimistic predictions; businesses were established to create solutions that probably did not exist; and large corporations spent millions on AI for problems that could have been solved with an ordinary algorithm, all of it expensive to maintain. So by the end of 1993, over 300 AI companies had shut down, gone bankrupt, or been acquired, which marked the end of the first commercial wave of AI. But hey, the chess AIs HiTech and Deep Thought defeated chess masters in 1989 for the first time in history; compared to the rest of the second AI winter, that's pretty neat, right?

While the second AI winter definitely did not look like the first, where no progress was made for 10 years, this time capitalism came to the rescue. Companies would spend money to continue a variety of different AI-related research behind the scenes, such as data mining, logistics, search engines, robotics, speech recognition, medical diagnosis, and much more, and even though some were extremely stingy about sharing the progress they made, it did indirectly support the growth of AI-related knowledge. So by the middle of the 1990s, the AI field had matured and developed into a much more rigorous scientific discipline, creating the possibility of collaboration with other scientific fields. At the time, concepts such as Bayesian networks, hidden Markov models, information theory, stochastic modeling, and classical optimization were also being popularized, and precise mathematical descriptions were developed for neural networks and evolutionary algorithms, which further set a great foundation for the AI field in general. This is also the time when researchers realized that finding solutions to specific, isolated problems is much easier and more successful than finding the solution for general artificial intelligence. This may be due to Moore's law playing a role in helping the progress of AI, but everything was going very well: in 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov, and in 2011, IBM's Watson defeated the two greatest Jeopardy! champions. But as you can see, these AIs are no longer the general AIs that initially motivated the researchers; the field has diverged into doing different specific tasks. As Bostrom said, a lot of cutting-edge AI has filtered into general applications, often without being called AI, because once something becomes useful enough and common enough, it's not labeled AI anymore. This observation may be due to the fact that AI is a term that does not specifically describe what a program can do, while at the same time overestimating its ability; surrounding your program with the phrase AI invites all sorts of over-promising. As the New York Times similarly reported in 2005, computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers. And so the term AI was left as a general term, and even transformed into a buzzword, while other AI-related terms began to sprout, such as informatics, knowledge-based systems, cognitive systems, or computational intelligence, to make sure they were identified independently, apart from AI.

And then it's the 2010s. Computational power grows, the internet is accessible in our pockets, CS:GO just came out, and the cost of computing keeps falling with economies of scale. The big data age was born. Never before did we have so much unique data on the internet, publicly available and accessible, from food recipes to your own personal details like your phone number and your address; a very large collection of data was suddenly available for research and data mining. And this brings us to big data. To be more precise, what makes big data different from your usual statistical analysis is that it doesn't take samples from a large pool of data; instead it uses every last drop of data and processes information out of it. Some of it may become ad personalization, some may become behavior analytics, and some may become YouTube video suggestions. These are not tasks that conventional algorithms or software can easily manage, so AI was the key to understanding this large amount of data, and companies were all in on that.

On the other hand, deep learning became the new cool thing. By adding layers, a.k.a. hidden layers, inside your neural network (and you can add a lot, by the way), it was actually better at avoiding overfitting compared to more shallow networks, and abstractions could be formed because of the high number of layers, which are sometimes used to encode or decode information such as images. There is this classic analogy where each hidden layer captures a different level of detail: for instance, if it's an image of a number, the first layer may contain information such as small segments of lines, the next layer may combine them into curves, then the curves combine into shapes, and eventually they represent a number in the end. Pretty neat, right? Other deep learning methods, such as long short-term memory, or LSTM, introduced by Schmidhuber and Hochreiter in 1995, evolved throughout the start of the 2000s and became the key to solving recurrent neural networks' vanishing gradient problem: a problem where there are so many layers in a neural network that the gradient gets so tiny that at some point it is rounded off to zero because of computer limitations, hence "vanishing."
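Here's a tiny numpy demo of the vanishing gradient, under my own arbitrary choices of depth, width, and sigmoid activations: each backward step multiplies the gradient by small sigmoid derivatives, so its norm collapses as it travels toward the input:

```python
import numpy as np

# Tiny demo of the vanishing gradient (illustrative; the depth, width, and the
# use of sigmoid activations are arbitrary choices, not from the video).
# The sigmoid's derivative is at most 0.25, so every backward step through a
# layer shrinks the gradient, and over many layers it collapses toward zero.

rng = np.random.default_rng(0)
depth, width = 30, 16
weights = [rng.normal(0.0, 0.5, (width, width)) for _ in range(depth)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass, remembering each layer's activation for the backward pass.
h, activations = rng.normal(size=width), []
for W in weights:
    h = sigmoid(W @ h)
    activations.append(h)

# Backward pass: start from a unit gradient at the output and walk back.
grad = np.ones(width)
for layer, (W, a) in enumerate(zip(reversed(weights), reversed(activations))):
    grad = W.T @ (grad * a * (1.0 - a))   # chain rule through sigmoid, then W
    if layer % 5 == 4:
        print(f"{layer + 1} layers back: |grad| = {np.linalg.norm(grad):.2e}")
# The printed norm drops by orders of magnitude toward the input layer.
```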
But yeah, the progress was starting to speed up. State-of-the-art deep-learning-based computer vision could already rival human object recognition accuracy back in 2012, and AIs competed in games such as Go, chess, Doom, and Dota 2 and outperformed human players. But at the same time, deep learning was like the black sheep of the herd: it was heavily criticized in the field of AI research because of its ambiguity and its mathematical nature. Russell and Norvig even stated in their 2020 AI textbook that the presence of deep learning may represent a resurgence of the scruffies, because of the lack of rigorous proofs behind it. And to be honest with you, I kind of agree with that. It is often said that the number of hidden layers and nodes is pretty much arbitrary when training or optimizing a deep neural network; most people still like to pick the numbers in base 2 or base 10, but you could basically just pick them based on today's temperature and it wouldn't even matter that much, even if it's in Fahrenheit or Celsius. This ambiguity definitely gave some researchers the feeling of progressing backwards with the increasing reliance on deep learning.

But still, ever since AI started being used to solve specific, isolated problems, major progress has been made, even until now: multimodal learning like text-to-image synthesis, and 3D reconstruction with ray marching and volumetric rendering, are making major progress in 2022, and we still have four months left. And going back to the initial goal, general AI, or more accurately artificial general intelligence, AGI: ever since the success in solving isolated problems, AGI has largely been given up on since the late 2000s. There is still a long way ahead before a machine can completely imitate a human or human thought; a more accurate term, "full AI," was coined, referring to a full artificial general intelligence, along with the term "strong AI," used to refer to machines capable of experiencing consciousness, though how to accurately evaluate consciousness is definitely another problem. The notion of AGI may be full of problems and paradoxes right now, and though we are still struggling to approach it theoretically, foundation models, where a model is trained on a super large amount of data, like GPT-3 or BERT, have shown some qualities similar to the goal of AGI. Maybe this could pave a way toward the field's initial goal, or could it be another complete dead end, with sentient large language models being just a myth and that Google employee just overreacting?

And while the phrase AI is a buzzword that is still going as strong as ever, many still use its ability to convey a huge amount of information in a short amount of letters, myself included. But rather than restricting our own usage of the phrase to prevent further misinterpretation or misunderstanding, what I feel is more important right now is to educate people about, and spread awareness of, what artificial intelligence currently can and cannot do. I may not be that good of a speaker, writer, or educator, but hopefully I've contributed a small part to demystifying many people's favorite sci-fi term. Thanks for watching!
If you want to learn more about AI from a scientific approach, but easily, today's sponsor, Brilliant, actually provides the perfect place for you. After all these years of studying math and neural networks from textbooks, it became dead obvious to me that interactive learning is probably the best way of learning; however, it's pretty much impossible to get any of that, since these fields are highly complicated, and with the rapid change in school curricula, the only way to learn was to dump all the information into your head, no matter how long it takes. This is where Brilliant comes in. Brilliant provides you with thousands of high-quality interactive lessons in crazy-hard STEM subjects, ranging across math, science, and computer science. Why I said Brilliant is a perfect place to learn AI practically is because they also have these mind-blowingly cool interactive lessons where you can visually see the exact bare bones of a neural network and exactly how it works. I actually used Brilliant during high school too; it made understanding calculus so much easier for me, and helped me realize that imaginary numbers ain't as complicated as my textbook explains. College-level content is also available, like multivariable calculus for the college students suffering out there, or algorithms clearly explained with some very neat diagrams. I can tell you from personal experience that learning on Brilliant would definitely be a great time. So yeah, quickly get started on Brilliant by heading to brilliant.org/bycloud to try for free Brilliant's ever-expanding interactive lessons and to also support this channel; the first 200 listeners will also get 20% off an annual membership.

Thank you for watching, and a shout-out to Andrew lustelius, Chris LeDoux, Dan Kennedy, sean7134, and many others who support me through Patreon or YouTube, which makes these long videos possible. If you have any questions, feel free to join my Discord and ask there, follow my Twitter for some cool monthly research paper rankings, and I'll see you in the next one.
Info
Channel: bycloud
Views: 36,852
Keywords: bycloud, bycloudai, history of ai, history of artificial intelligence, artificial intelligence, the history of ai, history of machine learning, history of deep learning, history of artificial intelligence in english, AI, machine learning, Yen lecun, geoffrey hinton, marvin minsky, aristotle, backwards propagation, history of computers, history of computer science, SNARC, AI winter, XCON, expert machine, AGI, CNN, formal reasoning, lenet-1, connectionism, Dartmouth conference, Perceptrons
Id: b9chqJ2TgzA
Length: 26min 45sec (1605 seconds)
Published: Thu Oct 13 2022