Season 2 Ep 23 Twitter Q&A with Geoff Hinton

Captions
in last week's episode we had geoff hinton on the show we covered so much ground from the early days when very few people were working on neural nets and deep learning through the imagenet alexnet breakthrough moment through geoff's current work and vision for the future of ai as you might recall we also gave you an opportunity to contribute questions through twitter in today's episode we'll discuss some of these questions with geoff but before we dive into our very last episode of season 2 i just want to say it has been such a pleasure and honor to have so many amazing guests on the show this season we had guests explaining how ai is being used in real businesses today like florida tassie on building ai for customer service amit prakash on helping companies use ai to make better decisions and benedict evans on what really matters about tech today there were guests using ai to solve major health issues like george netscher on using ai to protect the elderly with fall detection athelas's tanay tandon on using ai to improve blood testing andrew song on how ai is helping give back hearing to people worldwide and prom hedge on using ai to improve training and prevent injuries in sport we had guests using ai for social good like ayanna howard tackling bias in ai revolution robotics' jared schrieber on teaching children about ai and robotics and david rolnick on using ai to fight climate change we also had guests who are using ai at industry giants like microsoft's eric horvitz on using ai for the greater good and shakir mohamed from deepmind on weather prediction there were guests using ai in consumer applications like spotify's gustav söderström on ai and delivering personalized experiences amit aggarwal from the yes on using ai to serve up a better experience in fashion and etsy's mike fisher on ai in e-commerce we had guests using ai in transportation and futuristic vehicles like adam bry on using ai to power skydio drones mit's cathy wu on the future of our roads and alex kendall on wayve's driverless cars we had guests making ai accessible to all through open source like ross wightman and hugging face's clément delangue and we kicked off and ended our series with academic leaders in the field like sergey levine from uc berkeley on our current research challenges in ai and of course last but by no means least geoff hinton speaking of which let's get to your questions for geoff but geoff thank you for making the extra time for um audience questions especially the first time we do this on the podcast and we had so many questions on twitter for you it's clear so many people want to learn from you and have questions for you hopefully we can get through a bunch of these questions let me kick it off with a question from somebody you are very familiar with ilya sutskever what were some of the hardest times research-wise on the path to making deep learning work was there ever a time where it just wasn't clear how to even make the next step i think it was always the case that there were things that were worth trying even if they didn't work and consistently didn't work um i never reached a point where i thought i can't see where to go from here there were always many possibilities many leads to kind of follow up most of which ended in dead ends but um i think good researchers always have like dozens of things they'd like to try and they just don't have time to try them so for me there was never a point where i thought it's completely hopeless there are particular algorithms that at times i thought were completely hopeless like boltzmann machine learning um sometimes i think it's hopeless sometimes i don't um but the whole enterprise which i could now phrase as can you find objective functions and get their gradients so that you can learn by stochastic gradient descent um that whole enterprise always seemed to me to be um there were always directions you could go to push it forwards and i have a second question
from ilya a very different question are you ever concerned that ai is becoming too successful and too dominant uh yeah the two things that concern me most are its use in weapons because that will allow countries like the united states for example to have little foreign wars with no casualties by using robot soldiers i don't like that idea even worse its use in targeting particular subpopulations to swing elections so this kind of stuff that was done by cambridge analytica and that i believe was very influential in both brexit and the election of trump um i think it's very unfortunate that techniques like deep learning are going to make that kind of operation more efficient a question from pouria mistani is deep learning hitting a wall will agi be achieved by scaling up neural connectivities in deep learning architectures it won't be achieved just by scaling up numbers of parameters or neural connectivities but it's not hitting a wall um i recognize where that quote comes from um it's a sort of attention-grabbing quote um and it's regularly said that deep learning is hitting a wall um and it keeps making more progress and if any of the people who say it's hitting a wall would just write down a list of the things it's not going to be able to do then five years later we'll be able to show we've done it yeah i like that notion that anybody who wants to claim something's hitting a wall should make a list of things it cannot do and then that's great inspiration for all the rest of us to see if we can make it happen or not but it has to be fairly well defined what it can do like there was hector levesque who's a symbolic ai guy and a very good one um who actually made a criterion which is the winograd sentences where you say things like the trophy would not fit in the suitcase because it was too small versus the trophy would not fit in the suitcase because it was too big and if you want to translate that into french you have to understand that in the first case it refers to the suitcase and in the second case it refers to the trophy because they're different genders in french and the early machine translation with neural nets was random it couldn't get the gender right when it translated to french it's getting better all the time but at least hector made a very clear definition of what it would mean for a neural net to understand what was going on and we're not there yet but i think we're considerably better than random and i'd like to see more of that by people who are skeptics great challenges yeah next question is from eric jang actually one of your colleagues at google what are three questions that keep you up at night not necessarily restricted to machine intelligence when is the attorney general finally going to do something that keeps me up at night um because time's running out that's what i worry about most how does the world deal with people like putin who have nuclear weapons and does the brain use back propagation or not i love the contrast of the third one with the other two eric had another question i'm gonna go here um you spent years working on topics that the mainstream machine learning community thought were niche what advice do you have for contrarians trying to produce the next alexnet result just trust your own intuitions i have this standard thing i say which is either you've got good intuitions or you haven't if you haven't got good intuitions it doesn't matter what you do if you have good intuitions you should trust them but of course that needs to be um padded out with where do intuitions come from and good intuitions come from a lot of hard work trying to understand things and basically i think we're analogy machines so lots of experience with similar things is where intuitions come from and so you just need a lot of experience and then trust your intuitions next one comes from danielle newnham what is the connection as you see it between mania and
genius ah that's very interesting i'm slightly manic depressive so i tend to oscillate between um having very creative periods when i'm not very self-critical and having mildly depressed periods when i'm extremely self-critical and i think that's more efficient than just being kind of uniform so what happens in manic periods is you just ignore all the problems you're so sure there's something exciting here that yeah sure there's all those obvious problems but don't let those stand in our way let's get on with it um and then when you're depressed all these obvious problems overwhelm you and um the question is can you keep going and sort them out and figure out whether the idea really was good or not and i tend to sort of alternate like that which is why every so often i tell people i figured out how the brain works and then i go through a long period of figuring out why that isn't actually true um which is slightly depressing i think it's just got to be like that um there's a poem by william blake um that has a pair of lines in it that go joy and woe are woven fine a clothing for the soul divine and it's basically saying that's just the nature of being that joy and woe are woven together and i think that's the nature of research too and if you don't get really excited and you don't get really fed up when it doesn't work um you're not a real researcher well maybe it's just a different kind of research there's a related question as part of that what childhood experiences shaped you the most and how i think the most formative experience was coming from a home in which um everybody was clear that religion was nonsense and being sent to a private school which was a christian school where when i was seven when i first went there everybody believed in god everybody except me that was um a very formative experience for me um possibly because i've got a large ego i realized that everybody was wrong but having that experience of seeing everybody else being wrong and gradually over the years seeing them change their minds and seeing these teenage boys say well maybe god isn't real um that was very helpful the next one is from bishal binayak what's your thought process to solve a research problem is it mainly focusing on machine learning probably implying of course you know the question is do you also need to think about other fields to do what you're doing because we don't necessarily have good insights into our own thought processes um but i guess i tend to work a lot with analogies so at least i'm consistent that is i think the basic form of human reasoning is analogies which are based on having the right features in big vectors and um that's how i do research too i try and look for similar things and maybe it's not so much try as similar things sort of pop into my mind um and i think everything i'm doing is a kind of result of these analogies with many many other things via these feature vectors where i'm sort of basically unaware of much of this knowledge but it's there that's not very helpful but i don't really know the following question here also from bishal is what's the next big thing in ai and advice for phd students as to which area to focus on i think a next big thing and i don't think there's just one next big thing a next big thing is going to be a convincing learning algorithm for spiking neural nets that is able to deal with both the discrete decision about whether to spike or not and the continuous decision about exactly when to spike and that makes use of spike timing to do interesting computations that would be much more difficult to do in non-spiking neural nets that would be my bet about um one of the big things but the other thing and the reason the deep learning revolution is going to keep going is that actually if you just make a bigger one you don't need any new ideas you already get things working better um it's slightly depressing if your trade is new
ideas but if your trade is how do you build hardware to make a bigger one then it's great the next one is from thinkorswim what is professor hinton's regret in research choices so far that is something he wished he had delved into but chose not to and now perhaps regrets looking back time's short so i'll just say a learning algorithm for spiking neural nets you wish you'd already done it but now you can still do it in the next year right yeah maybe jordan hisstroth has the following question how important is embodiment for intelligence given the recent dall-e results from openai and i'll say i'm personally really curious about that too working on a lot of embodied intelligence myself so i think one needs to distinguish the engineering version of this question from the philosophical version of this question so the philosophical version is could a being sitting in a room listening to a radio and watching a television figure out how the world worked even if it couldn't actually move anything it just gets these sensory inputs and that's a philosophical question i think it could the engineering question is is that a good approach just to listen to the radio and watch television and i think the answer is definitely no if you want to do perception for example as soon as you put one or two cameras on a robot and let the robot move around in the world you get a very different view of what the questions are and how to solve them than if your idea of doing perception is just to take a database of images like imagenet um because you have the option of changing viewpoint and seeing how things move as you change viewpoint um you have a task to do um you have to be able to ignore things that aren't relevant you really would like to have a fovea so you can see fine detail without swamping yourself all the time it completely changes how you build your perceptual system um so philosophically you don't need to be embodied but actually as soon as you're embodied in a sensible way it changes how you're going to do things so for engineering embodiment's important however there's a lot of hassle that comes with embodiment right you have to deal with the body um so i think we can still make lots of progress on um databases of just videos where um i guess there was somebody making the video but basically you're just taking the video as data there's lots of room for working like that without having a mobile robot where you don't control the data collection but a long time ago probably back in the 80s dana ballard realized that um animate perception when you've got a robot moving around is just going to have a very different flavor from standard computer vision and i think he was completely right next one is from ranjit ravindran why do you do what you do do you believe it would make the world a better place or are you just having fun exploring the limits of human creativity much more the second one i'm afraid um so i really want to understand how the brain works and i believe that to understand it we need some new ideas like for example a learning algorithm for spiking neural nets there's a follow-up question of my own do you think it's almost necessary to be really driven by the kind of exploratory aspects or is it possibly just as productive in research if you care more about the bottom line effect on the world is it just a different style i think if you want to do fundamental research it has to be curiosity driven you're going to do your best research when it's curiosity driven you're going to be motivated to sort of ignore all the apparent barriers and pretend they're not there and see where you get um whereas if it's for the bottom line i just don't think you're going to be as creative so i think the sort of the very best research gets done by graduate students in good groups with plenty of resources so you need to be young and driven and really be interested in
something next one is from peter chen actually my co-founder and ceo at covariant you know him um he has a research organization question um i mean you've been doing pure academic basic research at the university you've done industry basic research at google brain and you've also seen industry applied research while at google as well as some people you know who are involved in startups and so forth um how do you think of these different places as providing maybe different opportunities to make research go forward but also to from there build products to be honest i don't think that much about building products products are nice they pay the bills and companies would like to have products um it's not what i really care about what i really care about is how do you make big learning systems and how does the brain work and the nice thing about the brain team at google is they have the resources to explore big systems and lots of smart people to discuss things with um and maybe i should care more about products but i believe in specialization and so having everybody care about products is not necessarily the right mix the next batch of questions is all centered around the brain so i'm gonna give you all the questions in one go geoff and then you can see what perspective you want to give on this whole thing the first one from lucas beyer is how does the brain work then tim dettmers what's your take on mixed learning algorithms backprop in cell body dendrites plus feedback alignment across neurons could such algorithms be both biologically plausible and competitive with pure backprop or is a single general algorithm more likely to exist prasad kothari is wondering about spiking neural networks cedric vandalar it seems you have drawn inspiration from the human brain in the past do you think there are certain techniques that will eventually turn out to be crucial for example spiking neural networks atom commander geoff recently declared that he finally didn't think the brain was doing back propagation but it might be doing something akin to boltzmann machines does he see this kind of architecture come back as a viable ai model or as a theoretical model for how the brain works and then the last one by yigid is about the ngrad hypothesis so it's a lot of related questions here so one set of issues is if the brain is going to do something like backprop how does it get gradient information to go backwards through the layers and that's what the ngrad hypothesis is about and it's the idea of using um changes in neural activity to represent error derivatives so using temporal derivatives to represent error derivatives i don't really believe in that anymore um so let me go to the question about boltzmann machines and do i believe in boltzmann machines i wax and wane on boltzmann machines because they're such a neat idea um but right now i believe in part of that but not the main thing so boltzmann machines um had these markov chains which required symmetric weights um which are implausible um but there's another aspect of boltzmann machines that i mentioned in the podcast which is that they use contrastive learning so a boltzmann machine is more like a gan than it is like typical unsupervised contrastive learning in unsupervised contrastive learning you take a pair of crops of the same image and say make their representations similar and a pair of crops from two different images and say make their representations not too similar um in a boltzmann machine you take positive data and say have low energy for the positive data and you take negative data and say have high energy for the negative data but the data is just an image it's not a pair of images or anything it's just an image now um and i believe in that now so i think that if we're going to get unsupervised contrastive learning working what we need is to have two phases like in a boltzmann machine we
need to have a phase when you try to find structure in positive data not in pairs of crops or anything but in the whole positive image you're looking around for essentially agreements between locally extracted things and contextually predicted things and then we need a different phase in which i show you negative images things that are like real images but aren't real they're slightly different and what you're concerned with is that the structure you found in the real images shouldn't be in these negative images so you want to find things that are in the positive data and not in the negative data and that's how you protect yourself from finding structure inside your neural network that's caused by the wiring at the front end of the neural network anything caused by the wiring will cause the same structure for positive images and negative images um and so you can filter it out that way so there's an aspect of boltzmann machines i really believe in which is you have to use positive and negative data to protect yourself from just learning about your own wiring but the idea of a markov chain to generate the negative data i think is just too cumbersome i think we need other ways of generating negative data and this is quite like gans right so in gans you've got real data and you've got data generated by a generative model and that's the negative data and if you compare what i believe now with gans what i believe is that the discriminator which is trying to decide is this real or negative data by finding structure that should only be there if it's real data that's the sort of main thing and i want to use the internal representations of the discriminator as a generative model in order to get the negative examples for training the discriminator so what i believe in now is a sort of cross between gans and boltzmann machines but in gans it's not a markov chain the generative model is just a causal generative model a directed generative model which is much easier and i think probably you have a discriminator and then a directed generative model that's learned at the same time for the negative examples in principle there's a unification because gans can be rewritten as energy based models also just a specific form of them but the thing about gans is you generate from random stuff at the top and it's hard to get coverage there might be all sorts of things you never generate and you wouldn't know but if you go to the top level of your discriminator and then you regenerate from the top level of the discriminator you'll get coverage so in a paper with the wake sleep algorithm that i published in 2006 with simon osindero and yee-whye teh in neural computation we have something that doesn't use backprop it manages to learn well without backprop it uses contrastive wake sleep and the contrastive aspect is that you do recognition that's the sort of wake phase and then you generate but what you generate from is not random stuff but a perturbation of what you got when you did recognition and that gives you coverage so i think there's maybe a unification coming along those lines that seems a very concrete idea right for execution and could give some amazing results it's actually running on my computer right now oh you're running it right now i got it and then the other batch of questions related to the brain was of course on spiking um the role of spiking well i think it's very important i think um very early on in neural nets um minsky and papert hit on xor as the thing that a neuron couldn't do right it couldn't tell whether two inputs were different and it's an exactly equivalent problem that you can't tell whether two inputs are the same obviously if you could do one you could do the other um it's unfortunate that they went for xor rather than same because if you go for same and say well our artificial neurons can't tell that two inputs are the same
you're immediately drawn to the idea that well if you use spike timing you can tell whether two spikes arrived at the same time because then they push a lot of charge into the neuron at the same time and will put it above threshold particularly if the excitatory input is followed by some inhibitory input so they have to arrive in a narrow window so spiking neural networks are very good at detecting agreement but our normal neural networks need several layers to do that and if we could just get a good learning algorithm i think we would discover that they learn to make use of that ability just like they learn to make use of it really well for doing auditory localization when i think about the transformer architectures they're also kind of designed to find agreements or correlations just i guess a much bigger piece of machinery than maybe a spiking architecture but it seems like there could be some connections there i mean there have been neuroscientists saying for years and years that it'd be crazy not to use the spike times and there's people like abeles who talked about synfire chains um it would be very satisfying to find a learning algorithm for these things and show that when you start learning particularly on sequential data like auditory data um then they really do make use of the spike times in a sensible way and then you could use these spiking cameras so spiking cameras are very clever things that give you lots of information but nobody knows how to use it same with the auditory domain people like dick lyon have been saying for years we should be using spiking neural nets to represent auditory input but nobody knows how to then take that representation and learn on it and do things with it this is a follow-up question of my own but if i think about spiking and let's say i try to play devil's advocate here and try to maybe argue against a strong belief in spiking i might maybe say something along the lines of um well maybe the reason we have spiking in human brains is because maybe evolutionarily it was easier to somehow evolve or due to random luck of the draw we evolved spikes but i mean we didn't evolve wheels and wheels are maybe more effective at you know transportation even though we didn't manage to evolve them no we didn't yeah you do have wheels you just need to think straight so you have to go over rough ground right and so you need a wheel with a six foot diameter and that's going to be a lot of rim okay so as soon as you know about time sharing you decide well here's what i'm gonna do i'm gonna have a wheel with a six foot diameter but i'm actually only going to have two little bits of the rim and i'm going to alternate between using these two bits of the rim and i'm going to use it as a wheel so i'm going to rotate about the hip which is going to be a very low energy way of walking and then i'm suddenly going to switch because instead of going all the way around i'm going to do a flyback and then i'm going to use the other leg the other bit of rim and there's one other big difference which is in a normal wheel the axle is suspended from the top of the wheel and there's tension in the side of the wheel to hold it up so the spokes are in tension um you have to have something like the rubber tire for rough ground so what you have is instead of a spoke that's in tension you have a spoke that's in compression you just have one of them for each bit of rim but it can bend in the middle and that means you don't need tires because you can absorb a lot that way and you don't have too much unsprung weight because you've only got a bit of the rim then but it's basically a wheel it's just a time-shared wheel now there is one other little advantage the time-shared wheel has which is you don't have a problem in getting nutrients in because it doesn't go all the way around it just goes forwards and backwards so you can have blood vessels going into it more easily but mechanically that's just an energy supply problem mechanically it really is a wheel you're using it just like a wheel a little bit of rim and you're rolling like a wheel does so you use very little energy and then you quickly substitute one piece of rim for the other i'm surprised you didn't know we had wheels so maybe my bad analogy aside geoff do you think there's any possibility that it's just that spiking was easier to evolve and that's why we ended up with it no i think there's a very good reason for using it but i don't know what it is i think it's to do with coincidence detection the next thing you'll be saying is that when we make flying machines we don't give them feathers well i wasn't gonna go there after you so eloquently told me i have a wheel that's what's wrong with drones right if you have a drone and the blade hits something either it breaks the thing or it breaks the blade if the blade was made of little bits of velcro that zipped together when it hit something it could break and then the drone would land and it'd do a bit of preening and zip the velcro back together again and it could fly off again so really um we ought to make drones with feathers instead of rigid blades so that they could hit things without damaging them and without damaging themselves um and have something that would preen the feathers to get them back together and off again so those are the two classic examples people don't have wheels and airplanes don't have feathers well they're both wrong drones don't have feathers yet but i think they will yeah that'll be interesting to see when that happens so the next couple of questions are again quite related so i'm going to ask them in a batch abdullah hamdi asked what's the next paradigm shift in ai after deep learning jd vacaron does the current deep learning paradigm suffice for transfer learning a la humans or does it need to be fundamentally enhanced and arun rao what are the next milestones for deep
learning going from existing foundation models to a long-term goal of agi and how does hinton define agi i try and avoid defining here and i try and avoid working on agi because i think agi there's all sorts of things wrong with the vision of agi it it envisions an intelligent human-like android um that's as smart as us and i don't think intelligence is necessarily going to develop like that um i think i'm hoping it develops more symbiotically it's it's very individualistic um and we developed in communities we developed so this goes back to your what you said in the podcast about ants and so on i think intelligence develops in societies better than it does individualistically and i think maybe we'll get smart computers but they won't be autonomous in the same way they may have to be if they're for killing other people um but hopefully that's not where we're going yeah the earlier part was more kind of about the next transition what's next after deep learning i mean that's the question i'm not trying to apply there is something next that's the question right so what i believe is this that we won't we're going to stay with the very successful paradigm of tuning a lot of real valued parameters based on the gradient of some objective function i think we'll stay with that but we may may well not be using back propagation to get the gradient and the objective functions may be far more local and distributed that's where i think next question is from dystopia robotics are you familiar with rich sutton's the bitter lesson and oh yes what are your thoughts on it i sort of have it in my lectures um that the deep learning depends on two things it depends on doing stochastic gradient descent in big networks that have a lot of data and a lot of compute power and then on top of that there's a few ideas that make it work a little bit better things like dropout and all the stuff we've worked on we'll make it work a little bit better but the crucial thing is um lots of compute 
power, lots of data, and stochastic gradient descent. And I agree with it.

Next question is from Prabhav Khala: how do you read research papers? How do you get past the mathematics and get a taste of the core message?

Okay, I don't read many research papers. I basically get my colleagues and my students to explain them to me. I'm hopeless at mathematics. I can do it when I have to, to justify something I've already thought out; like with Boltzmann machines, I figured out how they would work and then did the math to show that's the right thing to do. But I'm not very good at math, and I always find it a big barrier reading papers to understand all the notation. I find it much easier if someone explains it to me: for neuroscience I get Terry Sejnowski to explain it to me, and for computer science I get my grad students to explain it to me.

A very related question to what you just answered, Geoff, from Chaitanya Joshi: many people have shared anecdotes on how Professor Hinton's mind works in an analogical and intuitive manner, with an aversion to mathematics and proofs. Could Professor Hinton elaborate on the roles of formalism versus intuition when going about research?

I think there's room for more than one kind of person. I sort of hate formalism; I love intuition. I love tinkering about on my Mac to see what works and what doesn't. I think it's very important to have foundational work and to really understand the mathematical foundations of things, but it's not what I do. It's good to have proofs; it's not what I do. I have a little test I give people: suppose there were two talks on at NeurIPS at the same time, and you had to decide which one to go to. One talk was about a really clever and elegant, totally new way of proving a known result, and the other talk was about a new learning algorithm that seems to do amazing things but nobody understands why. Now, I know which talk I'd go to, and I know that it was easier to get the first paper accepted than the second one. But other people would really like to know new ways of proving things, because that's what they think is really interesting; I'm not like that. And I actually think nearly all the progress in neural nets has not come from doing the math; it's come from intuitive ideas, and later on people do the math.

That definitely resonates with me. Guillermo Martinez Vlar asks: how did you transition from a background in psychology to the field of AI, and what would you suggest to young people considering doing the same?

Okay, there's an interesting issue there. When I was teaching at the U of T, if you looked at the undergraduates, there were a lot of computer science undergraduates who were very good, but also cognitive science undergraduates who did minors in computer science but were really cognitive scientists. They typically weren't quite as good at the technical stuff, but they've gone on to do much better things, because they had the interest in the issues; they really wanted to understand how cognition worked. I'm thinking of people like Blake Richards and Tim Lillicrap, who've gone on to do great things because they knew what questions they wanted answered, whereas most of the computer scientists didn't. And for some reason I thought that was relevant to the question; could you say the question again?

It is very relevant to the question, let me tee it up again: how did you transition from a background in psychology to the field of AI, and what would you suggest to young people considering doing the same?

I don't know; it's very hard to generalize from an n of one. I had a very weird career, where I started off doing physics and physiology in my first year of university. In fact, I was the only student at Cambridge that year doing both physics and physiology. Then my math wasn't good enough for physics, and I wanted to know the meaning of life, so I did philosophy and developed strong antibodies, and then I did psychology. But I did have some quantitative background
having done physics and physiology. So retrospectively it was an interesting background; it didn't happen with any design, it just kind of happened. But I think you need to have questions that you're driven by, and not just techniques. It's more important to have questions that really excite you, where you'd do anything to find the answer, than to just be very good at some technique. However, I wish I'd learned more math when I was young; I wish I didn't find linear algebra complicated.

Next question is from Khalid Saifullah: how conscious, if at all, do you think today's neural networks are? I guess you would say just a little bit, and get lots of flak for saying that.

I have a view about consciousness. About 100 years ago, if you asked people what distinguishes living things from dead things, they'd say, well, living things have vital force and dead things don't. And if you asked, what's vital force? they'd say, well, it's what living things have. And then we developed biochemistry and we understood how biochemical processes work, and since then people haven't talked about vital force. It's not that we don't have vital force; we still have vital force, if we ever had it. It's just not a useful concept anymore, because we understand in detail how things work at the biochemical level, and we understand that the organs break down when they don't get enough oxygen, and then you're dead, and then it all decays. It's not like some vital force left the body and went to heaven; it's that the biochemistry just packed up on you.

I think the same is going to be true of consciousness. I think consciousness is a pre-scientific concept, and I think that's why people are very bad at defining it and everybody disagrees, and I don't have any use for it. There are many related concepts, like: are you aware of what's going on in your surroundings? If Muhammad Ali hits you on the chin, you're not aware of what's going on in your surroundings, and we use the word unconscious for that; that's one meaning of unconscious. But if I'm driving without thinking about what I'm doing, that's another meaning of unconscious. We have all these different meanings, and my view of consciousness is that it's a kind of primitive attempt to deal with what's going on in the mind by giving it a name and assuming there's some essence that explains everything.

Here's a similar analogy for cars. If you don't understand much about cars, I can tell you how cars work: cars have oomph, and some cars have more oomph than others. One of these Teslas with big batteries has a lot of oomph, and a little Mini doesn't have much oomph, especially if it's old. And that's how cars work: some have more oomph than others. And obviously, if you want to understand cars, it's really important to understand oomph. Now, as soon as you get down to understanding oomph, you start understanding how engines work, and torque, and energy and how it's converted, and all that stuff. And as soon as you start understanding that, you stop using the word oomph. I think it's going to be like that for consciousness.

Thanks for this explanation, Geoff. Farzani Mirza Sade asks: ML once started with roots in human psychology. Do you see ML advancements today having the capacity to help better understand human psychology in the future, like seeing people as neural networks or classifiers, and their cognitive distortions as similar to under- or overfitting, and so forth?

Yes, I do. I strongly believe that when we eventually understand how the brain works, that's going to give us lots of psychological insight too, just as understanding chemistry at the atomic level, understanding how molecules bump into each other and what happens, gives us lots of insight into the gas laws. The fine-level understanding is important and does give rise to understanding what's going on at higher levels, and I think it's going to be very hard to get satisfactory explanations of a lot of
things going on at higher levels, things like schizophrenia for example, without understanding the details of how it all works.

Well, thank you, Geoff, for making the time for the additional Q&A section with questions from our audience.

Wow, what a way to wrap up Season 2! Thanks so much for all the great questions for Geoff, and thanks for listening to the podcast. If you enjoy this show, please consider giving us a rating, and please recommend us to your friends and colleagues.
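Editor's note: the learning recipe Geoff describes in the "bitter lesson" answer, stochastic gradient descent tuning a lot of real-valued parameters of a network, with tricks like dropout layered on top, can be sketched in a few lines. The toy network, data, and sizes below are purely illustrative assumptions for this note; none of them come from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): learn y = sin(x)
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X)

# Real-valued parameters of a one-hidden-layer network, tuned by gradients
W1 = rng.normal(0, 0.5, size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1))
b2 = np.zeros(1)

lr, p_drop = 0.05, 0.2

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

mse_init = mse(np.tanh(X @ W1 + b1) @ W2 + b2, y)

for step in range(2000):
    idx = rng.integers(0, len(X), size=32)   # "stochastic": a random minibatch
    x, t = X[idx], y[idx]

    # forward pass, with dropout on the hidden layer (the "trick on top")
    z = x @ W1 + b1
    h = np.tanh(z)
    mask = (rng.random(h.shape) > p_drop) / (1 - p_drop)  # drop and rescale
    hd = h * mask
    pred = hd @ W2 + b2

    # backward pass: gradient of the squared-error objective
    d_pred = 2 * (pred - t) / len(t)
    dW2 = hd.T @ d_pred
    db2 = d_pred.sum(0)
    d_h = (d_pred @ W2.T) * mask
    d_z = d_h * (1 - h ** 2)                 # tanh derivative
    dW1 = x.T @ d_z
    db1 = d_z.sum(0)

    # gradient descent step on every real-valued parameter
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad

# evaluate with dropout turned off
mse_final = mse(np.tanh(X @ W1 + b1) @ W2 + b2, y)
print(mse_init, mse_final)
```

The point of the sketch is only that the whole recipe is one loop: a stochastic minibatch, a gradient of an objective, and a parameter update, with dropout as a small multiplicative trick on the hidden units.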
Info
Channel: The Robot Brains Podcast
Views: 74,916
Keywords: The Robot Brains Podcast, Podcast, AI, Robots, Robotics, Artificial Intelligence, Geoff Hinton, Geoffrey Hinton, Machine Learning, DeepLearning, Deep Learning, ImageNet, Google Brain, backpropagation, neural networks
Id: 4Otcau-C_Yc
Length: 46min 44sec (2804 seconds)
Published: Wed Jun 08 2022