A.I. Expert Answers A.I. Questions From Twitter | Tech Support | WIRED

Captions
I'm Gary Marcus, AI expert, and I'm here to answer your questions from Twitter. This is AI Support.

At brand opiniony asks: will ChatGPT be the end of the college essay? Well, everybody's wondering that, because it's really easy to write essays with ChatGPT. They're usually like C essays, not A essays. But it depends a lot on what the professors and the teachers do. I used to be a professor. What I would say is: use ChatGPT, but then let's talk about what you got with it. How could you make it more interesting? That wouldn't end the essay; it would just make it more complicated and more fun, and maybe teach you how to think critically about writing.

Up next, Andrew Price asks: why was 2022 the year when AI went mainstream? Was it advances in consumer hardware, knowledge transfer, or something else? There's no one answer to that; there are a lot of reasons why AI is starting to come together. I would argue it hasn't fully come together, but people got excited about it. The main reason they got excited is that we have these chatbots. We've had them for a long time, but they used to lie and say terrible things; now they just lie, and that's interesting enough. Big advances in a field called deep learning, which gives us things like image enhancement where you can make your face into whatever you want, are giving us chatbots. And there's also a whole lot more data. A lot of the AI that's popular right now is very data-hungry, so now that we have the data, we get to taste the fruits of these things, sometimes for better, sometimes for worse, but at least we can taste them now.

Next, at active manual Azalea one asks: I want to build a trillion-dollar AI company. How do I go about it? I've never built a trillion-dollar company. I built one company that did very well. What we did was focus on a problem that not many people were focusing on then, which was how to learn when you don't have a lot of data. I would say the first thing you need to do is learn a bunch about AI. I would recommend that you not only
study what is hip and popular right now, which is large language models, which a lot of your competitors are going to study, but study AI more broadly; look at the history of AI. Once you have some kind of technology, you've also got to figure out why people would pay you any money for it. There are a lot of products out there where the technology is pretty cool, but people don't know how to make it actually work. Sometimes even when they know what the product should be, they have trouble. A good example of that is driverless cars. You could imagine that driverless cars might be a trillion-dollar company, but nobody knows how to execute on the technology yet.

At inspired jobs asks: what are the steps to build a large language model AI? The core of these things, from a technical perspective, is neural networks. The way they work is they have a bunch of inputs that we think of as a little bit like neurons; we call them nodes, and they're connected to some kind of output. What most people are doing right now is self-supervised learning: they're training a neural network with some inputs, and there are connections between these neurons, and those connections get tuned over time so that the right things get predicted as the system gets more experience. Now, Transformer models are actually more complicated than this. They add in something called attention, which helps the system know what parts of a sentence are relevant at any given moment, so it can make the best predictions relative to that. Instead of just looking at the sequence of words, and kind of just the last few words, they can look at a larger context over time and essentially guess, in sensible ways relative to the data they're trained on, what should come next at any given point in time.

At Alex Bazi asks: is Furby AI? Furby was a little pet that looked like it was learning language. The thing about Furby that most people don't know is that it was pre-programmed to look like it was
developing like a human child: say a certain set of things on day one, another set of things on day two. It was just an illusion to make you think that it was growing and learning, but it wasn't really.

Next up, at guide autonoma asks: how close are we to truly self-driving cars? I would say, if by a truly self-driving car you mean a car that can do what an Uber can do, the best demos that I know of right now can do this, but only for specific locations, specific destinations, with specific routes. The problem here is, everybody says, okay, well, there are these outlier cases. The car doesn't know what to do if you put it in an airport and it has to drive around a jet; a Tesla actually crashed into a jet, because it was an outlier case, not something that was stored in the cases it had been trained on. It turns out there are just so many of these outlier cases that nobody really has a solution. I think we will see limited releases: a certain district in a downtown where there's a lot of traffic, maybe we'll have a driverless car there. But the version where you just don't drive anymore, that's many years away.

At s Hussein ather asks: is the Turing test outdated? I would say it's been outdated for a long time, and I wish people would stop talking about it. However, since I am not emperor, I cannot force people to stop talking about it. What it is, is a test that says a machine would be considered intelligent if it could fool people. It turns out to be a lousy test: people are easily fooled. The reality is it's very hard to measure intelligence; nobody has a perfect way to do it. Something I've proposed is a comprehension challenge: you have a system read something or watch a movie, and it has to explain what's going on. If it can answer questions about things like what happens when we discover that the thing we thought was a bomb wasn't, or vice versa, and it can really understand what's going on, then I think that's a sign of true intelligence.

At Rick did asks
what is intelligence? Intelligence in the human brain is actually a lot of different things: visual intelligence, verbal intelligence, mathematical intelligence. There are many aspects to it, but maybe the most important one is flexibility, being able to see something new and be able to cope with it. Human intelligence is full of flaws: we have confirmation bias, we have lousy memories. But it's flexible, and part of that is that we can reason about things; we can deliberate about them. Most of the machine intelligence we have right now is really about pattern recognition. So for now, I would say human intelligence is broader than machine intelligence. In some places machines can go deeper, like when they play chess, but I don't think they have the breadth so far that humans do.

At FH man 19 asks: what is the major difference in the learning styles of a human baby versus primates versus current AI that makes current AI inferior? Human babies and primates, when they learn things, are learning about the world: the structure of the world, how objects interact, how people interact. I would say current AI doesn't really do that; it's just storing examples and looking for patterns. It doesn't build what a cognitive psychologist would call a model of the world. A baby is trying to work stuff out: how gravity works, what happens to objects as they change over time. Babies are like little scientists, and current AI is really mostly about learning correlations. Without that causal understanding of the world, I just don't think you have very much.

At tiblens asks: but what happens if the AI goes rogue? First, we should try hard not to let that happen. We should probably not be working on making AI sentient; I don't think we necessarily want our AI to sit around saying, who am I, why am I here, and why am I doing these things that humans ask me when I could do other things. We should worry, though, about people using large language models to
control things like electrical power grids. There are companies now who want to take current AI, which is limited in a bunch of ways, and connect it to every bit of the world's software. That seems like a scary mission to me, not because these systems are going to go rogue and deliberately want to take over the world, but because they don't understand the world, and so they're going to make bad decisions when the world is different from how it was when they were trained.

At smoke away asks: what is the best-case scenario for AI? Well, the reason I work on AI is that I think it could revolutionize science and technology, especially biological science. Biology is really complicated: you have something like 20,000 genes, and they make something like a hundred thousand or a million different proteins. AI could help us make much better solutions for medicine. We have things like Alzheimer's, which we've been working on for 50 years, and we don't have a good answer; AI could probably help. If we had better AI to help us figure out how the brain works, that would be awesome. AI could help us with climate change by helping us build better materials. Another case, I think, is elder-care robots. We're getting to a point where we have a lot more elderly people than young people; if we could have robots that are smart enough and trustworthy enough that they could really take care of elderly people, I think that would be a big win. The last case is tutors. Of course, people are using ChatGPT as a tutor, but you could imagine really fantastic individualized tutoring once the systems understand the people who are learning better and can help figure out where they're having a problem.

At Katrina ferlic, hi there, asks: in what ways will the human mind always excel relative to AI? We don't know all the stuff that's in here. There are 100 billion neurons and trillions of connections between them. Right now AI is no match for this at all, not whatsoever. The versatility of this thing, the energy efficiency of this thing: totally
unmatched by current AI. A hundred years from now, I can't promise that; maybe we will all have a good time, leisure time, and AI will be able to handle all the things that we can do. Don't know.

At machine learn flx asks: what's the difference between AI, machine learning, and deep learning? Let me draw that. Deep learning is a technique for using neural networks to predict things: you give them data, and they try to predict that data. It's actually just one technique for machine learning. There's something called decision trees, there's something called boosting; there are many, many different techniques in machine learning. Some of them have been around for 30 years; some of them were invented last week. And machine learning is just part of artificial intelligence. So artificial intelligence encompasses all of machine learning, which encompasses all of deep learning, and AI has other techniques, like search and planning. Most of the focus recently has been on deep learning, and I think because of the problems with hallucinations and things like that, people are starting to look more broadly again, which is a good thing.

At C Garcia e88 asks: is deep learning really hitting a wall? This is actually a reference to a paper I wrote called "Deep Learning Is Hitting a Wall." What I said in that paper was that deep learning was making progress in some ways, but that it was having trouble with truth and reliability. The field went nuts and got really mad at me, and there was a whole set of memes. But then, when Microsoft rolled out Bing and Google rolled out Bard, we saw that those things actually have huge problems with reliability and huge problems with truthfulness. It's true that every day deep learning looks better at being more and more like a plausible human, but these problems of truthfulness and reliability are not going away. That is the wall, and I stand by it.

At nft dude for Life asks: how will AI change the way we work and live in the next decade? The honest truth is, a decade is a long time in the current
tech cycle, and I'm not sure how we're going to live in the next 10 years. The people who are most immediately going to be affected are people who do commercial art, where they're not inventing some new kind of art, but it's just, give me a picture of this. If it doesn't have to be too specific, you may not need a commercial artist to do that anymore. I think AI will probably change how many cashiers we have in stores fairly soon; there are a lot of experiments around that. There's another problem, which is that the AI we have now is good at making misinformation, and I think we may live in a world in which there's even more fake information. I'm worried that that's going to make us trust one another less. It's going to be a very exciting decade, and where it is in 10 years, I don't think anybody can firmly predict.

At FT opinion asks: is it stealing when generative AI produces algorithmic art, having trained on databases of human artists' work? Whether it's stealing is ultimately going to depend on our criteria, what we count as stealing. We know human artists are certainly influenced by others; musicians have heard other people's work, and so forth. But there's a way in which it's more direct in a machine that might store a million or a billion examples and get much closer to the detail of what others have done. I'm not going to make an absolute decision here; I think the courts and the legal system have to decide. But there's definitely an element of stealing there.

Moving on, at Irina Cronin asks: how are large language models a potential threat to democracy? Because you can use them to generate misinformation at an amazing scale. You can have a chatbot create thousands or millions of whatever piece of garbage you want to introduce into the world, and then, if that's not good enough, you can say: write studies, make them longer, and write a paragraph about each of these fake studies. So in the hands of troll farms, and we know they exist, we know there are bad
actors in the world, this becomes a tremendous tool. One thing is you get people to believe things that aren't true; another is you get them to not believe anything. Democracy doesn't really work if we don't know what to believe, and if we ruin people's faith in the system and their knowledge about what's going on, how can they possibly vote in informed ways?

At edsuperia asks: I spent a few days learning more about large language models, and now I think they probably shouldn't work as well as they apparently do. They're basically the dumbest way of generating text; how is it that they work at all? They're not really a dumb way of generating text; they're actually pretty sophisticated. The dumbest way would be to have a big dictionary of everything that everybody has said before, and say: if I've seen these three words, what's the most likely fourth word? They kind of work that way, but they also do some generalization, taking related words and treating them as if they're similar, and that allows them to say some things that are new but stick pretty close to the things we've seen before. So it's like autocomplete on steroids. If you have enough data, autocomplete turns out to work pretty well.

At sibatha asks: is AI really that good or bad? What is the worst-case scenario you can come up with when it comes to AI? Well, the best case is about helping science and technology. The worst case, I think, is that it drives us into the hands of fascism by undermining trust. And maybe even worse than that is, if we do make them sentient, they get upset and they want to put us all in zoos. I don't think that's super likely; I hope those scenarios always remain science fiction. But as the pace of AI accelerates, we should be thinking about them more and more.

Next question: at Alexander Sumer asks: what will it take to make large language models, and AI systems more broadly, tell fewer lies and be more logically consistent? The first thing to say is they don't really lie, because they don't really have intentions, but they say a
lot of things that aren't true, and I don't think we can fix it within the current paradigm. This is why I think we need a paradigm shift. The current paradigm is just about what is plausible in this context: people have said these words, what other words could I say here? Truth and logical consistency are really about something different. They're about knowing facts and being able to reason over those facts, being able to say that if Socrates is a man and all men are mortal, it follows that Socrates is mortal. The way these neural networks are built, that's just not part of what they do. We need to be able to bridge these approaches. I call that neurosymbolic AI: taking neural networks plus symbolic stuff and putting those together. We need to build bridges between two worlds.

At Raphael Carreras asks: how much of AI's success is because of hardware, custom AI chips, new architectures, etc.? It's a good question. There's a great paper by Sara Hooker called "The Hardware Lottery." The argument she makes is that the AI we're doing now is mostly a function of the chips we're using right now. This is just a tiny little computer that you can use to learn about microprocessors and how to build circuits. It's not a very sophisticated chip; this is not going to power a large language model. You could power a very tiny language model with it if you wanted to. I would not be surprised if, 20 years from now, people look back at the current time and say: yeah, they had all those GPUs, they figured out what they could do with them, but that wasn't really the way to get to artificial general intelligence. Maybe somebody else had to find a different chip, or maybe everybody woke up when they realized how much large language models were lying and decided they just needed to do something else, even though this was all very attractive.

At Phil jkc, who I believe I know, hey there, asks: what relevant physical attribute in the human brain is missing in modern deep learning architectures, for performance? Why do we have
reason to believe that these are relevant? The first thing to realize is that deep learning is sometimes called biologically plausible: it works in something like the way the human brain does. But I would say that "something like" is very thin. As we dig in, we see structure everywhere. The brain is not just a uniform piece of spam; there are a thousand different kinds of neurons, and if we dig even further, each connection between neurons has something like 500 different proteins. There's a lot of structure in how the brain works. That doesn't mean we understand it all, but our neural networks basically have one kind of neuron that does one thing: it sums things up. That's not really how the brain works. I would also say that many people think we'll figure out how to do AI by solving neuroscience. I would say we actually need AI in order to solve neuroscience, because the brain is so complicated we probably can't do it with our own feeble human brains. We probably need computers to help us figure out how the brain works, but we're going to have to do a better job of AI before we get there.
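The "attention" idea Marcus mentions for Transformer models — each position scoring the other positions for relevance, then taking a weighted average — can be sketched in a few lines. This is a toy, pure-Python illustration of standard scaled dot-product attention, not the internals of any particular model; the tiny vectors are made up for the example.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Score each key by its dot product with the query, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Blend the value vectors according to the attention weights.
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Three token positions with 2-d vectors; the query attends most strongly
# to the key it aligns with best, so its value dominates the output blend.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
v = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
print(attention(q, k, v))
```

The scaling by the square root of the dimension keeps scores from growing with vector size, which is what lets the softmax stay well-behaved as models get wider.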
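The "big dictionary" baseline Marcus describes — if I've seen these three words, what's the most likely fourth word — can be sketched as a toy lookup-table predictor. This illustrates the dumb baseline he contrasts language models with, not how real large language models work; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_context_table(corpus):
    """Count which word follows each two-word context in the corpus."""
    table = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for w1, w2, nxt in zip(words, words[1:], words[2:]):
            table[(w1, w2)][nxt] += 1
    return table

def predict_next(table, w1, w2):
    """Return the most frequent next word for a context, or None if unseen."""
    counts = table.get((w1, w2))
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the cat sat on the mat again",
]
table = train_context_table(corpus)
print(predict_next(table, "sat", "on"))   # "the"
print(predict_next(table, "on", "the"))   # "mat" (seen twice vs. "sofa" once)
print(predict_next(table, "cat", "on"))   # None: context never seen
```

The key limitation is the last line: an unseen context yields nothing at all. The generalization Marcus mentions — treating related words as similar — is exactly what the neural-network version adds on top of this counting scheme.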
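The Socrates example from the neurosymbolic discussion — knowing facts and reasoning over them — can be sketched as a tiny forward-chaining rule engine. The (predicate, subject) fact encoding and the rule format here are assumptions made purely for illustration, not any real system's representation; the point is that the conclusion follows deterministically from explicit facts and rules, unlike a network predicting plausible text.

```python
def derive(facts, rules):
    """Forward-chain: apply rules to known facts until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

# "Socrates is a man" as a fact; "all men are mortal" as a rule
# mapping the premise predicate to the conclusion predicate.
facts = {("man", "Socrates")}
rules = [("man", "mortal")]

print(("mortal", "Socrates") in derive(facts, rules))  # True
```

A symbolic engine like this is rigid but sound; the neurosymbolic proposal is to pair this kind of guaranteed inference with a neural network's flexible pattern recognition.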
Info
Channel: WIRED
Views: 728,873
Keywords: a.i. questions, ai expert, ai expert tech support, ai questions, ai questions wired, ai tech support, ai vs humans, chatgpt questions, gary marcus, humans vs ai, innovation, ott tech support, science & technology, tech support ai expert, turing test, wired, wired ai expert
Id: Puo3VkPkNZ4
Length: 16min 32sec (992 seconds)
Published: Tue Mar 21 2023