Jim Keller: Most People Don't Think Simple Enough | AI Podcast Clips

Captions
So what have you learned about the human abstractions, from individual functional human units to the broader organization? What does it take to create something special?

Well, most people don't think simple enough. Do you know the difference between a recipe and understanding? There's probably a philosophical description of this. Imagine you're going to make a loaf of bread. The recipe says: get some flour, add some water, add some yeast, mix it up, let it rise, put it in a pan, put it in the oven. That's a recipe. Understanding bread, you can understand biology, supply chains, grain grinders, yeast, physics, thermodynamics. There are so many levels of understanding there. And when people build and design things, they frequently are executing some stack of recipes. The problem with that is the recipes all have a limited scope. Look, if you have a really good recipe book for making bread, it won't tell you anything about how to make an omelet. But if you have a deep understanding of cooking, then bread, omelets, sandwiches, there's a different way of viewing everything. When you get to be an expert at something, you're hoping to achieve deeper understanding, not just a large set of recipes to go execute. And it's interesting to watch groups of people, because executing recipes is unbelievably efficient if it's what you want to do. If it's not what you want to do, you're really stuck. That difference is crucial. And everybody has a balance of, let's say, deeper understanding and recipes, and some people are really good at recognizing when the problem is to understand something deeply.

That makes sense. Does it, at every stage of development, is deeper understanding needed by the team?

Oh, this goes back to the art versus science question. If you constantly unpacked everything for deeper understanding, you'd never get anything done. And if you don't unpack understanding when you need to, you'll do the wrong thing. And at every juncture, human beings are these really weird things, because everything you tell them has a million possible outputs, and then they all interact in a hilarious way. So having some intuition about what you tell them, what you do, when you intervene, it's complicated. It's essentially computationally unsolvable.

Yeah, it's an intractable problem. Humans are a mess. But with deep understanding, do you mean also sort of fundamental questions, things like what is a computer, or why, like the why question, why are we even building this, sort of the purpose? Or do you mean more like going towards the fundamental limits of physics, really getting into the core of the science?

Well, in terms of building a computer, think a little simpler. Common practice is you build a computer, and then when somebody says, "I want to make it 10% faster," you'll go in and say, "All right, I need to make this buffer bigger, and maybe I'll add an add unit, or I have this thing that's three instructions wide, I'm going to make it four instructions wide." And what you see is each piece gets incrementally more complicated. And then at some point you hit this limit, like adding another feature or buffer doesn't seem to make it any faster. And then people say, "Well, that's because it's a fundamental limit." And then somebody else looks at it and says, "Well, actually, the way you divided the problem up and the way the different features are interacting is limiting you, and it has to be rethought, rewritten." So then you refactor it and rewrite it, and what people commonly find is the rewrite is not only faster but half as complicated.

From scratch? Yes.

So how often in your career, or maybe more generally, have you seen it as needed to just throw everything out?

This is where I'm on one end of it: every three to five years.

Which end are you on?

Like, rewrite more often.

And three to five years is...

If you want to really make a lot of progress on computer architecture, every five years you should do one from scratch.

So where does the x86-64 instruction set come in? How often do you...

I was the co-author of that. That's back in '98, that's 20 years ago.

Yeah. So that's still around. The instruction set itself has been extended quite a few times.

Yes. And instruction sets are less interesting than the implementation underneath. On the x86 architecture, Intel's designed a few, AMD's designed a few very different architectures. And I don't want to go into too much of the detail about how often, but there's a tendency to rewrite it every ten years, and it really should be every five.

So you're saying you're an outlier in that sense, in the "rewrite more often."

Rewrite more often.

Well, isn't that scary?

Yeah, of course. Well, scary to who? To everybody involved. Because, like you said, repeating the recipe is efficient. Companies want to make money.

Well, no, the individual engineers want to succeed, so you want to incrementally improve, increase the buffer from three to four.

Well, then we get into the diminishing return curves. I think Steve Jobs said this, right? You have a project, and you start here, and it goes up, and then it has diminishing returns. And to get to the next level, you have to do a new one, and the initial starting point will be lower than the old optimization point, but it'll get higher. So now you have two kinds of fear: short-term disaster and long-term disaster.

And you're right, like, people with a quarter-by-quarter business objective are terrified about changing everything.

Yeah. And people who are trying to run a business or build a computer for a long-term objective know that the short-term limitations block them from the long-term success. So if you look at leaders of companies that had really good long-term success, every time they saw that they had to redo something, they did.

And so somebody has to speak up?

Or you do multiple projects in parallel: you optimize the old one while you build a new one. But the marketing guys are always like, "Promise me that the new computer is faster on every single thing." And the computer architect says, "Well, the new computer will be faster on the average, but there's a distribution of results in performance, and you'll have some outliers that are slower." And that's very hard, because they have that one customer who cares about that one.
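Keller's two-curve picture of diminishing returns can be sketched numerically. A minimal illustration, assuming logistic-style maturity curves; the performance ceilings, rates, and time constants below are invented for the sketch, not from the conversation:

```python
import math

def maturity(t, ceiling, rate=1.0, midpoint=3.0):
    """Performance of a design t years into its optimization,
    modeled as a logistic curve that saturates at `ceiling`.
    All parameters here are illustrative assumptions."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Old design: optimized for years, close to its ceiling of 100 units.
old_now = maturity(t=8, ceiling=100)

# New from-scratch design: higher ceiling (150), but at launch it
# starts below the old design's current level -- the short-term dip.
new_at_launch = maturity(t=0, ceiling=150)
new_mature = maturity(t=8, ceiling=150)

print(f"old design today:       {old_now:.1f}")
print(f"new design at launch:   {new_at_launch:.1f}")
print(f"new design when mature: {new_mature:.1f}")
```

With these assumed numbers, the new design launches below the old optimization point but eventually passes it, which is exactly the trade between short-term and long-term fear described above.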
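The closing point, that a new chip can be faster on average while slower on some workloads, can be made concrete. A small sketch with invented per-workload speedups (the workload names and ratios are assumptions for illustration; the geometric mean is a conventional way to average speedup ratios):

```python
import math

# Hypothetical speedups of a new chip over the old one
# (1.0 = same speed; all numbers invented for illustration).
speedups = {
    "compile": 1.30,
    "database": 1.25,
    "crypto": 1.40,
    "legacy_app": 0.85,  # the outlier that one customer cares about
}

# Geometric mean of the ratios: the "faster on average" headline number.
geo_mean = math.prod(speedups.values()) ** (1 / len(speedups))

# The distribution still contains slower outliers.
slower = [name for name, s in speedups.items() if s < 1.0]

print(f"average speedup: {geo_mean:.2f}x")
print(f"slower outliers: {slower}")
```

The headline average comes out above 1.0x even though `legacy_app` regresses, which is the marketing-versus-architect tension in the last exchange.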
Info
Channel: Lex Fridman
Views: 374,095
Keywords: jim keller, artificial intelligence, ai, ai podcast, artificial intelligence podcast, lex clips, lex fridman, lex podcast, lex mit, lex ai, mit ai, ai podcast clips, ai clips
Id: 1CSeY10zbqo
Length: 7min 45sec (465 seconds)
Published: Mon Feb 10 2020