Sam Altman (CEO, OpenAI) Talks About Artificial Intelligence at Big Compute 20 Tech Conference

Video Statistics and Information

  • Original Title: Sam Altman (CEO, OpenAI) Talks About Artificial Intelligence at Big Compute 20 Tech Conference
  • Author: Rescale, Inc.
  • Description: SamAltman #AI #ArtificialIntelligence Sam Altman spoke at the inaugural Big Compute 20 tech conference at SFJazz in downtown San Francisco on February ...
  • YouTube URL: https://www.youtube.com/watch?v=0TRtSk-ufu0
Captions
Announcer: Ladies and gentlemen, please welcome to the stage, with moderator Shawn Hanson from Rescale, Sam Altman, CEO of OpenAI. [Applause]

Shawn Hanson: Welcome, all of you, and thank you for joining us today at the inaugural Big Compute conference, held in the beautiful SFJAZZ in downtown San Francisco. We're going to be talking about a lot of topics today, but the general theme is that the world has seen big data, and now we're seeing the advent of big compute. To talk about how it's changing the world, here with us is Sam Altman, CEO of OpenAI. Thank you very much, Sam. Let's kick this off with a comment you recently made in an address at Google, where a participant asked you what's more valuable, big data or open-source machine learning algorithms, and you said —

Sam Altman: I think I said the following: I used to ask companies how they're going to get a lot of data; now I ask them how they're going to get a lot of compute.

Shawn Hanson: Can you talk a little bit about what you meant by that?

Sam Altman: I certainly think data is still important, it's still valuable, but it turns out that the most sophisticated results, I think, have been more about massive compute than massive data. It turns out there's lots of data available on the internet, and there's also, in some cases, this E = mc² equivalence between a lot of compute and a lot of data, because you can use a lot of compute to generate a lot of data. If you think about one of OpenAI's results from last year with Dota 2 — we beat the best team in the world with no data whatsoever. The entire thing was the agents self-playing each other, exploring the environment, trying what worked, stopping what didn't, and sort of good RL algorithms to do that. There are other cases like that as well, where we need very little data because we have a lot of compute to run a lot of simulations. Now, for a lot of businesses it is of course important to have data, but for the field in general I suspect data will be the least important of those three categories.

Shawn Hanson: Can you talk a little bit about the big opportunities you see emerging because of the advent of big compute? Artificial intelligence is the one we talk about, but maybe there are other opportunities you see.

Sam Altman: Honestly, I have such incredible blinders on — I care about that more than everything else put together. There probably are lots of other opportunities, but that's the one I think about 16 hours a day.

Shawn Hanson: So let's talk about that a little more. There are a couple of milestones you've posted on OpenAI — one of them was Dota 2, another was human-like dexterity with a Rubik's Cube, and another was what you've been doing with natural language. Can you talk about some of these exciting milestones you see happening at OpenAI?

Sam Altman: There are so many. Maybe we'll focus on language, because I think that's going to be the most applicable to sort of all businesses. One of the most exciting developments in the field in the last few years has been how good AI for natural language is getting. I think we are going to see an explosion over the next few years of systems that can really process, understand, interact with, and generate language, and I think it'll be the first way that people really feel powerful AI, because you'll be able to interact with these systems the way you do by talking to somebody else. You'll be able to have dialogue that actually makes sense. Computers will be able to process huge volumes of very unstructured text, and as you interact with that system, in whatever way you want, you'll get what you want.
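As an aside, to make that kind of language generation concrete: the sketch below assumes the Hugging Face transformers library and the publicly released GPT-2 weights (GPT-2 comes up later in the conversation), and shows roughly what sampling text from such a model looks like. It is an editorial illustration, not something demonstrated in the talk.

```python
# Sketch: generating text with the publicly released GPT-2 weights
# via the Hugging Face transformers library (pip install transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The most exciting development in AI over the next few years is"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])  # prompt plus model-generated continuation
```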
Shawn Hanson: You've talked a little bit about the compounding effects of the mundane — we've got short-term mundane effects, and the long-term fantastic things are further out. There's this idea of feeding stuff into smart speaker systems or digital assistants. Do you see anything happening with AI that my mom, watching today, might be able to understand — something she might feel compounding in her life?

Sam Altman: I think one big one is speech recognition. Most people remember how bad it was five years ago, and people who use Siri or whatever have noticed that it gets a little bit better every year — actually not a little but a lot better every year — and now it basically doesn't mess up even in difficult environments, or at least it doesn't mess up appreciably more than humans do when they're trying to understand you. So I think that's one that seems to resonate with a lot of people, because it's in recent memory.

Shawn Hanson: If someone comes to your website at OpenAI and tries to get a sense of what you actually do, I think my mother would have a hard time understanding what that is. I wonder if you could talk a little bit about how what you're doing affects the average, everyday person.

Sam Altman: Well, we're working on a very long time horizon. I think there have been three great technological revolutions so far in human history: the agricultural revolution, the industrial revolution, and the computer revolution. I think we are now in the early innings of the AI revolution, and I expect that one to be bigger than all three previous ones put together. Thinking, understanding, intelligence — that really is what makes humans humans, much more than our ability to get physical stuff done in the world. So I think this is going to be a huge deal and impact life in a lot of ways. I've been very inspired, as many others have, by the example of Xerox PARC, and the technology they created was an enablement for the computer revolution in a lot of ways. Alan Kay, who is one of my research heroes, used to sort of berate me, nicely, saying these companies never think big enough — at Xerox PARC, by his count, they created something like 30 trillion dollars of value. Who's thinking on that scale? My hope is that the work we do enables 300 trillion dollars of value for other companies — and, you know, we'll also hope to capture some of that value ourselves — but we really want to figure out this grand challenge of how intelligence works and how we can make that available to people.

Shawn Hanson: Let's talk a little bit more about that revolution. Revolution can be scary. You mentioned in a recent chat with the CTO of Microsoft a book called Pandaemonium, in which people from the Industrial Revolution were quoted in first-hand accounts saying the machines are going to take over the world, we're all going to die, all these problems are going to happen. But you've been a very optimistic voice in the middle of all that.

Sam Altman: Well, they all said that hundreds of years ago now, and we're still here, we're still very busy, we still have work to do. I think it is both true that this time is different and also that everything has happened before and will happen again. This will be different — general intelligence is a powerful thing. I also believe that, as hard as it was at the time of the Industrial Revolution to imagine the jobs of computer programmers working with big compute, it's hard for us to sit here and think about what the jobs on the other side of this will be.
But human demand, desire, and creativity seem pretty limitless, and I think we will find new things to do. Betting against that has always been a mistake.

Shawn Hanson: Can you talk a little bit more about the revolution you see happening? You've said in multiple talks that AGI is a kind of stupendous and difficult thing to talk about and imagine.

Sam Altman: I think it's very hard to think about what the world definitively looks like when computers are more intelligent in some ways than humans, or when computers can do most work that humans can. The only prediction I can make with confidence is that things will be very different, and anyone who says we're going to keep everything the same is lying. But although change is inevitable, we can work really hard to make sure the future — although it's guaranteed to be different — is better.

Shawn Hanson: In that same address with Microsoft, you said the computers we're going to see in the next five years are going to be mind-blowing. If we take, say, GPU and GPGPU stuff off the table, and even the perennial question of whether quantum computing arrives in the next five years, do you see anything else that's mind-blowing within that time horizon?

Sam Altman: I think it is true that Moore's law is slowing down. People have all kinds of ideas about things they're going to do to keep it going; maybe they work, maybe they don't. But the version of it that is important for AI specifically — how big can we make our biggest models, however we get there: plugging a bunch of computers together, optical interconnects, whatever it takes to train these massive models — that has been growing about 8x per year for about eight years now, and I think it's going to keep going like that for about five more years. So again, at this point I have this narrow focus on the one thing that's really important to me; there are probably a lot of other things that are going to happen in compute, but the question is whether we're going to have bigger and bigger computers to train, you know, neural networks, and the answer is yes, and that's super exciting.
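To put that growth rate in perspective, here is a quick back-of-the-envelope calculation of what roughly 8x-per-year compounding implies — an editorial aside, not something computed in the talk:

```python
# Back-of-the-envelope compounding implied by "about 8x per year".
growth_per_year = 8

past_eight_years = growth_per_year ** 8   # growth in the largest training runs so far
next_five_years = growth_per_year ** 5    # further growth over the horizon he describes

print(f"8x/year for 8 years: {past_eight_years:,}x")       # 16,777,216x
print(f"8x/year for 5 more years: {next_five_years:,}x")   # 32,768x
```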
Shawn Hanson: There's a big gap between the haves and the have-nots, and it's getting wider, especially with the advent of big compute — the set of people who have access to this essentially infinite amount of compute is shrinking. What do you see happening with this trend, as startups want to get involved but don't really have that kind of access?

Sam Altman: I think it is a huge problem. There will be a handful of companies, like OpenAI, that can train the largest models, but once they're trained they're not as expensive to run — not even close, obviously. So I think what's going to have to happen is that a few people can train the large models, and we figure out how to share them with the other people who can't train them. I think that's how you resolve the haves-and-have-nots issue.

Shawn Hanson: I have a favorite author who talks about history standing at a hinge point — major milestones and events in history make a huge difference, but when you're standing at them you don't realize it at the time; you only see it looking back. A lot of people like to ask people like you to predict the future, but if you were to look back 20 years from now, is there anything you think we could characterize as a hinge point in that way?

Sam Altman: I think there are very rare, occasional decisions where a small group of people, tired at 3:00 in the morning in a conference room, make a 55/45 call, and it has a massive influence on the outcome of history. But those are extremely rare. Most of the time it's squiggling around a curve that's going to go up and to the right and sort of unfold this technological future, and you get a little bit wrong and a little bit right, but eventually progress continues. In terms of those few incredibly consequential decisions — everyone has a story they love. The one I love is the Russian military officer who decided not to push the button to launch a nuclear strike when he thought they were likely under attack, and it turned out to be an instrument error. That one decision by one individual — when really all of his training and policy said he should have pushed the button — is a case where the world could really have gone the other way. To be perfectly honest, I think a decision like that is bigger, in terms of one hinge decision, than any single decision tech companies usually make.

Shawn Hanson: One of the most intriguing things about OpenAI is, I think, the mission you've posted online, which specifically talks about making AGI safe and beneficial for humanity. A lot of these hinge points are tied to someone's moral compass and what guides them — there's a lot in the news about senators, one in particular who votes based on being guided by moral principles or religion. In a world where there are a lot of bad actors, or even just plain amoral actors, how do you help people stand at that place that could be a hinge point?

Sam Altman: I mean, we have a set of principles — we tried to write up in our charter what those are, and we'd like the public to hold us accountable to them. People can disagree with the charter, though. As the stakes get higher and higher, no one organization, and certainly no one person, should be making decisions about what the new social contract looks like, how this technology gets used, and how we share governance and economics. I think a thing we will move to in the coming years or decade is that more and more of our decisions will be influenced by an advisory board we will need to put in place, of people who can represent different groups in the world, which right now we don't have. As we get closer to something that feels like AGI, none of us deserves that right or wants that responsibility to single-handedly determine how we're going to live.

Shawn Hanson: A year ago you mentioned that some of the world's hardest problems would be best solved by artificial intelligence, and I think you brought up climate change as an example. Can you talk about the class of hard problems you think would be best solved by going after AI first?

Sam Altman: Well, I think there are a lot, but I would like to talk about the class of problems that I think aren't, because one thing the technology industry gets wrong — and I myself am often guilty of this — is believing that technology solves all problems. First of all, technology creates a lot of problems too. I believe it's net good for the world, but there's a balance sheet. And second, I think there's a huge set of problems around public policy and people's optimism for the future, and the kind of betrayal they feel by the country, or the sense that the American Dream somehow got taken away, where most people in Silicon Valley say, well, we just need better technology and we'll solve that — we just need AI and AI will solve that.
And AI will help, and better technology will help, but I think these are policy and governance and leadership issues, and it is a mistake for the industry to say we know better than everybody else how to solve them.

Shawn Hanson: Are there some examples of that you think we should focus on?

Sam Altman: One very basic one, I would say, is that the current generation of young people is the first generation in American history — if you believe the polling, which, who knows — to not think their lives are going to be better than their parents'. So this has worked for 244 years and now it doesn't, and I think starting with why it doesn't is a great question.

Shawn Hanson: How do you see artificial intelligence helping with some of these problems? Maybe this is back to the theme of predicting the future, which is unfortunate, but there's a Recode interview you gave where you said humanity will at some point build artificial intelligence that surpasses human intelligence. How do you feel that affects some of these underlying policy issues?

Sam Altman: Well, I think what I said — or what I meant to say — is that we are guaranteed to eventually do that if we don't destroy ourselves first, which is possible; I think the world is in an unstable place. But given enough time, biological intelligence should always end up creating digital intelligence, which is likely to be superior in many ways. Whether or not we ever create digital consciousness is, I think, up for debate, but digital intelligence, given enough time, is for sure, and I think that just makes everything super different. Humans are really good at a lot of things, but computers and AI turn out to be really good at a lot of things as well, and my most optimistic hope for the future is that humans and AI — some sort of hybrid, merged human and AI together — are just far more capable than either on their own.

Shawn Hanson: Let's talk a little bit about barriers to adoption. There are those online who asked in advance whether you see an AI winter coming, and what the barriers are. Can you talk a little bit about the technical barriers to progress, and also the barriers that are not as technical in nature?

Sam Altman: Well, I think the technical barriers are still huge, and it's a mistake to say they're not. We will squiggle around the exponential curve of progress, up and down, and there will be down moments that could last months or years where people say the AI winter is here. There's a class of person that loves to call the top in markets and research progress and whatever — like a friend of mine I used to think was this very brilliant predictor of recessions, until I realized he has forecast eighteen of the last two recessions. There will be a lot of people who say, finally, AI progress is over, and there will be periods that are difficult, periods where we're walking in the wilderness, and at some point people will be right. But people are so desperate to say it's going to stop working now, and they have been for the last eight years, and it's just been this relentless upward elevator of progress. It is possible — in fact I would say it's probable — that we're missing some big ideas to go all the way, and I have my own thoughts about what they are, but honestly that's speculation, and predicting the future is hard.
What I can say with certainty is that the things we know work are going to go a lot further — that's an exponential curve. The flood of talented people — if you go ask any really smart 18-year-old studying computer science in college what they want to work on, they're very likely to say AI — that flood of talent into the field is another exponential, so that's two together. And algorithmic gains sort of keep going pretty well. So I think there are things that will work better than we thought and worse than we thought, and we will hit some dark periods and stumbling blocks, but the biggest miracle of all is that we have an algorithm that can learn, full stop. Truly, legitimately, we have an algorithm that can learn, and it seems to keep scaling with more compute. In my whole career the central lesson has been to scale — scale things up more than you think. When people see a curve that's going like this, and it stops here, and they have to predict the next ten years of progress, my default assumption is to believe that the curve keeps going, for a while at least. Most people's default assumption seems to be that it's going to keep going on that same exponential for three more months and then perfectly flatline, which is a weird framework, and I don't think that's what's going to happen.

Shawn Hanson: Talk a little bit more about meta-learning — the ability to do deep learning with machines that eventually learn to learn. Is that the most exciting development you've seen recently? Maybe you could share a little more about that.

Sam Altman: I would say, more generally, generalized learning is exciting, in many forms: algorithms that can learn their own problems, that can go off and explore; the ability to learn a lot about one task and apply it to another task; the ability to pre-train these big models and then use them to solve other problems with their knowledge of the world. I think human intelligence is very close to this concept that you can take existing information and thoughts and apply them quickly to new problems. It's actually remarkable how quickly humans can learn — it takes a long time to train up, maybe 20 years to get pretty smart, but then you can learn a new thing very fast, and you can apply knowledge you were told once or a few times to solve a new problem in like three seconds. The fact that we're beginning to see that happen with AI is, I think, quite remarkable.

Shawn Hanson: I was most impressed just looking at the amount of investment required for the Dota 2 experiment — seeing how much compute power was thrown at that problem. Do you see that accelerating, or do you see that only a few companies will have the power to invest at that scale?

Sam Altman: You know, I just had this wistful nostalgia for when our compute bills were that cheap. It's just going to keep going.

Shawn Hanson: So I guess that opens the room for larger geopolitical players — governments — to invest in this way.

Sam Altman: Maybe, but the ability of technology to lower costs, and to let fewer and fewer people have more and more influence, is remarkable. If you think about the big iron engineering projects of the past — the Manhattan Project, the Apollo program, things like that — those had to be done by nation states; it was so much money. I think we actually can do this without that. We do need a lot of money, but not government scale.

Shawn Hanson: There was a statement you made about avoiding the wrong thing, in a kind of famous 2016 New Yorker article.
You said there are some things you would never do with the Department of Defense, so I wonder, could you discuss what you mean by that? You have some nation-state actors that people worry about doing the wrong thing.

Sam Altman: Okay, well, first of all I would like to say there are a lot of things we would do with the Department of Defense, and I think the current mood in some parts of Silicon Valley — which is, we hate the U.S., and we hate the U.S. military even more — is just an awful stance. There are plenty of times where, if asked to help our country, we'd be proud to do so. There are some things I would say we wouldn't do, and we have general thoughts, but the interesting ones are in the gray area in between, where we have to make case-by-case decisions. In general, though, I think we — the United States, its citizens, Western society, whatever you want to call it — are better off if the United States government remains a powerful force in the world than if it doesn't, and we are happy to help.

Shawn Hanson: In a very different setting, you once said that growth masks all problems — you were talking about investment in startups, growth in general. Is there a version of that that applies to artificial intelligence?

Sam Altman: Scaling things up works really well, but it also papers over other problems. In companies that's obvious — you can cover over deep-seated cultural problems because everyone's excited by the growth. In AI, it may be that because scale keeps working, we're not doing as much research on more efficient algorithms as we should.

Shawn Hanson: To return to this theme of being safe and beneficial, and thinking about an early warning system: I've never been skydiving, but there's a concept of ground rush, where you're approaching the ground and realize it's getting really close. In some of your talks you've said the timeframe for AI adoption is not very important — it's a blink of an eye in the grand scheme of things, 30 years versus 10 years. Do you think that is the dumbest debate in Silicon Valley?

Sam Altman: A lot of competition for that title. I still think it wins.

Shawn Hanson: Do you see any early warning systems like that for us — like we're going to go splat — where we start seeing ground rush with these changes?

Sam Altman: There are a lot. I think when a system can start doing things like saying, "you asked question X, but it seemed like you really meant Y — is that right?", and being right most of the time, that to me will feel like a moment to start taking things really seriously.

Shawn Hanson: Do you see early indicators right now? You mentioned a little of what OpenAI is doing — are there other companies doing things that you think are major inflection points?

Sam Altman: I think there have been many great results throughout the field over the last 12 months, but there's not one that's like, all right, this is the definitive thing.

Shawn Hanson: Talk a little bit about the recent investments in OpenAI — Microsoft's recent large investment was one of them. Some people wonder how that is going to be used: to buy more compute power? What kind of themes do you see that investment going toward — compute, data, people? And there's a talk you gave — I think it was that same Google address — where you said the most important invention you saw was the joint —
Sam Altman: The joint stock corporation, yeah. I think someone asked me about the most important invention of the Industrial Revolution. What I said there is that, studying that as a kid, I would have had a very different answer — I would have tried to think really hard and pick the one specific invention that's had the most impact on the world since then. Studying it as an adult, and maybe particularly given the career that I chose, I think there was this one thing that enabled so much, which was that the British government decided to grant a sort of second-order sovereignty to companies: you could have this legal entity where a bunch of people could be aligned, with capital and people working for it, and you got this new legal structure. That is such a powerful idea. Before that, you were basically limited to small groups of people who could trust each other, and all of a sudden you had this entity that could glue a bunch of people and capital together, and it had this incentive structure — everyone wants the share price to go up, so everyone's incentives are more in line. I really do believe that incentives are a superpower, and if you can get incentives right, or make incentives better, that's the thing you should work on. So the British government invents this one single thing that enables this incredible boom, in terms of not just coming up with inventions but making them great and getting them into people's hands, and it is so underappreciated how valuable that was.

Shawn Hanson: That's a little bit about ecosystems and helping people align their network effects. Do you see the same thing happening today in the artificial intelligence world? Do you wish people would align together to drive some kind of economies of scale?

Sam Altman: I'm pretty happy with what we're doing there — I'm pretty happy with what we've been able to align. I think more of that would be good, but it's off to a good start.

Shawn Hanson: For those who aren't familiar with what's going on right now at OpenAI, maybe talk a little about what you see happening.

Sam Altman: Honestly, there's no way I can make this sound exciting. We show up every day, and we bang on our computers, and we try to get algorithms to work, and then we find out it was some stupid bug, and we all get upset with each other. No — it's a little bit better than that. We are trying our hardest to discover what makes intelligence work, and we are trying to think not about how we get our applications a little bit better next year, but, over the long arc of history, about what it takes to make machines that truly think. We do all sorts of things along the way, but that's it.

Shawn Hanson: You've mentioned a few times that, even though you have an optimistic voice, there are a few areas where you worry about risk with the advent of AI — security being one of them, which I think you've called a particularly high risk. Would you talk about some of the risks we hope to mitigate, that you think we really have to address in the next couple of years?

Sam Altman: AI-specific risks, or others?

Shawn Hanson: AI specifically.

Sam Altman: Oh, yeah. I do think one application of AI — not in the next couple of years, but on a longer timeframe — is the threat that advanced AI will pose to cybersecurity. I think that is likely to be huge, and even without that, cybersecurity is difficult, so I think that's a great problem to focus on.
Shawn Hanson: You've mentioned quantum computing in the same breath — that it would disrupt things in a major way. What do you see happening there in the next couple of years? You used to have a broader view of investment than most people.

Sam Altman: People love to talk about quantum computing breaking encryption, and I think that is not a thing to be super concerned about. I think the number of logical qubits it takes to do that is far enough off that we'll know when they're getting close, and I also think we have plenty of time to transition to quantum-resistant encryption.

Shawn Hanson: There are a lot of startup founders here in the room with us today, and I wonder if we could zoom in a little on this idea of mimetic innovation. You've used this phrase in some of your previous talks — you worry a little bit about copycat innovation, people just copying the idea in front of them, which is not true innovation. In the artificial intelligence realm, do you worry about that, or is it truly groundbreaking enough?

Sam Altman: I think it's basically deeply wired into human nature, and in any setting a percentage of people somewhere in the high 90s is going to be super mimetic. In my experience it's difficult to stop, but it's okay, because the other two percent of people drive the world forward. Most people will be very incremental and mimetic, and a few people will be truly original thinkers, and that's all it takes, lucky for us.

Shawn Hanson: We had a VC panel here just before you, where we talked about what they're investing in, and about the buzzwords of artificial intelligence — where you just anoint a simple algorithm you created as artificial intelligence. For someone to truly say "I'm doing AI" in a startup, what kind of bar do they have to hit for you to know it's real?

Sam Altman: They don't invite me on VC panels anymore, because I can't keep my facial expressions in check. Every few years there's some buzzword — we're going to do this with social, we're going to do this with podcasts, we're going to do this with crypto, we're going to do this with AI — and basically I think by the time you get, say, three buzzwords in the first two sentences of a startup pitch, you can pretty safely ignore it, and even at one you should be a little bit skeptical unless they're clearly doing it. The number of startups that say "we're an AI-driven X" and are actually AI-driven is, I don't know, one in twenty, one in fifty, something like that. And the lesson here is that startups pitch themselves however they think will work; VCs often fall for it. The good VCs dig in and don't fall for it.

Shawn Hanson: If we look a little bit at GPT-2 — you unveiled it, but didn't completely put it out in the open, in the public domain.

Sam Altman: We did, eventually. We just did a sort of staggered release.

Shawn Hanson: It's very topical — a topic all of us care about, with recent politics and so on. Could you talk about the reason you released it that way, and whether it signals anything else?

Sam Altman: I do think you should expect us to do more staged releases like that. When we develop a technology that we think is probably safe to release, but that we think will eventually become unsafe as it scales up, we'd like the world to get a heads-up. The world has gotten used to photoshopped images over a period of time, and now people know not to trust them, but people do still trust text — press releases, news, whatever — for the most part. And a new thing I think will happen is that we're not that far away from entirely faked videos of world leaders saying whatever you want; people tend to trust that too, and I think the world needs time to adapt to that new reality. Part of our goal with how we handled that release was to say: there is a change coming, we're going to get through it, but you need to think about this as a possibility — you being regular people reading the internet, policymakers, whoever. That was our goal with how we did it. My guess is that someday, far in the future, when world leaders give an address, they'll cryptographically sign it and we'll just get used to that — you know, all videos can be faked, or they only release it from their own account, or whatever — but there will somehow be verification, and the world needs time to adapt to that.
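As an aside, the kind of cryptographic signing described here is already routine for text and files. The sketch below assumes the third-party Python cryptography package and an Ed25519 key pair (details not specified in the talk); it shows the basic sign-and-verify flow such a scheme would rely on.

```python
# Minimal sketch of signing and verifying a statement with Ed25519,
# using the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the speaker
public_key = private_key.public_key()        # published so anyone can verify

statement = b"Official address, 2020-02-19: ..."
signature = private_key.sign(statement)

try:
    public_key.verify(signature, statement)  # raises if the statement or signature was altered
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```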
Shawn Hanson: You mentioned in a previous address the balance between politicians thinking about the short-term economic impact on life versus the world as it will emerge in 20 or 30 years, and trying to balance those two very different issues. Do you think our world leaders think enough about artificial intelligence, and what should be on their minds as they think about their constituents?

Sam Altman: Clearly not, but the list of things I would like our world leaders to do differently, before they become AI experts, is quite long.

Shawn Hanson: Anything in particular you want to raise now, in front of this many people?

Sam Altman: [inaudible]

Shawn Hanson: That's great. Well, we have some time for a few questions from the audience. Anything final you want to share with us before we do that?

Sam Altman: That was really fun.

Shawn Hanson: Okay, awesome — thank you so much. I think we have a microphone, and we're going to open it up to you in the audience to ask a few questions for Sam. Just raise your hand if there's something you'd like to ask.

Sam Altman: I'm also happy to talk about AI as it's relevant to business today, if people are interested.

Audience member: Can you hear me? Okay. I wear a bunch of different hats, and the food tech space is one of them. Y Combinator seems increasingly interested in the food tech space, as evidenced by recent investments — we advise and invest in this area, Shiok Meats being one of them. These companies in cellular agriculture and the like are trying to solve the scale-up problem, which is very multifaceted, with a combination of microfluidics, experiments, compute, supply chain, et cetera. In the context of this fireside chat, where do you see something like OpenAI fitting in, where there's maybe limited data available but they're trying to incorporate these tools into how they solve their problems?

Sam Altman: I am far from an expert there — I have not followed food tech closely. I really want it to happen, but I've been a vegetarian my whole life; I've tried to eat the fake meat and decided I just really don't like the taste of meat, and I have not personally been super involved in the space. I don't have a sense of what the biggest problems are and I haven't spent a lot of time with the companies, but I'm really happy it's happening. I just don't have any expert opinion to offer about how to apply AI there, sorry.

Audience member: Thank you. My name is Addison Snell, with Intersect360 Research. I like that you were talking about concepts like AI becoming more intelligent than humans, and that you separated that from consciousness. Do you have a definition of what constitutes intelligence, for that context?
Sam Altman: Something about the ability to learn new concepts based on existing knowledge, and maybe something about the ability to learn them fairly quickly. We've talked about what the right metrics are, but I think intelligence is deeply related to the ability to learn, which is why I think we're going to get there — because we have algorithms that can learn.

Shawn Hanson: Thank you. Next question.

Audience member: Hi. I was wondering, in terms of AI and international collaboration, what do you think Silicon Valley should be paying more attention to in terms of AI developments elsewhere, if any? And how do you think we should be working together more globally, whether on the technical or the governance side?

Sam Altman: Right now I think it's pretty collaborative — researchers from around the world publish their work and work together quite openly. I'm nervous that's on a path to get much harder. I certainly think the optimal long-term outcome for the world is close international collaboration and not an arms race between nations, and I hope that will happen, but I'd say it's well outside the area where I feel like an expert who could make a confident prediction. I think it's clearly in the best long-term interest of the world, and one of our goals at OpenAI is to push policy in that direction. I'm somewhat heartened by how it's gone so far. I think one of the really great values of academia — for all of its flaws and faults — is that it has done better than any other segment at long-term, open, international collaboration around ideas.

Shawn Hanson: We have time for two more questions.

Audience member: Hi. I recently joined this industry, and I saw that there's a big diversity issue — huge, right? So I wanted to know your opinion about how you want to solve this issue, and how you think it's affecting AI in particular.

Sam Altman: One of the things we've done is to start what we call OpenAI Scholars, which is a way to take really talented people from whatever backgrounds and give them exposure — we mentor them, we teach them — and that helps at the rate of a handful of people per year, which is about our own capacity, but it's clearly not enough to solve the problem. I do believe that the people who build these systems — not through any intentional fault, just from the way it works — put a huge amount of themselves and their own views of the world into the systems, so you've got to have more diverse input. And yet, if you look at the rates of graduating PhDs in AI, it's incredibly striking — incredibly striking in terms of being non-diverse. So I think what has to happen is that we need a lot more programs like OpenAI Scholars, people in the field need to commit some of their time to mentoring people from diverse backgrounds, and we also need to figure out how to get new people into the field rather than wait for the pipeline of AI PhDs to catch up, which is a many, many year process. If we don't do that, then no matter what we do — no matter what OpenAI or others do to get really good representation and advice — the people who build the systems will have a huge amount of influence over what actually happens, again not through any negative intentions, and we'll end up worse off as a world.

Shawn Hanson: All right, last question.

Audience member: Hi there. A quick question, somewhat related to diversity, but actually about data and the approaches of different countries.
My question relates to how you look at, for example, China, which right now is doing a lot of surveillance — so in terms of at least visual data for AI, they have a lot more than what the U.S. currently seems to have, or at least that's what the press is putting out there. What are your thoughts about the different approaches, in the long run?

Sam Altman: I used to really worry about this. As we talked about earlier, I have shifted my own thinking toward it being more about a compute edge than a data edge, and I certainly hope that's true, because otherwise governments like China's have a huge edge in terms of more data than anybody else. I don't think the society we want is a super-high-surveillance state — at least not the one I personally want — and the trade-off of that is just less data. However, the internet is giant, and we forget just how giant the internet is; even in a world where we do need a lot of data, I think we can get it, and the edge is mitigated.

Shawn Hanson: I guess I have one final question, just to wrap up the theme of the day. You've given us some great perspectives looking back at the last couple of years. If you look at the near-term horizon — the next one or two years — what are the most exciting developments you see happening? What do you most hope for, in terms of a development we could sit on this stage in one or two years and call a really remarkable moment?

Sam Altman: I think unsupervised learning — the ability to look at huge amounts of data and understand the underlying concepts — is just going to really surprise us on a one-to-two-year timeframe, and it's going to do amazing things.

Shawn Hanson: Awesome. Well, thank you very much.

Sam Altman: Thank you.

[Applause]
Info
Channel: Rescale, Inc.
Views: 18,536
Rating: 4.9215684 out of 5
Keywords: big compute, big data, sam altman, openai, ai, artificial intelligence, hpc, cloud
Id: 0TRtSk-ufu0
Length: 44min 38sec (2678 seconds)
Published: Wed Feb 19 2020