Chat with OpenAI CEO and Co-founder Sam Altman and Chief Scientist Ilya Sutskever

Captions
Welcome to Tel Aviv University. [Applause] With us today are university supporters, management, faculty, and students, and we also have guests from Microsoft and other areas of the Israeli high-tech ecosystem. I'm Nadav Cohen, faculty at the School of Computer Science, and it is with great pleasure that I invite on stage Sam Altman and Ilya Sutskever, CEO and Chief Scientist of OpenAI. [Applause]

Thanks a lot for being here; we greatly appreciate it. Thank you for having us. I think we'll start with a brief intro of yourselves. Ilya, please emphasize the Israeli roots.

Sure. Hi everyone. Indeed, from the ages of 5 to 16 I lived in Jerusalem, and I studied at the Open University from 2000 to 2002. [Applause] After that I moved to the University of Toronto, where I spent 10 years and got my bachelor's, master's, and PhD degrees. Already during grad school I was fortunate to contribute to important advances in deep learning. Then, with a few people, we started a company that was acquired by Google, and I worked there for some time. Then one day I received a cold email from Sam saying, hey, let's hang out with some cool people. I was very curious, I went, and that was the original dinner with Elon Musk and Greg Brockman where we decided to start OpenAI. We've been at it for quite a few years, so that's where we are right now.

Thank you. I was very excited about AI as a little kid, a big sci-fi nerd, and never really thought I'd get the chance to work on it. I ended up studying it at university for a little while, but it wasn't working at all; this was around 2004. I dropped out, did startups, became a startup investor for a while, really got excited about what was happening with AI after the advances that Ilya mentioned, sent him that email, and here we are.

Okay, so to get things started: what do you think it is about OpenAI that makes it a leader in generative AI, especially when its competitors are often much larger and have more resources? What are the key advantages?

Focus and conviction. I think we have always believed further out on the horizon than the bigger companies, and we're more focused on doing what we do. I think we have a lot of talent density, and talent density is super important and, I think, misunderstood. And then we have a culture of rigor and repeatable innovation, and to have both of those in one culture is difficult and rare.

I can only add five percent to Sam's answer, which is that progress in AI is a game of faith. The more faith you have, the more progress you can make, so if you have a very, very large amount of faith, you can make the most progress. It sounds like I'm joking, but I'm not. You have to believe in the idea and push on it; the more you believe, the harder you can push, and that's what leads to the progress. Now, it's important that the thing you believe in is correct, but with that caveat, it's all about the belief.

Thank you very much. Moving on to other topics: progress in AI these days, and for a while now, is largely driven by industry. What do you feel should be the role of academic research in the field as it evolves?

Things have changed a lot. Academia used to be the place where the most cutting-edge AI research was happening; now not so much, for two reasons: the amount of compute and the engineering. Academia has less compute and generally does not have an engineering culture. And yet academia can make very dramatic and significant contributions to AI, just not to the most cutting-edge capabilities. Where academia can contribute: there are so many mysteries about the neural networks that we are training. We are producing these objects of miraculous and unimaginable complexity. Deep learning is a process of alchemy: we take the raw material of data plus the energy source of compute, and we get this intelligence. But what is it? How does it work? What are its properties? How do we control it, contain it, understand it, measure it? These are unknowns. Even the simple task of measurement: how good is our AI? We can't measure it. That wasn't a problem before, because AI wasn't important, but now that AI is important, we are realizing we can't measure it. Those are just some examples, off the top of my head, of problems no one can solve. You don't need a giant compute cluster or a giant engineering team to ask these questions and make progress on them, and if you do make progress, it will be a dramatic and significant contribution that everyone will take note of immediately.

Thank you. It sounds from your words that there isn't exactly a balance between the progress in industry and in academia, and we would like to see more contributions of those types. Is there anything you think can be done to improve the situation, especially from your position, to somehow support it?

I would say two things. The first and most important is a mindset shift. I'm a little bit removed from academia these days, but I think there's a bit of a crisis of "what are we doing?" One thing that creates a lot of confusion, I claim, is that there's a lot of momentum around a very large volume of papers being written, but the important thing is to think about the most important problems and just focus on them. The mindset shift is to focus on the most important problems: what is it that we can't do, that we don't know, that we can't measure or understand? Realize the problem; once you understand the problem, you can start moving toward it. That's where we can help. We have an academic access program where universities apply to get access to our most advanced models; they study them and write papers. We've done this even with GPT-3, even before we had our first product; many universities have written papers studying these models, their properties, their biases. And by the way, if you have more ideas, I'd be happy to hear them.

We should definitely discuss these things further offline; I need to fit into the time that I have. You mentioned publishing. It seems to me, as somebody in the field, that some believe, or at least it's a fair argument, that the level of scientific transparency is somewhat in decline with regard to research going on in industry. While there are companies that really promote open source, publishing their models and their code, others do so less, and some say that includes OpenAI. First of all, do you agree with this, and if so, why? What do you believe is the right strategy, and why is OpenAI's strategy the way it is?

We do open source some models, and we'll open source more models over time, but I don't think it's the right strategy to open source everything. The models of today are interesting and have some usefulness, but they're quite primitive relative to the models we'll create, and I think most people would agree that if you make a super powerful AGI, with wonderful upsides but existential downsides, open source may not be the best answer for that. So we're trying to figure out the balance. We will open source some things, and over time, as we understand models more, we'll be able to open source more. And we have published a lot; I think a lot of the key ideas that other people are now using to build LLMs were published by OpenAI, from the early GPT papers to scaling laws to the RLHF work. But it's a balance that we have to figure out as we go, and we have a lot of different tensions on us to successfully manage.

Are you considering models where you share things with selected crowds, maybe not open source to the entire world but to scientists? Is that something you're considering?

When we finished training GPT-4, we spent a long time, almost eight months, working to understand it, to ensure its safety, and to figure out how to align it. We had external auditors, red teamers, and scientific community engagement. So we do that, and we'll continue to do it.

Okay, I want to talk a little bit about the risks. I know it's a topic that's being discussed a lot, so just a couple of minutes on that before we get to the opportunity, because I agree it's important. There are probably at least three classes of risks one can imagine. One is economic dislocation: jobs becoming redundant, things like that. Another could be a powerful weapon in the hands of a few; one hacker, for example, could probably do something equivalent to what thousands of hackers could do before, if they are able to use these tools. And the last, maybe the most concerning, is a system that gets out of control, where even the people who triggered it to do something can't stop it. What do you feel is a likely scenario for each of these?

Okay, the likely scenario for each of the risks. You mentioned three: economic dislocation, the hacker, and the superintelligence that gets out of control. Let's start with economic dislocation. Indeed, we already know that there are jobs that are being impacted or affected; in other words, some chunks of those jobs can now be done. If you're a programmer, you don't write functions anymore, Copilot writes them for you. If you're an artist, though, it's a bit different, because a big chunk of the artist's economic activity has been taken by some of the image generators. I think it's indeed not going to be a simple time with respect to jobs, and while new jobs will be created, it's going to be a long period of economic uncertainty. There is an argument to be made that even when we have full human-level AI, full AGI, people will still have economic activity to do. I don't know whether that's the case, but in either event we will need something that will soften the blow and allow for a smoother transition, either to the totally new professions that will exist or, if not, then the government and the social systems will need to kick in.

On the second question, the hackers: that's the tricky one indeed. AI will be powerful, and it could be used in powerful ways by bad actors. We will need to apply frameworks similar to the ones we apply to other very powerful and dangerous tools. Mind you, we are not talking about the AIs of today; we are talking about what happens as time goes by and the capability keeps increasing, until eventually it goes all the way up here, whereas right now we are here. When you get to that point, it's a very powerful technology. It can be used for amazing applications: you can say, cure all disease. On the flip side, you could create a disease much worse than anything that existed before, and that would be bad. So we will need structures in place that control the use of this powerful technology. Sam has proposed, in a document we put out, something like the IAEA for AI, to control very powerful technologies, but for AI specifically; the IAEA is the organization that controls nuclear power.

To the last question, the superintelligent AI that's out of control: yeah, that would be pretty bad. It would be a big mistake to build a superintelligent AI that we don't know how to control.

Can I add a few things? I have nothing to add to that last sentence; I strongly agree. On the economic points, I find it very difficult to reason about how this is going to go. There's so much surplus demand in the world right now, and these systems are so good at helping with tasks, but for the most part, today, not whole jobs, that I think in the short term the picture actually looks pretty good: there's going to be a lot of dramatic productivity growth, and we're going to find out that if you can make programmers two times as productive, there's more than two times as much code that the world needs, so it's all good. In the longer term, I think these systems will do more and more complex buckets of stuff, and some categories of jobs will go away, but others will turn out to really need humans; people really want humans in these roles, in ways that are not very obvious. One example: one of the first times the world saw AI was when Deep Blue beat Kasparov, and everyone said chess is totally over, no one is ever going to play chess again because it's not interesting, and that was just consensus that everybody agreed with. Chess has never been more popular than it is right now. Humans have gotten better, the expectations have gone up, we can learn better with these tools, but people still really want to play, and humans seem to still really care about what other humans do. DALL-E can make great art, but people still care about the human behind the art they want to buy, and that is something we all think is special and valuable. On the chess example, people watch humans play chess more than ever before, but not very many people watch two AIs play each other. So I think there are just going to be all these things that are difficult to predict. The human desire to differentiate, to create new things, to gain status, I think that's not going anywhere, but it will somehow look really different, and I would bet that the jobs of 100 years from now look almost nothing like the jobs of today, though some things will turn out to be weirdly similar. But I do really agree with what Ilya was saying: no matter what happens, we're going to need some sort of different socioeconomic contract as automation reaches these heretofore unimagined heights.

Okay, thank you. Another question on this topic. Sam, you recently signed a statement calling for treating the existential threat from AI with great seriousness; I believe you did too, Ilya, that you signed it as well. Following this call, are there any steps that you think we, mankind, and also maybe companies like OpenAI, should take to address this problem?

I really want to emphasize that what we're talking about here is not the systems of today, not small startups training models, not the open source community. I think it would be a mistake to put heavy regulation on the field right now, or to try to slow down the incredible innovation that's happening; I hope we do get to talk about the benefits. But if we are heading toward what Ilya described, you really don't want to make a superintelligence that is not really well aligned. That seems inarguable, and I think the world should treat it not as a "haha, never going to come" sci-fi risk, but as something we may have to confront in the next decade, which is not very long for the institutions of the world to adapt to something. So one idea that we've contributed, and I hope there are far better ones out there, is that we could have a global organization that, at the very highest end, at the frontier of compute power and techniques, has a framework to license models, to audit their safety, and to propose tests that are required to be passed. That would help; that would be one way to treat this as a very serious risk. We do the same thing for nuclear, for example.

Okay, so let's indeed move on to talk about benefits a little bit. This is a scientific setting that we're in, so in terms of the role of AI in scientific discoveries, do you have any predictions or thoughts about where we're going to be in a few years, and maybe in the future beyond that?

This is the thing that I am personally most excited about with AI. There are tremendous, wonderful things that are going to happen all over the place, huge economic benefits, huge healthcare benefits, but the fact that AI can help us do scientific discovery that we currently aren't capable of means we're going to get to understand the mysteries of the universe, and more than that. I really believe that scientific and technical progress is the only sustainable way that lives get better, that the world gets better. If we can unlock a gigantic amount of new science and new technological progress, which I think we're already seeing the beginnings of with people using these tools to be more efficient, if you imagine a world where you can say, hey, help me cure all disease, and it helps you cure all disease, this can be a dramatically better world, and I think we're not so far away from that.

Okay, another major problem alongside disease is climate change, so what are your thoughts on the potential role of AI there? Because I did see, Sam, that you mentioned it as a potential area for contribution.

I almost hate to say this, because climate change is so serious and so hard a problem, but I think once we have a really powerful superintelligence, addressing climate change will not be particularly difficult for a system like that.

Yeah, we can even explain how. Here's how you solve climate change: you need a very large amount of efficient carbon capture, you need the energy for the carbon capture, you need the technology to build it, and you need to build a lot of it. If you can accelerate scientific progress, which is something a powerful AI could do, we could get to very advanced carbon capture much faster, we could get to very cheap power much faster, we could get to cheaper manufacturing much faster. Now combine those three: cheap power, cheap manufacturing, advanced carbon capture. Now you build lots of them, and now you've sucked all the excess CO2 out of the atmosphere. This plan today is a little bit difficult; if you have an AI that accelerates science and engineering very dramatically, it becomes very straightforward.

And I think this illustrates how big we should dream. If you think about a system where you can say, tell me how to make a lot of clean energy cheaply, tell me how to efficiently capture carbon, and then tell me how to build a factory to do this at planetary scale, if you can do that, you can do a lot of other things too.

Yeah, with one addition: not only do you ask it to tell you, you ask it to do it.

Okay, a couple of questions about OpenAI products. First, on ChatGPT: you released it, and I heard you say that you didn't expect it to spread the way it did. Is there any application of ChatGPT that you saw from others that really surprised you, in terms of the value it generated or the capabilities it exposed?

Want to go first? A thing that has given me personally an endless amount of joy is when my parents told me that their friends use ChatGPT in their daily lives. I would say that was definitely very surprising and very enjoyable for me.

It's hard to pick just a couple of favorite stories, because the creativity of the world and what people do when you give them powerful tools really is remarkable. Education has been amazing for us to watch: the number of people who write in saying, this has changed my life because I can learn anything now, or I learned this specific thing, or I couldn't figure out how to do this and now I know. There's something I find personally quite gratifying and wonderful to see about people learning in a new and better way, and imagining what that's going to look like a few years from now if we can just unlock human potential at this rate; we didn't quite expect that to happen, and it's been amazing. And then a fun story I heard just yesterday, and I've heard other versions of this in the past, was a guy who spends two hours every night with his kid collaborating to make up bedtime stories; it's the kid's favorite thing, and it's become this special moment every night that they do.

Okay, thank you. One last small question before we move on to questions from the crowd: in terms of what you can say, what is the most futuristic product that OpenAI is working on these days?

The most futuristic product or system? We don't think about it in terms of products; we think about it in terms of, can you improve the AI? Can you produce the next generation of the model, of the neural network, one that will be more reliable, better at reasoning, more controllable, better across the board? You do this, and then you get a whole new world of applications, a bit hard to predict, but we expect everything to become much better, and very significantly so.

I hope the world is never awed by us again. I hope people got one big update with ChatGPT, but from here on it's one continuous, smooth curve of progress, where at every stage we're confronting the risks successfully, it always feels like it's doing what you want and it's safe to use, but every year your expectations go up and we deliver on them. It feels like a gradual acceleration of technology, but in a way that very much remains a tool that serves you.

Okay, thank you. Now let's move on to some questions from the crowd. There's going to be a microphone; that was a quick draw, but moving forward we'll have people raise their hands and we'll choose.

For both of you: could the open source community potentially match GPT-4's abilities without additional technical advances, or is there a secret sauce in GPT-4, unknown to the world, that sets it apart from the other models? Or am I wasting my time installing Stable Vicuna 13B plus Wizard? Am I wasting my time? Tell me. [Music] [Applause]

All right, to the open source versus non-open-source models question: you don't want to think about it in binary, black-and-white terms, as though there is a secret sauce that will never be rediscovered, or to ask whether GPT-4 will ever be reproduced by open source models. Perhaps one day it will be, but by then there will be a much more powerful model in the companies. So there will always be a gap between the open source models and the private models, and this gap may even be increasing with time. The amount of effort and engineering and research that it takes to produce one such neural net keeps increasing, so even if there are open source models, they will be less and less the product of small groups of dedicated researchers and engineers, and increasingly the province of big companies.

Hi, can you tell us more about the base model, before you lobotomized, that is, aligned it?

The base model of GPT-4? What about it?

How was it before you lobotomized it?

We definitely realize that in the process of doing RLHF on the models they lose important capability, and we're studying how we can preserve as much of that as possible. The base model is not that easy to use, but what we'd like to get to is something that does follow instructions, gives users as much control and as much capability as possible, and doesn't get us into legal trouble, although we've discovered a lot of things, like refusals, along the way. So we totally hear the request for more flexible models, and we're trying to figure out how to do that and give users more customization over them.

Okay, we have a question over there.

First of all, thank you so much for this talk, it's truly invaluable. I'm really curious to know what, in your eyes, are the top sectors that can be impacted for the better by individuals and small companies.

Can you repeat the question please? There is a lot of echo.

Sorry. I'm really curious to know what, in your eyes, are the top sectors that can be impacted for the better by individuals and small companies.

One of the reasons we're doing this trip around the world is to hear from people about what they want, what they'd like OpenAI to do, what their concerns are, how they're thinking about regulation, how they want this to be integrated into society. But the other is to talk to people who are building on top of our API and understand what they're doing and what they want to do. For me, the most fun part of this trip has been meeting developers and just being amazed at the creativity, the scale of the businesses being built, the one, two, or three people building something that has now gotten to real scale with a product that people really love, and how that is happening in every industry. When we do these developer roundtables, almost never are two people working on the same kind of sector; the diversity is the coolest thing. Any vertical you want to pick, AI is going to impact it somehow, and this is probably the most magical period since at least the launch of the iPhone for a technological tidal wave to go do incredible things. So I think the most exciting part is that it's not one or two sectors; just find someplace that you're passionate about and go do it.

Okay, from now on let's have every person who asks a question start with their name and affiliation.

[Inaudible] First, thank you again for coming here, I appreciate this talk very much. If you truly believe that AI poses a danger to humankind, why keep developing it? Aren't you afraid for your own dear ones and family? And secondly, should regulation be imposed upon you, upon OpenAI and other AI companies, will you obey, or will you behave much like, say, Mark Zuckerberg, who tries to evade every regulation he finds? Thank you.

I think it's a super fair and good question, and the most troublesome part of our jobs is that we have to balance the incredible promise of this technology, which I think humans really need, and we can talk about why in a second, with confronting these very serious risks. Why build it? Number one, I do think that when we look back at the standard of living and what we tolerate for people today, it will look even worse than when we look back at how people lived 500 or 1,000 years ago. We'll say, can you imagine that people lived in poverty, that people suffered from disease, that everyone didn't have a phenomenal education and wasn't able to live their lives however they wanted? It's going to look barbaric. I think everyone in the future is going to have better lives than the best-off people of today, and the upside there is tremendous, so I think there's a moral duty to figure out how to do that. I also think this is unstoppable; this is the progress of technology, it won't work to stop it, so we have to figure out how to manage the risk. We were formed as a company in large part because of this risk and the need to address it. We have an unusual structure, a capped profit. I believe that incentives are superpowers, and if you design the incentives right you usually get the behavior you want. We're all going to do fine; we're not going to make any more or less money if we make the numbers go a little further up and to the right. We don't have the incentive structure that a company like Facebook had, and I think there were very well-meaning people at Facebook; they were just in an incentive structure that had some challenges. As Ilya always says, we tried to feel the AGI when we were setting up our company originally, and then we set up our profit structure around how to balance the need for money for compute with what we care about, which is this mission. One of the things we talked about is: what's a structure that would let us warmly embrace the regulation that would hurt us the most? Now that the time has come for that, we're out here advocating around the world for regulation that will impact us the most, so of course we'll comply with it. I think it's easier to get good behavior out of people when they are staring existential risk in the face, and I think all of the people at the leading edge here, at these different companies, now feel this, and you will see a different collective response than you saw from the social media companies. I think all the skepticism and all the concern is fair; we wrestle with this every day, and there is not an easy soundbite answer.

Hi, my name is [inaudible], I'm the CEO of a small business, and I have to mention that we use GPT-4 a lot. Recently I spoke with a VP at Microsoft, and she told me how they decided to listen to the AI, because in all of their testing the AI was right. I was just wondering: what is the gap between the AI you use and the AI we can use? We have a lot of limitations with tokens and other things, and you don't.

The gap between the models that we use and the models that you use, is that the question? Well, right now, with GPT-4, you have access to GPT-4 and so do we. Indeed, we are working on the next future model. Maybe I'll describe the gap as follows: as we keep building AIs of increasing capability, there will be a larger gap, a longer testing period, a longer period where we red team it, understand the limitations of the model, understand as many as possible of the ways it could be used that we deem unacceptable, and then expand access gradually. For example, right now GPT-4 has vision recognition abilities which we have not rolled out yet, because the finishing touches weren't quite there, but soon we will. So maybe that would be an answer to your question: probably not too far in the future.

Hi, I'm David, I'm a data scientist. I'd just love to know your thoughts about the "we have no moat" document that was leaked recently.

The thing that is special about OpenAI, and I think the thing that is so misunderstood by that document, aside from the fact that we have a gigantic number of users and people who have formed some sort of relationship with us and our products, is that what OpenAI is special at is figuring out what comes next. It is easy to copy something once you know it can be done, and in that sense, sure. It is very hard to figure out what to do next, and the ideas, the big ideas, the medium-sized ideas, the small ideas, and the careful execution on them that it takes to get from here to superintelligence, that's what our moat is. So sure, once we do the next paradigm, everybody will get going trying to copy that too, but we'll already be working on the next one.

Hey Sam, up here. My name is [inaudible], I'm a YouTuber and also the CEO of a new startup. I have a question regarding superintelligence and the Roko's Basilisk dilemma: can you elaborate on where OpenAI stands on that dilemma?

While Roko's Basilisk is something we are not very concerned about, we are definitely very concerned about superintelligence. Just for context, not everyone in the audience may understand what we mean by superintelligence. What do we mean? One day it will be possible to build a computer, a computer cluster, a GPU farm, that is simply smarter than any person, that can do science and engineering much, much faster than even a large team of really experienced scientists and engineers. That is crazy; it is going to be unbelievably, extremely impactful. It could engineer the next version of the system, AI building AI; that's just crazy. Our stance is that superintelligence is profound; it can be incredibly, unbelievably positive, but it is also very dangerous, and it will need to be approached with care. This is where we propose the IAEA approach for the very advanced, cutting-edge systems of the future, the superintelligence. There is also a lot of research we will need to do to contain the power of superintelligence, to align it, so that its power and capability will be used for our benefit, for the benefit of people. So that's our stance on superintelligence: it is the ultimate challenge of humanity. If you think about the evolutionary history of humanity: four billion years ago there was a single cell, some kind of replicator; then for billions of years you had various single-celled organisms; about a billion years ago you had multicellular life; several hundred million years ago you had maybe reptiles; 60 million years ago you had mammals; 10 million years ago you had primates; one million years ago you had Homo sapiens; then 10,000 years ago we had writing, then the farming revolution, then the Industrial Revolution, the technological revolution, and now finally the AGI, the superintelligence. It is the final, the ultimate challenge. It can create a life of unimaginable prosperity, which Sam alluded to, but it is also a great challenge, and it is a challenge that we need to face and overcome.

Hello, Roni Dori from Calcalist. This is a question for Sam Altman: what is your stance on data dignity in the context of AI?

We think it is very important that the people who contribute data to these systems, or who in some other way help these systems, even if it's not by training on their data, are rewarded and get benefit from these systems. I think what these systems really want to be are reasoning engines, but they will be able to go off and access different data, and they will also need people who help teach them how to reason correctly. We are exploring a lot of ideas about how those people get rewards aligned with the success of the model, and also how, if you're an artist and people are generating art in your style or inspired by you, you get economic benefit from that. So I think it's super important to figure out; we're trying to come up with the right approach given both what content creators and content owners want and where the technology is going.

I actually have a question that was collected from the Machine and Deep Learning Israel community, which chose one question, from Ben Netzer of a media outlet. The question is: what opportunities do you see in Israel for the development of AI and its applications, and specifically, is there something you see as special in Israel?

I think that in the near term there are so many opportunities, a huge number of opportunities. I would say the near term is truly the golden age of AI. You've got uncharted territory with an incredible number of positive applications, and what I'll say is: just go for it, just do it.

Sam, you've worked with Israeli founders and startups in the past, right?

Yeah. Two things I've observed that are particularly special about Israel: number one is the talent density; we're very focused on talent density, not just the absolute amount of talent. This is a smallish country that punches way above its weight and has a lot of talented people that you can get clustered into areas. The second is just the relentlessness, drive, and ambition of Israeli entrepreneurs. Again, we had incredible success in all of the YC efforts we made, and those two things together, I think, ought to lead to incredible prosperity, both in terms of AI research and AI applications.

Thank you. Yeah, we can take one over there.

Hi. In my ChatGPT history, it has basically replaced Wikipedia for me; I used to spend a lot of time learning things on Wikipedia. [What would we find in yours?]

I think the thing you would find if you looked through mine is how effective it can be for learning new knowledge in the Wikipedia style. I don't do the deep "teach me everything about physics" that I know some people do, but for "I heard about this thing and I want to learn about it as quickly as possible," you would find it a surprisingly effective tool.

No, we need people with microphones.

Hi, I'm up here. Hi Sam. My name is Arbel, I'm a volunteer here in Tel Aviv, I'm 18, and I wanted to ask you: what do you look for in a new employee at OpenAI? Thank you, nice meeting you.

Drive, taste, collaboration, intelligence, the ability to contribute to the output of the entire organization, which could mean coming up with the next breakthrough, or really being a great engineer who helps us build these systems, or just being really helpful to other people and contributing that way. Definitely a belief in superintelligence, and feeling the weight of the importance of getting this right, in terms of getting the benefits while managing the risks. I don't know what else.

Sounds like a pretty comprehensive answer. All right, you should definitely apply.

Hello, up here again, in the balcony to the right.

We can't see, we're just guessing; it's dark up there.

My name is Alon, I'm the CEO of a company called Benny Goren; we're the leading provider of mathematics textbooks and content for teaching mathematics here in Israel for the last 40 years. [Applause] [Music] First, I wanted to ask how you plan to improve ChatGPT's Hebrew skills. And the second question is related to something you were talking about: what is your vision for education and AI, practically, in schools for our kids, how we can improve and motivate them? Thank you very much.

You want to take it? Yeah. Education in math: obviously textbooks will be upgraded. You read a textbook, but the textbook doesn't answer your questions. With the aid of the kind of AIs that we and others are building, it will be possible for you to have a conversation about the subject matter, and that makes for a much more efficient learning experience. It will apply to math, and it will apply to everything else. Eventually we will be moving, or rather we already are moving, to a world where every student has a dedicated private tutor. We're not there yet, it's not quite good enough, but it will be.

I'm actually not sure it's a black swan event in employment; I think it may be a more gradual and fairly predictable change, where we now have these systems that are good at doing tasks but not whole jobs, and they get better and do some jobs. But it's also difficult to predict. The role of the government, I think, is going to be to provide some sort of new cushion; what the format of that will be, I think different governments will try different experiments, we'll see what works better, and governments will copy the ones that work better. In almost every conversation I've had these last few weeks on the road, every government has been very thoughtful about this; they have different ideas about how best to solve it, but it is maybe the top-of-mind issue, at least a top-three issue, that any world leader is thinking about. So I think people are on it.

We have time for one or two questions.

Hi, my name is Ben, up here. I'm studying computer science and I'm graduating soon, so what I wanted to know is: what should I learn to still have a job 10 to 15 years from now?

Well, I think learning computer science is good no matter what. I almost never write code anymore, but it was one of the best things I ever did in terms of learning how to think and how to address problems, so I think it's valuable for its own sake, even if the job of a computer programmer looks very different than it does today. The main skill, I think, is learning how to learn: how to learn fast, how to learn new things, how to get a sense for what's coming, how to be adaptable, how to be resilient, taste, how to figure out what other people are going to want and how you can be useful. Again, there's no question in our minds that jobs are going to change and the nature of work is going to change, but I also cannot imagine a world where people don't do something with their time to create value for other people, with all of the benefits that come with that. It may be that in the future the thing you and I care about is who has the cooler galaxy, but there's still going to be something.

Okay, hi, up here, last one.

Hello, my name is [inaudible], I'm a co-founder of a deep pathology company. My question to both of you: you guys are making history, so how do you want history to remember you? I mean, in the best possible way. [Applause] [Music]

Hello, my name is Amir, I'm a 17-year-old entrepreneur, and I want to know what your tips are for first-time startup founders at this age.

I didn't quite hear: tips for what?

First-time startups. I'm a 17-year-old entrepreneur.

This is really, I think, the best time to start a startup that I have ever seen. I think it's actually better than the iPhone; maybe the only comparable thing is when the internet launched. If you are a first-time entrepreneur right now, you are the luckiest entrepreneur that has existed in a long time. You have an incredible new fast-moving technological wave, and those are when startups win, those are when the incumbents screw up and get displaced. The ground is shaking right now; that's what you want as a startup. Things are possible that most people can't quite imagine, and the opportunity to build value with a new approach doesn't come along very often, and this is the big one. So every entrepreneur is a summer child right now, and it's a super cool time.

Doing what? Oh, well, I totally disagree with that. People are going to build all of this stuff on top of us. Yes, if you're trying to come for ChatGPT, you have had a failure of imagination, and you probably won't make something better than the pure version of ChatGPT, but the size of the universe of possibilities right now, just from the companies that we've met with on this trip, is unbelievable. There's so much to go after that if you're somehow worried about us being the incumbent, I think you're really not thinking about the problem correctly.

Sam, Sam, here, this is going to be the very, very last question, here upstairs, here on the balcony. Sorry, yeah, go ahead.

Sam, I'm Renat, I'm a data scientist here at the university, and my question is about your future plans and the Worldcoin project, the Orb system.

I'm an investor and I kind of helped put the company together, but I'm not involved day to day at all. I think it's very exciting. I think experimenting with new ways to prove humanity in a privacy-preserving way, and to think about things like global UBI and ways to fairly democratize access, is a super great area to explore, but I'm not close enough to the company to meaningfully comment on the plans.

Thank you very much for being with us. Thank you. Thank you very much.
Info
Channel: TAUVOD
Views: 119,146
Keywords: AIrisks, EconomicDisruptions, OpenAI, GovernmentIntervention, RegulatoryFramework, WeaponSecurity, SuperAI, telavivuniversity, chatgpt, sam altman, Ilya Sutskever, study ai, super intelligence, deep learning
Id: mC-0XqTAeMQ
Length: 54min 10sec (3250 seconds)
Published: Mon Jun 05 2023