Action Items From the Next Generation of Researchers

Video Statistics and Information

Captions
[Music] All right, so thanks very much for the introduction. What we wanted to start the panel talking about was giving you a bit of insight into the path by which new researchers are finding their way into this field and into thinking about these questions. So I'm going to ask each of the panelists to tell us a little bit of the story that brought them up here onto this stage. In particular, can you talk to us about the path you took to this field, what you were doing before you decided to start doing research in this area, what motivated you to come into it, and what concerns you had as you considered it as a research endeavor? So with that, do you want to start us off?

Okay, so I'm a second-year PhD student at Berkeley right now, but I also did my undergrad at Berkeley. In 2015, I guess, I was a second-year undergrad, and I happened to live in this big house where one of my friends started a class on effective altruism at Berkeley. I would occasionally show up, and one day they were talking about existential risk and AI risk. It was the first time I had ever heard of this, so naturally I thought it was crazy, and I got into a debate with some people there. I don't think they made that great of arguments in the moment, but people kept deferring me to this book: "you have to read Superintelligence." So I thought, all right, let me go check this out and see what it is. I read Superintelligence and was like, okay, I get the abstract argument; maybe I don't agree with the odds or whatever, but I got what they were saying. That stayed in the back of my head as kind of interesting but sci-fi-ish until the next year, in 2016, when I took the class on human-compatible AI taught by Stuart Russell along with Anca Dragan and Tom Griffiths. I think Jaime was in it, and you were in it too, Dylan. That was the first time I saw that there's actually real research you can do on this, concrete things. In particular, one thing Stuart said on the first or second day really stood out to me: we always assume we have this reward function, but where does the reward function come from? And I was like, oh my god, what is the reward function? Then I got really interested in the actual problems, in value alignment, and it just seemed like a fascinating area that was really new. It had these high-level, vague notions, but there was a lot of interesting conceptual work to be done, more so than if I was working on something else. I could work on improving bounds on bandit algorithms or something, but this was just more fascinating and interesting to me. Then that summer was when CHAI started, and that's how I got into it. The thing I want to highlight from this story is that although I first heard about this through the effective altruism and rationalist communities, what actually got me into it was the academic opportunity to work on it. So I think the academic legitimization of the field has been really important, along with the concretization into real problems.

Great, thanks. El Mahdi, do you want to go next?

Yeah. I come from physics; I graduated in physics back in 2011 and worked for four years, first in physics research as an engineer, and then I started a science YouTube channel in Morocco. So I was doing a lot of science communication and tutoring material, Khan Academy style, and eventually it became an EPFL project in French and English. That period got me exposed to computer science, to computing in general as a fundamental science, which I was not aware of. I think this is a problem in the natural sciences community, in the social sciences community, and as well in the CS community: people in CS, including top researchers, are not aware that this field, computing, started as a natural science. The most cited paper of Alan Turing is on morphogenesis. So I started a PhD with this motivation: let's make computer science a natural science again. My topic was to study the robustness of biological processes, which is still half of my job, and at some point I looked at machine learning and said, well, I'm in a distributed computing group, they do a lot of robustness research, so let's mix the thirty-year perspective of distributed computing on robustness into machine learning, and somehow I got into the business of AI safety research. I'll just say, as an aspiring natural philosopher, which I still aspire to be, I'd like to try to make a career under the slogan "Bostrom in the trenches." There was this rant against Nick's book by some famous AI researcher, saying that we, down there in the trenches of AI research — and then he got into ranting against Nick's book. I think it's a very good thing that we have a generation of Bostroms in the trenches: people who think like natural philosophers and who are close to the "boring" stuff like convex optimization and stochastic gradient descent. And I'll say, I think convex optimization is super exciting.

Point well taken. All right, Jaime, do you want to go next?

Great. So I came to this field from the origins of control theory and engineering, and really I've been interested in safety for a very long time; I just started looking at this kind of safety only more recently. My earlier work had to do with the fact that we're building increasingly complex systems, many of which are automated, and they're starting to go from very controlled environments and factory floors to really deployed systems like self-driving cars and drones. These systems are having to interact with an increasingly complicated world, and most importantly they're starting to interact with people. So how can we extend the guarantees that we are more or less comfortable writing for systems that operate inside of a cage to systems that are no longer in a cage, systems that are actually interacting with very squishy and wobbly things called humans, about which we can very rarely say anything specific or clear-cut? That was the motivation that took me to Berkeley to start my PhD in robotics and control theory: trying to extend safety guarantees to more complicated systems that also interact with humans. Through this path I started looking more into how we model human cognition, how we figure out what humans are up to and what they're trying to do, and through this I started interacting first with Tom Griffiths, who is a professor in cognitive science, and then with Anca Dragan once she joined Berkeley. We started thinking a lot about the connection between safety guarantees for systems that have to operate with humans and having a better model of what humans are up to and what they're trying to do. Through this I got exposed to problems like value alignment, and through the class that was mentioned, where some of us were reading papers and having discussions about the different aspects of safety considerations for AI systems, I started to get a longer-term perspective. You realize that as these systems become increasingly capable and start to interact with an increasingly complicated world, it's going to become more and more difficult to analyze them or to write these guarantees about how they're going to behave. I think that was the angle that motivated me to start looking at the longer-term problems. As for concerns, and perhaps we will talk about this a little more later, some of these problems are vague by construction, because we're trying to give guarantees about technologies that are not quite here yet, and there are a lot of unknowns about what they're going to be like. So coming from the perspective of safety guarantees, where you're always trying to make very specific statements about your systems, their structure, and how they're going to be constructed, and projecting that forward in time onto technologies we have a lot of uncertainty about, is one of the big challenges you face when you try to enter this field.

Right, thanks. And William, do you want to finish us up?

So I came into this field, I think, first from the perspective of effective altruism and wanting to do something that would be useful for the long-term future. It was five or six years ago, at the end of my undergraduate degree: I vaguely wanted to do something useful with my computer science education, so I started googling, ran across this research area, and started reading about it. I just read about it, digested it, and became fascinated by the problem of AGI, but it took a bit of time to digest and to get some clarity around what the arguments were and how they related to current machine learning. Then things like Superintelligence and the Concrete Problems paper came out, along with some of the early machine learning work of the form "let's look at this problem in a concrete system." I was fortunate enough to get an opportunity to do an internship with Owain Evans at the Future of Humanity Institute and work on a technical AI safety paper in the context of Atari reinforcement learning systems, trying to build something that could learn constraints to keep an agent from doing something catastrophic during the training procedure. From there I ended up applying to the University of Toronto, started working with Roger Grosse, and was lucky to find a place where there are several professors, who are actually at this conference, who are interested in the concept of AGI and look at things from that perspective. In terms of what I was worrying about when applying to the program: I started off wondering, is this something I can actually contribute to? It all seemed like very vague, abstract, difficult conceptual work, and seeing some of the concrete research that was being done helped with that. Then there was the question of whether I would be able to go somewhere like academia and work on problems motivated by AGI, and say "yes, this is my motivation, I'm thinking about these systems that we don't have today," and whether that would be accepted or perceived as something weird.

Great. So I think one of the things that really came out for us in discussions preparing for this panel was that we all feel we've had some interesting experiences of what it's like being a student in this area. We've talked with a lot of people about the importance of attracting new talent and new researchers, and thinking about where the next generation of AI safety researchers comes from. To that end, I thought it would be great to get some examples from the panelists of challenges you faced in this area, and also the benefits: what have you found from working on this problem that has been positive for you and your experience? Maybe we'll reverse the order from the previous round and start with William.

So in terms of challenges: one big challenge is, again, trying to look at the long term, at systems that don't exist, and being in a field at an early stage of development, where we don't have the ImageNet of AI safety — the problem that is concrete and well defined, where we know that if we do really well on it we will have solved AI safety. It feels like I'm more at the stage where I have to figure out what the problems are that we could solve, and define those. It also feels like I didn't come in and find a professor who said "here's a concrete research project to start on"; it feels like we're all in the same boat of trying to define the field. So figuring out how to define things, and being confident that the research I'm doing is useful rather than just going off in some weird direction, has been a challenge. In terms of things that have helped me: being able to work with the part of the AI safety community that's not within academia, or not in the traditional academic setup. For example, for my current research on iterated amplification, I've been collaborating a lot with researchers like Andreas Stuhlmüller, getting direction from that part of the field in shaping my research to be useful to them, and learning from their perspectives.

Yeah, and this is probably something that essentially all of us have in common: the questions we're trying to answer are often high level and at the beginning feel a little vague, because there is so much uncertainty around the systems we're trying to prove properties about. For me it was a bit of a challenge to reconcile the kind of work I had been doing so far, and the philosophy of research I was trying to maintain, with the fact that I was now applying it to a field that is very much in its early stages, where you're trying to make statements about something that is still in a very fuzzy state. This also projects onto something that I think needs to start happening more consistently: in general, as a field, to gain academic legitimacy we need to stop — well, that's not fair — we need to focus more on providing concrete and tangible solutions to problems. Speculation is of course necessary and important given that we are at a very early stage, but it can't be a substitute for concrete results and concrete solutions. Theoretical results, I think, are best at this stage, and they need to be general enough to admit the variation that we still don't know about and will gradually discover as we move forward, but they can't be so generic that they become trivial or essentially useless statements. So that was a struggle I had at the beginning: trying to focus not only on concrete problems but on concrete solutions to those problems. I think we've done a fairly okay job, as the technical side of the field, at defining concrete problems, but the solutions also need to become concrete, and that is sometimes a challenge. Personally, I found that having the CHAI group emerge at Berkeley was very useful, because it gave me the ability to start grounding my thoughts and bouncing my ideas off other people who were also thinking critically about these sorts of problems. Although, on the other hand, a good thing about CHAI is that every person who is a member of CHAI is also a member of some other lab doing technical research on other aspects of the field, and I think keeping our work grounded in AI, robotics, and control theory is also important, and is something that hopefully will help establish technical AGI safety as a field of engineering.

Great. El Mahdi?

Okay, so in terms of challenges, I first want to acknowledge that we remain a lucky generation compared to, say, Stuart's generation or the people before us, for whom it really wasn't easy — sorry, Stuart. Thank you for making it easier for us. For example, you can be outside the field, not even in ML, and stumble upon a lot of internet resources, whether it's a YouTube video — I mentioned YouTube — or podcasts, and we have a receptive community now where we can share our work. And as people doing computing and learning as a natural science, we also have role models: Josh Tenenbaum, Yoshua Bengio to some extent, Leslie Valiant, Nancy Lynch. Those are people who were already preparing the field for us, so we are lucky. What still remains are the taboos. There are still taboos among our peers, machine learning researchers, very senior, very respected ones, and I'd just like to remind them how taboos slow down progress. If we look at history, parts of the world got stuck when they put a taboo on philosophy and natural philosophy. The Islamic world, for example, emerged in part by giving freedom to a bunch of blue-sky philosophers in Baghdad doing philosophy and natural science, but as soon as it started tabooing them, burning the books, and doing censorship, it got stuck and stopped making progress. So taboos slow down progress, and there are still some. But I want to highlight one thing: the people who maintain those taboos and, without being malicious, censor this kind of AGI safety research don't necessarily have bad intentions. I want to conclude with a Moroccan proverb about the stork, the bird with the long beak: we say that when the stork wants to kiss the baby, the baby loses an eye. Some of our peers and senior people just want good things for us: they want us to have secure careers and publish NeurIPS and ICML papers, which we should do and are trying to do. But out of good intentions you can sometimes slow down progress by imposing taboos, and I think we should keep working hard on getting rid of those taboos so that we make progress in science.

Challenges — I'm going to focus on the reverse and talk about advantages. The vagueness kept coming up, and it's true that these problems are really vague, but on the other hand, because they're big, they're very interesting. There's a lot of conceptual work to be done, and it's not like an advisor can just hand you a super concrete problem, but I almost think that's a good thing, because everybody learns how to solve problems; picking the problems is kind of the hard part. I also think that thinking about the long term is a really good way to think about problems. People often say you want solutions that generalize across tasks, but I think there's also a sort of generalization across time: as systems have more resources, will the solutions we have now be as robust later on? I personally am very interested in both the short term and the long term, both because I think the short term is important and because I don't really trust things that are aimed at the long term but don't also have some kind of short-term payoff. So I like to think of that generalization as how I can make my research more robust along that timescale. And thirdly, I actually like this community a lot. Often, in fields where there are a lot of competing paradigms, people's egos get ruffled, but — maybe because of founder effects of the field — an advantage of having a lot of people who just defer to the logic of the argument is that you can have widely varying ideas and a lot of speculation, but still coherent discussions.

Great, and I really want to echo that point about creating narratives and being able to tell students what feels good and what is exciting about being in this area. I really think we all want to think about how we get the next group up on this panel so that there aren't three people from Berkeley, how we make it so that there's a much broader set of folks feeding into this diverse and important research area. With that, I want to open up the panel to questions from the audience, and in particular I'd like you to focus on questions related to technical research directions you think are interesting. I want to call on students first, because this is the student panel, so you get first go.

So I want to put forth my opinion and see what you think about it. I think we should probably focus at least a decent amount of resources on very short timelines, not because they're necessarily likely, but because we don't have enough information to rule them out with sufficiently high confidence. And on very short timelines I think reward modeling is basically the best approach, because it fits in well with what everyone is doing in deep RL right now. That's the justification I gave myself for signing off on that paper I did with you, where we say this is the highest-impact, most important thing to be pursuing right now. In short timelines we have some chance that this would work — even though I think it probably won't, there's still some chance we get lucky, and it turns out that AIs just learn the right concepts, and that learning to be corrigible and things like that is actually pretty easy. So maybe we should have a whole team working on that, which is sort of what Jan is trying to build. That's what I would tell somebody asking what they should work on right now in alignment, and I'm curious whether you agree, and if not, why.

I was wearing my natural-philosopher hat earlier, but actually in my day job I'm a very engineering-oriented person, and I think something this community in particular has to do better at is this: sometimes we have an AI safety problem X for which there is a motivation and technical formulation A and a motivation and technical formulation B, where A is short term and B is long term. Always prefer A first, at least to convince other researchers, not least because the solution will often work for both. I'll give two examples. Poisoning: we have it today, anti-vaccine propaganda on social media, poisoning against democracy. And take safe interruptibility: it's also a problem we have now with addiction to social media; it's a safe switch-off problem — we're not able to switch off Instagram. So always keep this in mind. And the good news is that those are areas on which I personally, with a bunch of friends, have made significant technical progress, which is good to know and to build on. So I think we have to keep an eye on both short term and long term, and go back and forth between the two.

Yes, so on that note, something I forgot to say: I think another appealing property of reward modeling is that it's going to be useful for all sorts of examples of narrow alignment, and people are already starting to recognize much more broadly in the machine learning community that we can't write down reward functions even for really basic tasks that we care about getting our systems to do.

If I can pick up on that: I certainly agree with some of the points you're making. I think reward modeling is important, and it's important not only if you believe there's a good chance of a short timeline for extremely advanced or AGI systems, but also to the extent that we are producing increasingly capable AI systems that are interacting more and more with humans, even during the ramp-up as the kinds of tasks we can assign to them increase. It is important that they are able to be flexible in interpreting what the user wants. Anca has been doing some research where you have robots helping you in the home; they are nothing close to superintelligent, but it is important to develop good methods for these systems to interface well with people and figure out what people are trying to tell them. A very important thing that comes up in this kind of work is that these systems need to be cognizant of the fact that they probably don't have a perfect model of what the human is trying to do, and I think that will still be true once we get to extremely high levels of intelligence. So giving these systems the ability to be humble when necessary, and to acknowledge that maybe they're not quite sure what the human wants, is going to be a very important part of making them robust, not only in the near and medium term but also in the long term.

Are you putting that forward as something in addition to reward modeling, separate from reward modeling, or as a reason why reward modeling is important?

What I would say is that reward modeling needs to have this humility component in it. Any sort of reward modeling that would allow a system to ultimately believe that it now has a good model of the reward, and can go forward and essentially ignore the human from now on, is, I think, going to be a very dangerous system — it loses, of course, the corrigibility property. I don't know if you wanted to pitch in as well.

I guess one of the things I've been thinking about at this conference is that I've been seeing more parallels between that and the AI services approach and the iterated amplification approach, and how they fit into somewhat similar models from different angles. And on the short-term relationship: I think doing reward modeling better is also the better, more scalable solution to some of the near-term issues people are talking about today. If I look at something like fairness, I think the wrong approach is to write down the one equation that defines what fairness is and make a system that does that; the approach that's going to scale better in the long term is to figure out how to learn what fairness is and how to make trade-offs in this complex space of values.

Yeah, and I think reward modeling is really important, but it seems to me that a thing we need to do to make progress is to make this reward more structured in some way, and maybe we need to draw on cognitive science insights more. There's the extreme of building in the fairness function, there's learning it completely, and there's somewhere in between, and I can't figure out how to get to that in-between.

I'm curious what you think about ethics review boards, or some analogous structure — or what is wrong with ethics review boards such that we shouldn't have them — or maybe something like a standard section of every paper that is an ethics acknowledgement section. I just want something in machine learning that makes it very default and normal for everybody to think about the ethical and social implications of their research, and I'm curious what you think is the best way to do that.

I like the idea of making it a standard to include information on things like ethics, or — what I would think of — scalability to long-term systems: being clear in your paper whether you have thought about fairness or not thought about it at all, and whether you have thought about how this algorithm will scale to long-term systems or whether you haven't, in which case it should not be applied in a case where you're using a vastly more capable system than you had when the paper was written.

I think, given how internal review boards normally work in universities, this seems to be something that usually comes out of the researchers themselves. The university can sometimes establish requirements for certain types of research; for example, every time we do an experiment with human participants we have to go through the university's internal review board, and there we do have to consider how the experiment affects the participants and what positive outcomes we're hoping to get out of the research. For general machine learning research this is not usually the case, and if it is to become the case, at least in its initial stages it would have to come out of the research community. I certainly think it wouldn't hurt for it to become a more widespread practice for ML researchers and AI researchers in general to think about the ethical implications of their work and include that as part of their papers. I think of this in a similar way to how I think about replicability, open-sourcing the code you used, and reporting all the hyperparameters for your experiments: it's a matter of good scholarship, almost, and it would be great if it became part of the research culture.

Absolutely. All right, I think we have time for one more question.

Are you concerned about external pressures from academia and from publication on your epistemics related to research prioritization, and if so, what can be done about it?

I have a negative answer, but maybe somebody wants to start with a positive one.

I have a positive one. I think the model of peer review and convincing your peers remains one of the worst — sorry, the least bad — models we have so far, so I think we should keep it up for now. It's important to go fight in peer review and convince your peers, to go to ICML and NeurIPS and get your paper accepted and present it. It's still pragmatic, because this is how you get the workforce joining you on the problem. This is at least how it worked for us, while we were almost starting out of the blue, not part of CHAI or the other famous technical AI safety research groups. And as Dylan said, we have to work to make this panel contain a more diverse set of people, not three people from Berkeley — though it's a good thing that you were pioneers, and you should keep on leading the research. To answer Max's concern that we have to connect what we are saying now to action items: one thing that speaks to my heart is that we have to make single groups less over-represented here. And beyond Berkeley, we also have to make sure that technical AI safety doesn't have such a high ratio of out-of-academia researchers; in, say, organic chemistry or whatever other field of research you would have a higher ratio of people from academia, and I think that's very healthy, because it allows a lot of freedom. Whatever freedom you can get today — and I'm aware that some industrial groups give a lot of freedom to their researchers — there are some things you just cannot do if you're not in academia. I personally want to collaborate with social scientists; I'm trying to do that with some colleagues, and if I were coming from an industrial group they might simply not talk to me, because you have data on people, and what do you want to do with this research, and what is it going to do for you? So there are some things you can't start outside academia, and we should make sure that something like what people call a CERN for AI research, something on that scale, is created and generously funded, so that a generation of people like us can go there and play the multiplying role that academia plays, because this is where you can multiply the workforce on interesting scientific topics.

Great. So I absolutely agree with what you're saying; that was going to be my point. Essentially, no, I'm not concerned about peer review. I believe very strongly in peer review, and I think one of the worst mistakes this community could make would be to turn its back on the rest of academic research in artificial intelligence, machine learning, and automation science. We need to become ingrained and well established as part of the general effort in the development of AI, and the right way to do this is to convince our peers, to have a well-argued debate with them. I don't believe in backdoors for this; I think we should enter academic publication through the front door of peer review, as hard as that seems.

Because whether your research is short term or long term is not based on the motivation section of the paper; it's not based on whether you cited Bostrom in the introduction. In fact, just as in a computer vision paper it's kind of strange to say "our goal is to get systems to see," it's also kind of strange to say "our goal is to have AGI" when your paper is about safe exploration in RL or something. So there's a lot you can do about framing that makes the publication process a lot easier. Also, I personally am skeptical of long-term solutions that don't have some sort of short-term output — not that right now there needs to be a solution to fairness that comes out of your value alignment work, but it should be related. I don't really believe that all of a sudden superintelligence happens and then there are these new problems that are totally novel; I haven't yet seen a problem that doesn't have some sort of soft version. So I think the peer review process is almost like a regularizer to reality. There are probably some ways in which it could influence your research direction a bit, but I think they're a bit overstated.

Yeah, I want to maybe slightly disagree and say that I think we need spaces where we can think about research priorities in terms of AGI and have people who support that and collaborate on it. I've been lucky at Toronto to have other people who are also thinking about these problems from this perspective, because I still think there are problems we might miss, or might approach in a different way, if we were only thinking about short-term things. I do think it's important, and in my work I'm trying to do things that we can test, or do some version of, and get more information about today. But I don't know if I would have generated the same kinds of problems and solutions that I'm working on if I had only been thinking about short-term things. That being said, in most of the scenarios where AGI goes well, at some point we need to convince the research community of our ideas, and that needs to happen at some stage, so you do need to engage. But I think we need spaces where we can say we're choosing research priorities in terms of AGI and have people supporting us.

Smitha, I want to invite you to respond to that quickly.

I mean, I don't disagree. There are different processes for generating ideas: you could start from the short term and think about how to make sure it works in the long term, you could start from the long term and think about how it concretizes to the short term, or you could think about both simultaneously, and it's probably good to have different processes.

Yeah, and in any case, I think having some form of quality control is always a good idea, and nobody wants to be doing research in a cellar without talking to anyone; that's probably going to lead to diverging into some less useful results.

Great, so let's thank our panel for a very interesting discussion. [Applause] [Music]
Info
Channel: Future of Life Institute
Views: 1,501
Keywords: artificial intelligence, agi, AI
Id: j-xQGHEbWH0
Length: 40min 53sec (2453 seconds)
Published: Wed Jun 05 2019