Rage Against the Machine: The Good Fellows Discuss AI | GoodFellows

Video Statistics and Information

Captions
"...Regulations are really only put into effect after something terrible has happened." "That's correct." "If that's the case for AI, and we only put regulations in place after something terrible has happened, it may be too late to actually put them in place. The AI may be in control at that point." [Music]

It's Wednesday, April 19, 2023, and welcome back to GoodFellows, a Hoover Institution broadcast devoted to economic, societal, political, and geopolitical concerns. I'm Bill Whalen, I'm a Hoover distinguished policy fellow, and I'll be your moderator today, joined by our full complement of GoodFellows: the economist John Cochrane, the geostrategist Lieutenant General H.R. McMaster, and the historian Niall Ferguson. Niall, I mention you last not out of spite but because I want to wish you a belated happy birthday, my friend.

Thank you very much indeed. I have now entered my 60th year. This is troubling.

The good news, according to Wikipedia, is that you're not 60 years old.

But I have entered my 60th year. I will be 60 in 364 days. When you turn 59, you should be honest with yourself: you are now in your 60th year.

Well, let's see — I think we have three people on this broadcast who've already passed that milestone and who can give you plenty of advice on what to do when the odometer passes zero. Two topics we're going to explore today: we want to do a segment on the recent leak of Pentagon intelligence and the significance thereof, but first we're going to talk about artificial intelligence, more and more a staple of the daily news and a topic we haven't really addressed at length since we had Tyler Cowen on the show last November. I'm looking forward to this because I find it a fascinating topic, but also because I think John and Niall do not necessarily see eye to eye on all things AI. That means, H.R., that you get to play the role of peacemaker or troublemaker, depending on how much trouble you want to foment here. Niall, let's begin by referencing the terrific column you wrote recently for Bloomberg, with the very catchy headline "The Aliens Have Landed, and We Created Them." To those watching or listening to this broadcast who haven't already, please track down this column and read it; it's a very insightful look into the pros and cons of so-called large language model AI produced by the likes of ChatGPT. This passage caught my eye, quote: "How might AI off us? Not by producing Schwarzenegger-like killer androids, but merely by using its power to mimic us in order to drive us individually insane and collectively into civil war." It sounds like what you're suggesting here, Niall, isn't so much the end of mankind but a very slow strangulation of culture, courtesy of this technology.

Well, let me put my cards on the table: I'm with Elon Musk on this. I think this is a far more dangerous technological leap forward than is generally realized. That's not because I'm a Luddite, because there are clearly things that artificial intelligence can do that are great, and I'll give the example that got much less coverage than ChatGPT: DeepMind's AlphaFold, which is able to determine the structures of proteins in ways that we simply couldn't using our own limited brains. The thing that worries me is a particular form of artificial intelligence, the large language model, and these large language models are getting larger and more powerful at a truly astonishing rate.
If you thought ChatGPT was amazing — and maybe you played with it, as I did — when you get to play with GPT-4, which I haven't yet, but I know people who have, you are going to be even more astonished. Why? Because it can mimic us; it can mimic human intelligence with uncanny precision. My friend Reid Hoffman is a big fan of this — indeed, he's a backer of OpenAI, the company, formerly the non-profit, behind GPT-4 — and he gives an example in his new book where he asked the AI: how many restaurant inspectors does it take to change a light bulb? Answer the question in the style of Jerry Seinfeld. And it did, with extraordinary precision. What's the problem here? It's not Skynet. It's not that we're about to enter the Terminator movies, and AI-enabled robots with Schwarzenegger muscles are going to be roaming the streets trying to kill anybody who might in the future resist Skynet. That's not the thing that's going to happen. The thing that's going to happen is that these large language models are going to be so good at mimicking us that they're going to drive us collectively crazy. If you thought social media drove us crazy in 2016, just wait and see what the large language models can do in 2024. That is the thing we should be concerned about. There are apocalyptic visions, and in the article I cite the most apocalyptic one, which I think is worth name-checking, because Eliezer Yudkowsky argues that if we allow AI smarter than us, we shall all die. I'm not going to go as far as that. I'm just going to say that if we create intelligence superior to ours that can mimic us — and remember, it's not that it's like our intelligence; it can just fake it; its intelligence is completely different from ours and works in a completely different way, so we shouldn't really call it artificial intelligence but non-human, or inhuman, intelligence — if we create that, it is going to have absolutely diabolical effects on politics and public discourse generally. As I said at the end of the piece, it'll look like Raskolnikov's nightmare at the end of Crime and Punishment, when he imagines the whole world just going insane and tearing one another apart. That's what I'm concerned about.

Well, I think everybody's lost their minds on this one. I'm a huge booster: finally we have something technical to boost the economy. It's a wonderful tool. I think we should remember that no pundit has ever forecast with any accuracy what the effects of a major technological transformation like this would be.

That's not true. That's not true. Orwell correctly foresaw the consequences of the nuclear weapon in 1945, in an essay he published after Hiroshima and Nagasaki, in which he said it would transform the nature of geopolitics by creating a cold war, a permanent peace that is no peace. So you're wrong, John, and it's very important to nail this, because in many ways AI is as dangerous as atomic weapons. Sorry to interrupt you, but it's so important.

No, it's good — I like treating each point in isolation. I had in mind the printing press, the steam engine, and the computer, which were the same kind of transformational thing — and I do believe this is transformational — where nobody at the time had any idea what they were going to do, and they unleashed both good and bad things, but in the end tremendously good things for all of us.
This — and I think Niall is exactly right to push back on the word — is so far not intelligence; it is a mimic. We've all been hearing that the robots are coming to get us for decades now, and they always seem to be just a little bit ahead. But this is, for the moment, a mimic, and a tremendously useful tool that creates lots of — often inaccurate — language. It is only a large language model. For the moment there is no crisis; there's no moment at which we have to put this back in the bottle now because otherwise it will get out. What are people worried about with large language models? Mostly misinformation, which is what I think the Catholic Church might have been worried about with the printing press if it had had the chance to regulate it when it came out. The push to regulate the internet right now is really about censorship — and you know what's going on with the regulatory state that wants to censor everything. There will be all sorts of volatility. The thing that caught my eye most recently is that, just as people quickly figured out how to manipulate Google search rankings, they are figuring out how to manipulate the AI, so that when somebody asks who John Cochrane is, the AI will answer that John Cochrane is the world's greatest economist. Manipulating the AI is going to be the next game, and yes, if you're worried about the spread of misinformation, it's going to spread a whole lot of it, just like the current internet does. But censorship, which is what is really on the table, is not the answer, because this is right now a language model. And the other part is, people say "we" need to stop it. Hey, who is the "we"? The "we" is our current regulatory state. Do you think the capacity of our current regulatory state — the same people who run the FDA, the CDC, and the Federal Reserve — is such that they are going to be able to judiciously pause AI in just the right way and not turn it into a massive attempt at political censorship? And I would add, last, that China is not going to stop developing AI. I think we would be on the road to the same disaster as — and if I get the history wrong here Niall will criticize me, but I'm going to try the story anyway — the Chinese emperor who famously decided, no, we don't do ocean-going vessels; the Portuguese can have that. There's no way they're stopping AI. So I think we're getting way, way ahead of ourselves on this decades-long prediction that the robots are coming to get us. We have nothing like that; we have a very interesting tool.

Let's get H.R. into the conversation here. H.R., The Atlantic recently ran a column that called AI, quote, "the third revolution in warfare": first there was gunpowder, then nuclear weapons, next, quote, "artificially intelligent weapons." Do you agree with that, and do we see any signs of AI at work in the Ukraine-Russia conflict?
Yeah. Artificial intelligence, as you know, is a range of technologies that are combined to achieve machine learning — autonomous learning, as Niall has mentioned, the ability of these large language models to learn and get better. There are also technologies that allow you to do big-data analytics, to sift through and synthesize vast amounts of material that would otherwise remain fragmented, and therefore to achieve maybe a higher degree of understanding in combat. They also allow for automated decision-making — for example, the ability to do image recognition and classification of targets, tied to communication networks, which then allows for semi-autonomous application of lethal force. These are all really big causes for concern. John is optimistic about AI, and I'm going to break my role here and be a little bit more pessimistic, with Niall. I believe everything John said in terms of the possibilities, but I remember what was said about the internet at its outset and how it was seen as an unmitigated good — how could it be bad? There wasn't really an anticipation of the degree to which we would become more connected to each other than ever electronically but more distant from one another than ever socially, psychologically, and emotionally; of the effect the internet would have on privacy, for example, and on trust in individuals; of the degree to which the competition that began with the internet, especially for advertising dollars, led to algorithms designed to get more and more of those dollars through more and more clicks, and to get more and more clicks through more and more extreme content, which has polarized us and in many ways pitted us against each other. So with a new technology we just have to ask: what are the possibilities, but also what are the dangers, and what can we do to mitigate the dangers? Now, the AI is not going to kill us; it's going to be people employing AI who kill us. And I think the competitive nature of AI adoption — whether in war or in commercial and other applications — is really what gets us into trouble. Oftentimes, in the race to be best, the race to outdo the other who's employing these technologies, decisions are made that I think are dangerous — in this case to privacy, to our ability to maintain any secrets (which we're going to talk about later) — and it can be an assault on the trust that binds us together as a society and on the cohesion of our society, and it may even play a significant role in the extinguishment of human freedom, as artificial-intelligence-related technologies already are doing in places like China, and in the places to which China exports these technologies, like Zimbabwe, for example. So there's a downside, for sure, and John, I don't think you disagree with that. It doesn't mean we shouldn't take advantage of the opportunities, but we have to be very conscious of the dangers.

Let me ask H.R. a question — an honest question. I would have thought you would be just chomping at the bit at the possibilities here, especially for us. Our military for 50 years has been ahead of every single technical revolution; that's how we've become so spectacularly good in military affairs. And certainly a free, open AI didn't come out of the great Chinese industrial policy on AI; it came out of America. I would think you would see this as a tremendous advantage for us.
Now, it's not so clear whether it's an advantage to the offense or the defense, but AI can pierce the fog of war. As we'll get to in the intelligence segment, the problem is how you integrate all the intelligence, how you figure out what's going on. I would just be salivating at AI as the intelligence integrator: you're sitting in your tank running across the desert, the screen comes up and says, we've put it all together, we've finally figured out where the opposing tanks are, here is exactly what you need to know — and that lifts the fog of war. To he who gets there first, this ought to be great news.

It is, in that connection, and this work is already ongoing in terms of accessing a broad range of sources of intelligence, including the vast amount of open-source data that's available now. It's astounding the degree to which imagery intelligence, which used to be almost exclusively classified, is available to everyone. But as you mentioned, it's really about combining that intelligence with signals intelligence, with open-source reporting and skimming of social media, with human intelligence, and, in wartime, with a vast number of interrogation reports, which can often tell you the most important information about the enemy. AI can help establish patterns, so you can identify patterns, but even more importantly, in wartime, it can help you anticipate pattern breaks — to see behavior that's different from the pattern that's been established. All this work is ongoing; there are some really innovative companies here in the Valley, some you haven't heard about, that are developing analytical tools that are really quite astounding. They apply to war, but they also apply to other problem sets like natural disasters or wildfires, allowing you to anticipate well in advance what you have to do to mitigate those disasters, and giving you the advantage of seizing the initiative — which is really what you want to do in combat: to seize the initiative by gaining surprise, by gaining a temporal advantage, by imposing a tempo of events on the enemy to which the enemy cannot respond. The range of artificial-intelligence-related technologies is important for that. And an area that isn't talked about enough: it's also revolutionizing logistics. I'm thinking of the work that a company down the road here is doing with the Department of Defense to anticipate mean time between failures for component parts, to eliminate phased maintenance in favor of a much more efficient maintenance model, to anticipate logistics demand, and to manage supply chains in a much more effective manner. So there are all sorts of ways in which artificial intelligence is already changing the character of warfare.
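The "establish a pattern, then flag the break" idea H.R. describes is, at bottom, anomaly detection. As a purely illustrative aside — a minimal sketch with invented data and thresholds, not a description of any actual intelligence or defense system — the same logic can be shown in a few lines of Python: learn a rolling baseline from recent observations, then alert when a new observation deviates from it by more than a chosen number of standard deviations.

```python
# Toy "pattern break" detector: flag observations that deviate sharply
# from a trailing baseline. Hypothetical data and thresholds only.
from statistics import mean, stdev

def pattern_breaks(counts, window=7, threshold=3.0):
    """Return (index, value, z-score) for points that break the trailing
    `window`-observation baseline by more than `threshold` std deviations."""
    alerts = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        sigma = sigma or 1e-9          # guard against a perfectly flat baseline
        z = (counts[i] - mu) / sigma
        if abs(z) > threshold:
            alerts.append((i, counts[i], round(z, 1)))
    return alerts

# Hypothetical daily vehicle sightings at a road junction: a steady pattern,
# then a surge on day 10 that breaks it.
daily_sightings = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13, 41, 38, 12]
print(pattern_breaks(daily_sightings))   # -> [(10, 41, 26.1)]
```

Real fusion systems obviously work over many correlated sources rather than a single count series, but the underlying step — baseline, then deviation — is the same.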
No — stop, stop, Professor Cochrane. You have to stop, because I've got to make two really important points here. First of all, what's critical in H.R.'s domain of warfare is the point at which the AI makes the decision to shoot. Now, we have said that we're not going to allow that to happen; the U.S. Department of Defense says that AI will be used to assist human decision-makers. The critical question, as John has already suggested, is: what about China and other adversaries — particularly China, which is clearly the closest in terms of AI capability? If you read the Eric Schmidt and Henry Kissinger book on AI, one of the most striking points they make is that when you observe how AI plays chess, you realize that it thinks very differently from a human player. If you unleash it on the battlefield rather than the chessboard, you might find the costs of war — not only to the adversary but to oneself — much higher than would be the case with human commanders. In chess, AI will sacrifice its own pieces to gain strategic advantages; that is not the kind of military command we want to enable. So that's point one. Point two: in the modern battle space, as we see very clearly in Ukraine, there is no clear separation of the battlefield from the home front, because disinformation and misinformation play a very important role in maintaining or eroding morale. Now, when Reid Hoffman asked GPT-4 to list the possible harms of empowering large language models, the third one it came up with was the one that made me sit up and pay attention. Let me quote the AI's answer to the question of what the potential harms of empowering large language models would be: quote, "Manipulation and deception: large language models could also be used to create deceptive or harmful content that exploits human biases, emotions, and preferences. This could include fake news, propaganda, misinformation, deepfakes, scams, or hate speech that undermine trust, democracy, and social cohesion." So don't take it from me — take it from GPT-4. This is a weapon that is profoundly threatening, not just on the battlefield, if we empower it to decide who lives and who dies, but, I think even more dangerously, in our own civilian lives. We were sent kind of crazy by social media already — it seems to me not accidental that mental health problems have become more acute among young people since the advent of social media — but this is a much more powerful tool than anything we've seen in the past ten years, and I think we ain't ready for the amount of fake content, and particularly deep-fake video content, that is coming our way. When H.R. started to agree with me on the pessimistic side, my first response was: is this really H.R., or is it in fact a deep fake of H.R. that I set up to agree with me to win the argument on GoodFellows? That's the kind of question we're going to be asking.

Just a couple of quick points on this, on warfare. This is important in terms of the degree to which artificial intelligence can be used to generate uncertainty. A lot of people assume that, because of the big-data analytical capabilities, the machine-learning capabilities, and the large language models' ability to access all sorts of information, future war will shift fundamentally from the realm of uncertainty into the realm of certainty. That was kind of the thesis of the so-called revolution in military affairs in the 1990s as well, and I think it's exactly wrong. I think this technology will actually lead to a higher degree of uncertainty, because of the ability to inject bad information, contradictory information, and deep fakes, and because, as Niall is saying, our ability to do content-based verification of materials is eroding quite rapidly. But John, go ahead.

Well, in war, China is going to be doing it, so if we disarm ourselves, good luck to us. There is going to be a race, and no one really knows how it's going to end — nobody knew what the machine gun was going to do to war; they got that one all wrong — and certainly you want to be thinking about our AI and racing with China's AI. But to Niall's point — oh, it might manipulate the poor little peasants and spread misinformation and bad news — that's exactly what I think the Catholic Church would have felt about the printing press, and they might have been exactly right.
That argument — disarm yourself of this powerful new tool — is exactly the argument that would have stopped progress. So yes, there's going to be wild stuff going on, there are going to be moves and counter-moves, there's going to be crazy stuff coming out of the AI, and the average person needs to learn a little more skepticism. But putting the same idiots who are in charge of the National Environmental Policy Act in charge of certifying the safety of every single one of H.R.'s wonderful AI inventions, and taking 10 to 15 years to figure it out before they're allowed to pursue that research, is absolute idiocy.

If there had been a John Cochrane around between 1945 and 1969, presumably there would have been no conventions to limit the proliferation of nuclear weapons, and no conventions to limit the use of biological and chemical weapons. You have to remember, John, that we have in fact succeeded in restraining a technological arms race before, and it's extremely important that we did, because if we hadn't put things like the Non-Proliferation Treaty in place, there would be many, many more nuclear powers than there are today, and it would be much easier for terrorists to get their hands on nuclear weapons. The same goes for chemical and biological weapons. So I don't think we should simply assume that history tells us to let the technology rip, because not everything is identical to the printing press, and I don't think AI is identical to the printing press.

I'm with you on this, but where we are with AI, it's about 1923 and we've just discovered quantum mechanics. The great AI war that's going to come devour humans has not even been thought up yet, let alone developed; it's still a vague thing. Yes, we should be having all sorts of international agreements, as we do with other very dangerous weapons, to try to limit it on the battlefield. But the catastrophism — that we have to stop research on quantum mechanics because it might lead to bombs that might get out of control that we might someday have to do something about, that we have to put the federal regulatory mechanism in charge of this right now to stop that possible development — that, I think, is way too early. Yes, we will certainly try to contain it, as we try to contain all sorts of dangerous things; war itself is not a terribly great thing, and we have a whole apparatus of international agreements to try to hold down that level of violence.

Absolutely. So can I offer maybe something we can agree on? Because there is significant destructive potential — from a social perspective, from a military and security perspective, from various perspectives — I think it is important to try to anticipate the dangers and, not necessarily regulate, but come up with ethical standards or some other means to limit the way that AI is employed. But of course, as you're mentioning, John, it's going to be a competitive environment. If it has to do with the application of artificial intelligence to war-fighting capability, are the Chinese going to sign up for our ethical standards? Probably not.
So I think we have to look at what is really in the realm of the feasible, at who is in competition with one another in a particular application, and then at the range of laws, regulations, and ethical guidelines that the parties engaged in that competition can agree on. That's the only way, I think, you can really mitigate the downside risk associated with some of these technologies.

I am highly skeptical of the argument that we can just wait until some future date — which is precisely what Sam Altman, the founder of OpenAI, says — because if you look at the exponential growth in the power of large language models, we don't have a couple of decades to play with. These things are going to have, if not artificial general intelligence, then certainly far greater power than anything we've ever produced between human ears, very soon indeed. And finally, the idea that we can't in any way constrain the Chinese is wrong, because we have in fact succeeded in constraining experiments that were going on in China with human genetics. But if we don't create international conventions, then we have absolutely no chance of constraining them. So we really do have to do this, and we have to do it fast, while the U.S. still has a lead — which, interestingly, it has. If you'd asked us about the AI race when we began GoodFellows three years ago, I suspect we'd have said — because there was reason to think it — that China had a chance of winning; that was the conventional wisdom back then, because it seemed like the Chinese had all the data and the computing power. But the large language model race they really lost; they're quite far behind, it turns out. So this is a great moment for the U.S. to start setting international standards, in the same way the U.S. was able to set international standards on atomic weapons when it had that lead over the Soviet Union. If we wait too long, it'll be too late.

And John, I was just going to ask you: don't you think the Chinese Communist Party fears large language models? I'd like to hear both your points of view on this, because they've been quite successful with the Great Firewall. President Clinton said trying to control the internet would be like trying to nail Jell-O to the wall; well, the Chinese nailed Jell-O to the wall pretty well, in terms of using the internet for state control rather than having the internet break down the mechanisms of their control. What effect does the range of AI technologies have on the Chinese Communist Party's effort to police the thoughts of its population and maintain its grip on power?

There are two views on this. One, of course, is that since it's embedded in a neural network that nobody understands, this will be a way to break through, and Chinese people will be able to get all the information they want. The other is that we've already quickly seen how the people running these things are able to channel them — it was giving right-wing answers, and they quickly changed the kinds of answers it gives — so it may be amenable to censorship as well. But I want to end on a note of agreement here. I think we could have an international conference that would pretty much agree: don't put the AI in charge of pulling the trigger. The ability to pull the plug — how the AI connects to the real world — is a step that everyone pretty much agrees you take very slowly.
So I agree with Niall on the military side of it: you have to be cautious. But we'll be able to do that without putting our current AI development through the wringer of regulatory censorship because we're worried about the spread of misinformation — which usually means one party's view of events.

Niall?

Well, I'll take "I agree with Niall" from John and just leave it there, so that we can get on to our next topic. But I think the reality is that you can unleash a Chinese AI on all the information in the world without making that information available to the Chinese people; that's not a difficult technical problem. The problem the Chinese actually have is that you need enormous amounts of computing power to run very large language models, and one of the interesting consequences of our ability to restrict China's access to the most powerful semiconductors is that they actually don't have the chips that you need. This is a big and important consequence of the kind of economic warfare — perhaps that's the wrong word — the economic measures we've been using to constrain China technologically. So, as I said, there's no doubt that the U.S. has established a lead here, but the lesson of the 20th century is that when you have that lead, that's the time to set the standards, before the totalitarian regime catches up. With that, Bill, I'll let you segue to the next topic.

Well, thank you, Niall. Let's move on to topic number two, the so-called "geek leak" scandal. This is Jack Teixeira, the 21-year-old National Guardsman stationed on Massachusetts's Cape Cod, accused of posting top-secret national defense information on, of all things, a social media platform, presumably to impress his gamer buddies. As a result of his actions, Mr. Teixeira is looking at up to 15 years behind bars. H.R., two questions here to get this going. One: what does it say about the state of American intelligence gathering — and holding on to said intelligence — that a 21-year-old kid who's pretty low on the intelligence totem pole can so easily traffic in this sort of information? But before that: was there anything he leaked, anything that came out of this, that really caught your eye — that you found either eye-opening or jaw-dropping?

Well, I think what it shows you is that we're pretty good at gathering intelligence and analyzing intelligence, but we're not very good at keeping secrets. It reminds me of the Seinfeld episode about the car reservation: you can take the reservation, you just can't keep the reservation, which is the important part. "Do you have my reservation?" "Yes, we do. Unfortunately, we ran out of cars." "The reservation keeps the car here; that's why you have the reservation." "I know why we have reservations." "I don't think you do — if you did, I'd have a car." So: you know how to take the reservation, you just don't know how to hold it. And I think what you're seeing is the result of a cultural shift in intelligence that occurred after 9/11, after the strategic surprise of the largest terrorist mass-murder attacks in history. The phrase in the intelligence community became: we need to shift from "need to know" to "need to share."
"Need to know" means that only the people who really need to know get the intelligence, but it was the stove-piping of intelligence across the various agencies that prevented more people from connecting the dots and anticipating that Al Qaeda was going to fly airliners into the Twin Towers, the Pentagon, and probably the U.S. Capitol, which was their plan. So I think there now has to be a corrective in the other direction. It is ridiculous that somebody without more of a demonstrated record of reliability would receive the highest clearance — and not just the clearance, but access to other compartmented materials. If you're going to be a tech, you might need access to systems, but there have to be ways to enforce the rule of least privilege and to compartmentalize and layer access to these kinds of systems. What's very important, obviously, is that he was caught and that they did the forensics on this. I wish it were a lot more than 15 years, because there ought to be a message sent to anybody else who thinks it's okay to compromise intelligence: think twice before doing so.

Well, I think there are a couple of points that arise here in addition to what H.R. has said, with which I largely agree. The first is that the United States national security state — the system, with all of its different agencies — classifies a lot of content. Matthew Connelly from Columbia recently presented his new book on this subject to the Hoover history working group, showing the ways in which habits of classification have, over the last 50 years, led to a kind of classification mania. Many things get classified that these days are in fact available open source, so there's a certain arbitrariness about some of the classification that's going on; almost certainly way too many things are classified. The second point is that if you have a very large national security state, you have a great many pretty young people employed by it, and I'm absolutely sure this guy is not the only nerd who would like to gain some status in an online chat group by showing how much he knows. There will be others, and I think it's a problem inherent in the system now that too many things are classified and too many people have access to them. I'd add one final point, though: it's not clear to me that world-shattering revelations came out here. I don't think there are many European governments that are shocked — shocked! — that the United States is spying on them; that's just not new news. Nor was anything that came out about the war in Ukraine — for example, casualties on the two sides — a revelation to me; it pretty much aligned with what we had already figured out from open-source information. I asked a senior military figure — not H.R., someone else — if there was anything damaging that had come out, and he said the damaging, new, and somewhat embarrassing thing is the extent to which special forces operators from NATO countries are in Ukraine engaged in training. I think it's right to say that wasn't in the news until these leaks, but otherwise I don't think there was anything really earth-shattering.

I agree with Niall: it strikes me that way too much is classified, and if we had a lot less classified, and it were perfectly obvious that what was classified should remain secret, we'd have less of a problem. As it is, there's a sort of attitude of "it's idiotic that this is classified, so why do we have to worry about it so much?"
We want classification to hold real secrets, but we're holding as classified things that the public doesn't know and that our enemies know perfectly well. And I think connecting the dots is really important. So many of our failures in the U.S. were not about having the intelligence; they were about connecting the dots and putting them together. Think about what happened during COVID, which was a similar effort to classify, if you will — to hold information. It took a nation on Twitter to put together, in real time, whether masks and lockdowns were working or not, in the face of a lot of politics; you needed the Jay Bhattacharyas out there thinking about the evidence and communicating it, to get in real time to the right answers. So I'm worried by H.R. saying, in effect, that we've realized this problem and now we're going to go back to siloing information. Is the point of classification to keep the public from learning things, or is it to protect our sources and assets? When it turns into keeping the public from learning things, that's dangerous. One last point: this is another argument for AI — maybe the AI can be taught to keep its secrets and to connect the dots better.

We don't want to go back to stove-piping information. Fighting in Iraq and Afghanistan, especially in the early years, it was a real struggle to combine databases to gain visibility — in this case, of terrorist organizations — and to be able to go after them effectively. We largely did that: we were able to bring together a range of databases and apply what were then brand-new analytical tools, now fairly routine, in a way that is really analogous to the large language model — to make sense of all that data, to geolocate individuals and other important pieces of information, to make connections between nodes in networks, to understand the relationships between them, to see flows of people, money, weapons, narcotics, and precursor chemicals through terrorist networks. That was focused, intense work by advanced research agencies as well as our intelligence professionals, and we don't want to give it up. But I do think you can still do that without giving access to somebody like this individual. The other thing that disappoints me: there's a chain of command in the military for a reason. Every soldier, every airman has a sergeant — where was this guy's sergeant? Where was the commander? There are also physical security implications. This is an important lesson for everybody who owns a business that entails sensitive technology or intellectual property: you need a holistic approach to enterprise hardening — cyber espionage, physical security, insider threats. You need a layered defense in place, with the rule of least privilege, and you need to employ software and AI now for anomaly detection in your systems. So this should be a broader lesson that applies not just to the U.S. government but to any company or industry that's involved with critical infrastructure, with holding people's data, or with developing sensitive technologies and intellectual property.
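H.R.'s prescription — least privilege plus automated anomaly detection on access — can be made concrete with a toy example. The sketch below is purely hypothetical (invented users, compartments, and thresholds; not a description of any real government or commercial system); it shows the two checks he mentions: access outside a user's entitlement, and access volume that breaks the user's normal pattern.

```python
# Hypothetical insider-threat checks: least-privilege violations and
# unusual pull volume. Invented data and thresholds for illustration only.
from collections import Counter

ENTITLEMENTS = {"analyst_a": {"UKR-OPS"}, "it_tech_b": {"SYSADMIN"}}

access_log = [
    ("analyst_a", "UKR-OPS"),
    ("it_tech_b", "SYSADMIN"),
    ("it_tech_b", "UKR-OPS"),   # outside entitlement
    ("it_tech_b", "UKR-OPS"),   # outside entitlement, and volume is climbing
]

def review(log, entitlements, daily_baseline=2):
    alerts = []
    # Check 1: least privilege -- was the compartment on the user's ticket?
    for user, compartment in log:
        if compartment not in entitlements.get(user, set()):
            alerts.append(f"{user}: accessed {compartment} outside entitlement")
    # Check 2: pattern break -- more pulls today than the user's baseline
    for user, pulls in Counter(user for user, _ in log).items():
        if pulls > daily_baseline:
            alerts.append(f"{user}: {pulls} pulls exceeds baseline of {daily_baseline}")
    return alerts

for alert in review(access_log, ENTITLEMENTS):
    print(alert)
```

A real system would baseline each user's behavior over time, watermark documents, and route alerts to a human security team rather than acting on them automatically, but the shape of the logic is the same.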
Now, Niall, somebody who pounced on this right away was Marjorie Taylor Greene, the Republican congresswoman from Georgia. Here's what she tweeted — I want to get your thoughts on this. She wrote the following, quote: "Ask yourself who's the real enemy: a low-level National Guardsman, or the administration that is waging war in Ukraine, a non-NATO nation, against nuclear Russia, without war powers?" This tweet got 16 million views. Niall, Ukraine remains an issue — I think there's a story out this morning that Kevin McCarthy and Zelensky just had a conversation, and he asked for F-16s. Do you see the leaks playing any role as Congress moves ahead and debates giving aid to Ukraine?

Well, I don't have a good word to say about Marjorie Taylor Greene's tweet, because defending somebody who breaches national security and leaks classified documents seems to me something that is in itself indefensible, and implying that there is some kind of racial or cultural dimension to the prosecution is beyond the pale. The problem is — and it will become more of a problem as the months pass — that a segment of the House Republican caucus is skeptical about the war in Ukraine and growing more skeptical with every passing week, and they are looking for material to work with. You can be sure that the Russian government has a strong incentive to provide material for them to work with, because from an information-war point of view Moscow wants to undermine Western support for Ukraine, full stop; it will do that in whatever way it can. There was a term in the 20th century for people who unwittingly became instruments of the Soviet Union, and that term was "useful idiot." I'll leave it there.

Isn't this from the person who said that there were Jewish lasers in space? A friend of mine texted me right after that and said he had an idea that we could open a Jewish laser bagel shop. He said, it's perfect: I'm Jewish and you're a general, so you can get lasers for free. So if there's anything salvageable from Marjorie Taylor Greene, maybe we should just laugh, because it is laughable.

She also confused the Gestapo and gazpacho in one memorable utterance, though I was told earlier today that that was done consciously in order to get social media attention — so maybe not quite as dumb as she seems.

I'll try to rescue something salvageable here, although I am the Ukraine hawk even on this panel and no fan of where the Republican Party is going on this one. But it raises a deep question, which I want H.R. to address: how much of what we classify now is classified in an attempt to hide things from the American public and to mold public opinion, as opposed to classified in order to protect our sources and methods, or to protect the secrecy of what we need to do in war? I think there is a lot that is classified in an effort to mold public opinion, and that breeds a whole lot of distrust, and that is a problem.

Yeah, I think there's a habit of over-classification. I was a big proponent of writing for release, which means writing at a level at which you can release it, or certainly use it with allies and partners. Oftentimes the "no foreign" (NOFORN) caveat was the most frustrating.
When I was in Afghanistan, we were working on sensitive topics in a sensitive effort, and my chief of staff was Canadian, and the no-foreign caveat used to really rile the hell out of him, because he was running our task force — he was the chief of staff — and he couldn't get access. So we came up with a classification called NOCAN — no Canada — and we put on the cover sheet: share with Iran, share with North Korea, but whatever you do, don't show it to a Canadian. We made light of it, but I just think it's a bad habit. And then oftentimes, John, materials are classified because they're deliberative — if they were released, it would have a negative effect on your ability to communicate what your policy is, what your strategy is, what your actions are, because you're just thinking it through. So oftentimes you'll see the classification marking with "pre-decisional" on it, and that's largely to protect it from the Freedom of Information Act, so that it won't be released prematurely.

Well, we talk on the show a lot about lack of trust, and about a lot of Americans' view that they're lying to us; this doesn't help. Go ahead.

H.R., a question for you: when you were packing up and leaving the National Security Council, how easy would it have been to walk away with classified documents?

I can't even imagine doing it. I suppose if I had wanted to, I probably could have. But for people broadly in the organization, there has to be a degree of trust. Now, there are ways to track these documents — you can have watermarks on sensitive documents, and there are other ways that, actually, I can't talk about — but there are ways to track them. They found this airman pretty quickly, so there are forensics in place such that, if there is a breach, people can get caught. And then, of course, I think you punish them to the full extent of the law.

Okay, final question. I'm going to go quickly around the horn, and then we'll go to the lightning round. There is a very bad pattern in this country: Edward Snowden, Chelsea Manning, Reality Winner — all individuals with two things in common: all leaked intelligence, all in their 20s, as is the case with Jack Teixeira. But in the case of Snowden, Manning, and Reality Winner, all three were hailed in some circles as heroes for what they did. A year from now, gentlemen, will we be talking about Jack Teixeira as a hero for what he has done, or is he just a sad, lost man? John?

I don't know, because I don't know what's in there. But Daniel Ellsberg and Edward Snowden did also do a service to the country. We did not know that the NSA was tracking geolocation data on every American citizen's phone calls; that was scandalous. So there's a balance of good, bad, legal, and illegal, but there was some good that came out of that.

Remind me where Snowden is resident these days, John.

Put him in jail and throw away the key if he comes back. But he did reveal the fact that the NSA was tracking every single phone call that you, Niall Ferguson, make, and that is incredibly —

John, I'll tell you, that's not true. That's not true. That data was being housed because there was no way to house selective data in advance; to get access to it, you had to go through due process and get a judge to allow law enforcement — not the NSA, law enforcement — to access it.
So the idea that the NSA is collecting on normal American citizens in a way that makes it cognizant of where you are is just not true. It's not true.

Niall — hero?

Oh, God, no. None of these people are heroes; they're all deplorable individuals. The only consolation I can offer is that, compared with Cambridge in the 1930s, the United States is not producing quite as many traitors as it might be.

Daniel Ellsberg and the Pentagon Papers — was that a mistake?

The Ellsberg case is somewhat different, because the Pentagon Papers were an internally commissioned inquest into what had gone wrong in the Vietnam policy of the Kennedy and Johnson administrations. What became the issue was that Ellsberg was the individual who, on his own authority, chose to leak them to The New York Times and other media outlets, and I think that case was then handled in ways that were inept, because leaking had become so endemic by the late '60s and early '70s that it posed a major challenge to the execution of U.S. foreign policy. If one naively portrays Ellsberg as a hero, one misses the important nuance that he took it upon himself to publish an internal government document at a time when the security of extremely high-level classified documents and deliberations was a major problem, in particular for a government that was trying to get the United States out of the Vietnam War. So let's not tell just-so stories about whistleblowers. One has to remember that no government — least of all the government of a superpower — can conduct its foreign policy without some classification, without some level of secrecy, and if government employees think they have a right to tell the things they find out in their government work to The New York Times, if that becomes the norm, I can assure you it will become impossible to make foreign policy and maintain the country's security. It's as simple as that.

This is very important, and I want to admit to being convinced by both of you. The ability to protect what you're doing during the deliberative process is crucial; I think the leak of the Supreme Court document was very harmful in that respect. I see in my studies of the Fed the problem that everything is too open: in a sense you can never be wrong, because it's always public, and you need the chance to throw ideas around, to be wrong, to be right. And I think what you're saying about the Pentagon Papers is that, in an alternative world, that study would have been read, would have been thought about, the information would have become public eventually, and the government would have made a perhaps better and less chaotic decision about what to do. We need to protect the deliberative process; not everything should be public. Thank you both.

Hey, just a quick personal experience on this. As National Security Advisor, leaks were a huge problem — a huge problem when I got there, unexpectedly, in February 2017. The leak problem initially came mainly from people who were part of the "not my president" movement against President Trump and were leaking out of the White House, out of the NSC staff, in ways intended to damage President Trump; they were leaking to former Obama administration officials, who were putting that material out on Twitter.
But then there were also people, later, who were leaking to damage individuals within the White House staff and to advance their own agendas, and people who were leaking for a whole range of reasons. The effect, as Niall mentioned, is really destructive to the decision-making process and to the effort to get the president the best advice and the best information. Bill, I'm sure you've had this experience in government as well: it's really destructive to trust. So what do you do when you're confronted with people who are leaking, being irresponsible like this, and breaking the law? You bring your decision-making group down to a very small, trusted circle, and then you limit the perspectives you have for important decisions. So it really is destructive to good governance. And you know who understands this? The Russians understand it, because when those leaks were happening, the troll farms — the IRA in Moscow — would immediately magnify all of them, and do so in a way intended to create divisions even inside the administration, between people. They were very, very sophisticated about it.

Okay, let's move on to the lightning round. Since we started the show with a segment on artificial intelligence, let me ask an AI-related question: gentlemen, give me the best depiction of artificial intelligence in popular entertainment, be it good or evil. H.R., why don't you give us your choice.

Okay, I'm going to give you probably an unexpected one: it's music. It's the track "In the Beginning," track one from The Moody Blues' On the Threshold of a Dream album from 1969. It's a conversation between the inner man and the Establishment, and the Establishment is like an AI voice. The inner man channels Descartes — from the Meditations — and says, "I think, I think I am, therefore I am, I think." Then the AI voice comes in: "Of course you are, my bright little star — I've got piles and piles of files of magnetic ink about your forefathers," and so forth; I'm paraphrasing. Then the inner man comes back: "I'm more than that — at least, I think I should be." And then another voice comes in: "There you go, man, keep as cool as you can; face piles of trials with smiles; it riles them to believe that you perceive the web they weave; keep on thinking free." So, hey, it's a good message — and then it goes into track two on the album, "lovely to see you again, my friend." Let's preserve our humanity as we integrate this artificial intelligence into our lives.

Well, I can't wait to see how you answer the marijuana question that's coming up, H.R. But John, your favorite AI choice?

How can I top that? I won't try. My favorite choice, obviously, will be HAL from 2001: A Space Odyssey. "Open the doors, Dave... This conversation can serve no purpose anymore. Goodbye." The AI that took over, the way everybody's worried about — and the human quietly unplugged him.

Okay, Niall?

Demon Seed — a movie you've probably all forgotten about; I tell you, this is revealing us as the Boomers that we are — from 1977, starring Julie Christie and the voice of Robert Vaughn. In that movie the AI takes control not only of its creator's home but of his wife, and impregnates her to create an AI, or part-AI, humanoid. "I have extended my consciousness to this house." I'll leave the rest to Netflix, where I guess you can probably still find Demon Seed.

Okay, I think the answer is Austin Powers's fembots.
Niall, what does Austin Powers say when he's trying to keep himself from being seduced by the fembots?

Oh, that's something I should know. "Baseball... cold showers..." "Margaret Thatcher naked on a cold day!" Those scripts are great. Come back, Austin — it's high time Mike Myers dusted down his ruffles and wig and gave us a new edition of those wonderful movies.

Next question. H.R., today, April 19, is the anniversary of the battles of Lexington and Concord and the so-called shot heard round the world. I'm not going to ask you what the most important battle in American history is; let's take it from a different angle: what, in your opinion, is the most underrated or least appreciated battle in U.S. history?

Gosh. All right, I'll stick with the Revolutionary War, and I would say Saratoga — though that's probably not so much underappreciated as... let me try for a more underappreciated one: how about Trenton and Princeton, which really turned the tide of the war? Very few generals, very few commanders, would have made the decision George Washington made to attack at Christmas, and to do so when his army was about to disintegrate, with enlistments running out and the terrible conditions at Valley Forge. But he took bold action, achieved surprise, and demonstrated to the Continental Army that they could achieve victory. So I'll say Trenton and Princeton.

Okay. Tomorrow is April the 20th, which is of course 420 day, celebrated across the world in honor of marijuana consumption. What is the official GoodFellows position on the legalization of weed? John?

Well, I'm a libertarian. Pot is not good for you, especially modern pot — I've certainly seen friends kind of descend into depression and pot — but when you take pot you don't hurt other people, and the costs of the war on marijuana, jailing a generation of young, especially minority, kids, are just outrageous. So I'll go all out: just because something is bad for you doesn't mean it should be illegal, and the costs of trying to stop it, and what that does for organized crime, are horrible. I'd go further — here's an outrageous one for you: Health and Human Services should subsidize the development of fun, recreational, non-addictive drugs. Why? Because not just pot but fentanyl is killing a lot of people. Give people what they want. If they want to waste their lives on drugs, in some sense that's fine — but the costs of what we're doing now, the judicial system costs, the incentivization of organized crime, the destruction of the inner cities, are just outrageous.

Niall? I agree with John entirely.

And H.R., are you pro or con on legalization? And, more to the point, you're a Deadhead: how do you avoid getting a contact high every time you go to a concert?

Well, luckily I have a friend in the band, so I'm backstage and largely upwind. I would just say that I don't think we know what the dangers are yet, and I think there's been a rush to decriminalize before understanding the negative effects. I would place it in the context of the broader effort to decriminalize drugs overall, and we know for sure that doesn't work — Oregon is a case in point.
So anyway, I've not studied it enough, John, but my inclination is to say that we haven't looked hard enough at the negative effects of long-term marijuana use.

An economic note, John Cochrane: Americans spend 30 billion dollars a year on recreational marijuana; they spend 18 billion dollars a year on craft beer and chocolate combined.

Well, alcohol is clearly just as bad for you as marijuana — cigarettes are probably worse — and I have a preference for alcohol over marijuana, but I don't see why the law should discriminate in favor of my drug.

Okay — people are drinking a lot less Bud Light, though, I hear. Is that really alcohol? Not if you're British: the horse had diabetes. There we go, and we will leave things on that cheerful horsey note. Gentlemen, a very spirited conversation — thank you for coming on the show today. We'll be back in early May with a new episode of GoodFellows. On behalf of my colleagues Niall Ferguson, John Cochrane, and H.R. McMaster, and all of us here at the Hoover Institution, thanks for watching, thanks for your support, and we will see you in early May. Take care.

[Outro music: "...therefore I am, I think." "Of course you are, my bright little star..." — followed by the GoodFellows theme: "I got the teeth for the gig, I got the teeth for the job, baby, but I'm not a goodfella, I'm a naughty boy."]
Info
Channel: Hoover Institution
Views: 101,136
Keywords: Artificial intelligence, ChatGPT, GPT-4, Large Language Model AI, artificial weapons, Skynet, China, Eliezer Yudkowsky, Reid Hoffman, Crime and Punishment, Pentagon, intelligence, classified, Jack Teixeira, Marjorie Taylor Greene, Daniel Ellsberg, Pentagon Papers, Edward Snowden, Moody Blues, HAL 9000, 2001: A Space Odyssey, Demon Seed, Austin Powers: International Man of Mystery, fembots, Concord, Lexington, Battle of Trenton, marijuana, legalization, 420 Day, Grateful Dead, GoodFellows
Id: k1uojP_rV6E
Length: 60min 5sec (3605 seconds)
Published: Thu Apr 20 2023