Daron Acemoglu on Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity

Welcome back, everybody, to another webinar organized by Princeton for everyone worldwide. We're very happy to have Daron Acemoglu with us from MIT. Hi Daron. Hi Markus, thank you for having me. Great to have you. Daron will talk about power and progress, based on his new book, which just came out together with Simon Johnson, so we will hear about the book today. As usual it's a big book, I think the third in a trilogy, and he will talk about power and progress.

Let me give a few opening remarks. The main idea is that progress is technological advancement, and the question is where the technological advancement comes from. I will just look at a simple production function: is it TFP, total factor productivity; is it labor productivity or capital productivity; or is it the production function itself, the interaction between the two? The big question is, for example, if you increase labor productivity, will this lead to an increase in labor demand, higher wages, and at least some shared prosperity, so that it helps the workers as well and not only the capitalists? That depends very much on the elasticity of substitution between capital and labor. For example, if they are perfect substitutes and labor productivity goes up, workers will benefit a lot, and that leads to shared prosperity. Similarly, with Cobb-Douglas, factor shares are constant: if labor productivity goes up, workers can work less and still get the same income share, so it still benefits them. But if it is a Leontief, perfect-complements structure, then you can actually save on the labor input and still have the same output; these are the isoquants here. There will be fewer workers working and the labor share actually goes down, so in this case it is cost cutting. Whether a productivity increase means shared prosperity or cost cutting, whether it hurts or benefits the workers, depends very much on this elasticity of substitution. Of course, if you have TFP growth, that is beneficial for all of society. In a sense, Marx always thought that with technological progress capitalism would ruin itself, and that was essentially his error: the prosperity was in fact also shared with the rest of society.

What really matters, then, is the production function: the elasticity of substitution between the two inputs, but also the curvature of the production function matters a lot, and I have another little figure to illustrate that. What you get as a worker is the marginal productivity of labor, F prime with respect to labor, the slope of the orange production function here. What the workers get out of total output is essentially F prime times the labor input. F prime is the slope of the tangent line here, and F prime times labor is essentially the height of this little vertical segment: follow the tangent down and multiply its slope by L and you get this vertical segment. The rest goes to the investors. Now let me propose a technological advancement that leads to a much higher production function. This higher production function also leads to an increase in the marginal productivity of labor, and what you can see here is an expansion in output that goes to the workers, because the marginal productivity of labor is now much higher.
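A minimal sketch of the algebra behind these remarks, assuming a standard CES production function with labor-augmenting productivity; the functional form and notation are illustrative and are not taken from the slides:

% Illustrative CES production function; B is labor-augmenting productivity,
% sigma the elasticity of substitution between capital K and labor L.
Y = F(K,L) = \Big[\alpha K^{\frac{\sigma-1}{\sigma}} + (1-\alpha)\,(BL)^{\frac{\sigma-1}{\sigma}}\Big]^{\frac{\sigma}{\sigma-1}}

% Competitive labor income is the marginal product times employment;
% the remainder accrues to capital owners:
wL = F_L(K,L)\,L, \qquad \text{capital income} = Y - F_L(K,L)\,L

% Limiting cases described in the introduction:
\sigma \to \infty:\; Y = \alpha K + (1-\alpha)BL \quad\text{(perfect substitutes: } w=(1-\alpha)B \text{ rises with } B\text{)}
\sigma = 1:\; Y = K^{\alpha}(BL)^{1-\alpha} \quad\text{(Cobb-Douglas: factor shares fixed at } \alpha,\;1-\alpha\text{)}
\sigma \to 0:\; Y = \min\{K,\; BL\} \quad\text{(Leontief: a higher } B \text{ lets firms produce the same } Y \text{ with less labor)}

% Curvature point from the second figure: a shift of F can raise both Y and
% F_L L while lowering Y - F_L L, so the owners who choose which technologies
% to adopt may reject a worker-friendly advance.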
But it also means a decline in what the investors get. So in this case not only do the workers get more because output is higher, they get much, much more, and the investors in total actually get less. What does this mean? If the investors decide whether to go for this productivity enhancement, they will actually not go for it, because they will not accept that the investor share goes down. So in both examples a productivity increase might not end up benefiting the workers: in the first because of substitution, in the second because investors would not adopt it in the first place.

Now, of course, you can say that some productivity enhancements don't show up in wages at all, because they are part of the household production function. If I have to fix something at home, YouTube is very useful for looking things up; there might be better matching of movies, of vacations, finding something on Airbnb, and so forth, and that also raises welfare through the household production function.

But the big question, which I think Daron will address, is who has the power to decide in which direction progress goes. Who is deciding it? To a large extent, big basic research is decided by the military arm of the government; DARPA is deciding. The jet engine, the internet, the computer chip, telecom: all of this was essentially initially driven by military expenditures to enhance fighting capacity. Or is it Steve Jobs, Elon Musk, Sam Altman, or, in the case of vaccines, Uğur Şahin and Özlem Türeci, Turkish immigrants to Germany whose work was critical for the mRNA vaccines? Or is it the capital market, the venture-capital ecosystems, or the large firms? I think Daron will go into that. And of course it often goes in the wrong direction. Think about health and drug development: fewer resources are devoted to developing a new malaria vaccine than to other, arguably less important drugs, and the question is whether those are really more important. There is also the question of the different stages of innovation: there is fundamental innovation and then there is later implementation (putting things together, as Apple did with the iPhone, might be the final implementation stage), and who is deciding at which stage? So we can also think of different stages at which decisions are made. Finally, new innovations lead to changes, and there is the question of the speed of change, which relates to my favorite topic, resilience. If you slow down the speed of change, do you gain some extra time, so that after disruptions you can bounce back more easily, or does it lead to more adverse feedback loops? It is still an open question whether slowing down helps. And does a slowdown really help the good actors or the malicious actors? That is often the argument: if you slow down AI development now, it just helps the malicious actors catch up rather than the good ones.

Let me conclude with the poll questions that Daron put forward, and here are the answers. The first question was: are there strong automatic forces that ensure that most people will ultimately benefit from technological progress and expanding productivity and social capabilities? Is it true, maybe, or false? Most of the audience thought it's true: 45 percent true, 34 percent maybe, and 21 percent false. The second question was more specific: generative artificial intelligence is going in a direction that will spearhead important advances. Do you think this is really something important? 57 percent said yes, 40 percent maybe, and 3 percent said it is not the case. I would say it's definitely the case, but I might be biased. And the third question: the main problem in the tech industry right now is bad regulation that is slowing down the pace of advancement, and lighter regulation would actually improve things. Only 13 percent thought that's true, 36 percent maybe, and 51 percent false. So people think regulators are not really slowing things down, though that might differ from continent to continent. But now over to Daron, who has devoted, I think, three decades of his life to these questions and has put everything together nicely in a book. Thanks again, Daron, for being with us; we're looking forward to your presentation.

Thank you, Markus. It's a pleasure and an honor to be on Markus' Academy to talk about these issues in general and also about the book that I co-wrote with Simon Johnson, so thanks for giving us the opportunity. This is my second Markus' Academy; I was very impressed by Markus's excellent introduction the first time, so this time I'm less impressed because I was expecting it, but it is an excellent introduction still, and your answers to the questions really frame the issues. In fact, what I'm going to try to do is swim against the current and argue that the answer to the first question should not have been "true"; "false" would probably have been my choice, and that is an important part of the conceptual framework and reading of history in the book that I'm going to try to share. And the answer to the second question should have been more "maybe" rather than definitely true that generative AI is taking us in a broadly beneficial direction. In the process, it will also be my pleasure to share some more of the arguments of the book.

I hope you can see this; yes. This is Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, and as the title suggests, we're trying to think about issues of technology and progress, but at the same time put them in a historical perspective, and I'll try to argue why this historical perspective is important. The context, of course, as Markus explained and as the opening questions were also framing, is AI. The advances we're seeing right now are truly impressive, but amid some of the concerns and much of the excitement, I think the questions that are insufficiently asked are: who controls the future, who is really setting the agenda? There isn't a preordained path for AI or any other technology; there are decisions that people make, entrepreneurs, government leaders, people in society in general, about where technology will go, who will benefit from it, who will control machine intelligence, and what it is that we want from machine intelligence. These are of course foundational if we want to view the shared-prosperity question in a broader light.
Will generative AI serve only a technological elite, or underpin broad prosperity? As in every technological transition, there are many choices here; that's one of the first points Simon and I try to emphasize. And when there are choices, there are visions: whose ideas, whose visions, whose ideology is going to be crucial? That is distinct from, but very closely related to, the question of who is going to gain. If you look at previous periods, you'll see a lot of inspiring leaders who were shaping the path of technology. We could have picked many examples, but few of them are as colorful as this gentleman, Ferdinand de Lesseps, who played a major role in some of the most important technological developments of the second half of the 19th century. He was a techno-optimist of his age, and he was also an optimist about how technology, commerce, and progress were intimately linked. That optimism, and his great skills, leadership, and vision, were crucial for his almost single-handed campaign to have the Suez Canal built, which was very successful. Few other people could have managed to bring in the thousands of investors around Europe to finance it, convince the French and Egyptian authorities, and hold a particular vision of how people would make money from this and how it was linked to inter-oceanic shipments. In many of these things he was completely successful in actually spearheading others to do the technological work necessary for the canal digging. At the end of the Suez Canal he emerged as, for lack of a better term, the Elon Musk of his age: everybody saw him as the visionary with a strong interest in spearheading the next level of advances. But it also made him more hubristic than he already was, and he brought the same optimism to the building of the Panama Canal, which turned out to be a complete failure, both because he did not fully understand the conditions he was facing on the ground and because the technological vision he had for the Panama Canal just didn't work. In particular, he was keen on building canals without locks, because locks would slow down traffic, and slowing down traffic was unacceptable to him; but that turned out not to be feasible in Panama. The outcome was financial ruin for thousands of people and, even worse, 22,000 people died trying to build the canal.

Is it always an individual? You emphasized the role of the vision, and vision typically comes from individuals. Or does the vision come more from the interaction of many people?

It does come from the interaction of many people, and that's why Ferdinand de Lesseps is interesting: his case is actually more individual than many other instances. So you can tell the story through this one person, but in general it is actually a collection of people. In today's world, and I'll come back to that, it's not just one person; there is a general culture of Silicon Valley that views progress in a particular light, machine intelligence in a particular light, digital technologies in a particular light, and economic growth in a particular light. So there is some confluence there, and we'll come back to that, but that's a very important point; thanks for bringing it up.

The vision issue is so important precisely because Simon and I argue that the answer to the first question in your poll is more complex than its just being true: there isn't a guarantee that advances that increase output and our capabilities automatically translate into shared prosperity. But in economics, as Markus explained, there are strong reasons for thinking they may do so. If it's going to lead to shared prosperity, we know, as Markus also emphasized, it has to work through the labor market; most of us earn our living in the labor market, so if it doesn't really increase wages or improve conditions in the labor market, it is much harder for a technological advance to lead to shared prosperity. Why do many economists, and here I would also refer to my own book on economic growth from 2007, which communicates this pretty widely throughout its pages, think there is a fairly strong reason to expect this type of shared prosperity from technological advances? Simon and I call it the productivity bandwagon: productivity, like a powerful engine, is going to pull people along with it. The mechanism is easy: first, technology improves, and that raises productivity; as productivity increases, firms rush out to hire more employees, and that pushes up wages, so workers also benefit. Now I'm going to try to argue why the productivity bandwagon is not so secure, and that's why I don't think we should say technological advances will automatically lead to these shared benefits. But before doing that, let me give you two examples of technologies that were revolutionary for their age but don't seem to have generated this type of productivity bandwagon.

The medieval period, sometimes called the Dark Ages, was far from dark; it was full of technological advances that really transformed both agriculture and other sectors of the economy, and one of the iconic ones is the windmill, which improved productivity quite significantly in many activities. You might expect from the productivity bandwagon that these windmills would also improve the conditions of the peasants, but in fact there is no evidence that peasants did any better out of this or many other innovations of the era. Most of the additional agricultural output generated by windmills and things like horseshoes and better crop rotation was captured by a narrow elite, the clergy and nobility, with very little change in the living conditions and wages of peasants. Even more stark is this machine here, the cotton gin, which was revolutionary for the United States. The U.S. South had a variety of cotton that couldn't be cleaned with existing gins, so cotton production was essentially minimal in the United States. With Eli Whitney's cotton gin and some related gins at the end of the 18th century, the Southern economy was completely revolutionized: in a few years the United States became the largest exporter and producer of cotton in the world. But the workers who did that cotton production, the enslaved people of the United States, did not see any benefits. In fact, their conditions worsened, because they were moved onto cotton plantations where work was much harder, and they were moved into the deeper South, where repression was more intense.

The reason why... go ahead, Markus. So these are two examples where there are probably two possible explanations: one is that the technology itself leads to substitution, in a sense, or it could be that the political power, the bargaining power, is shifting. Would you argue that in this case it was more the technology aspect, or was it more the bargaining power between, say, the capitalists and the workers?
Excellent. You've pointed out the two pillars that I was going to be a little more explicit about in a second, but it's good that you ask it at this point. You really need two things for shared prosperity: first, that technology changes in a direction, as you also explicated, that increases workers' marginal productivity; and second, that workers have a chance of benefiting from that higher marginal productivity. In these two examples it was the coercive structure of the economy that ensured that employers would get most of the benefits and could change the structure of production, as they did in the U.S. South, even to the detriment of the workers who were becoming more productive. Now I'm coming to the other aspect, the nature of technology, and then I'll pull the two together.

The same institutional issues, whether workers benefit from increases in productivity, because of bargaining, because of coercion, or other factors, are relevant in the Industrial Revolution as well, but another aspect is very visible there: the nature of the technological changes was very anti-labor. In particular, the early stages of the Industrial Revolution are characterized by the factory system and textile machinery that automated and mechanized work that skilled artisans used to perform. The result was a lot of automation of human work, and workers were also put into factories where there was much closer, tighter monitoring of what they were doing. The combination of these two things was that the marginal productivity of workers did not actually increase despite the new technological advances being made, and this was a very important part of the reason why, for roughly the first hundred years of the Industrial Revolution, you don't see much increase in wages, while at the same time working conditions were worsening. By the way, that wasn't all that was going on in the Industrial Revolution. The factory system was also a revolution in how discipline was imposed. The spirit of that is captured by this contraption here, famous from Michel Foucault's philosophy and the Guardians of the Galaxy movies: the panopticon of Jeremy Bentham. It was never actually implemented, even though Bentham thought it would be very easy to put into practice in hospitals, schools, prisons, and factories. But no matter: many employers organized production in factory systems that intensified monitoring, reduced autonomy, and lengthened working hours, which was another important part of the reason why you cannot talk of shared prosperity in the first half of the nineteenth century. And the other reason was the way in which public infrastructure was completely unready for the very large number of people who were packed into cities: pollution and infectious diseases became rampant, and life expectancy may have declined to as little as 30 years at birth in British cities at the beginning of the Industrial Revolution.

All of this may appear like history, and we should learn from history but also keep a healthy distance from it, meaning that we are fortunate to live in a post-industrial world in which things are much better, leaders are much more enlightened, and science plays a much bigger role. Yes, all of those are true, and indeed modern times are different, and we see some of that difference very visibly if we look at the recent past.
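A stylized way to see why automation can advance technology without raising workers' marginal product, a minimal sketch in the spirit of the task-based framework Acemoglu and Restrepo use elsewhere; the notation is illustrative and not from the talk:

% Output is produced from a unit continuum of tasks x in [0,1]; tasks below the
% automation threshold m are performed by capital, tasks above by labor.
Y = \Big( \int_0^1 y(x)^{\frac{\sigma-1}{\sigma}}\,dx \Big)^{\frac{\sigma}{\sigma-1}},
\qquad y(x) = \begin{cases} \text{produced by capital} & x \le m \;(\text{automated})\\ \text{produced by labor} & x > m \end{cases}

% Automation is an increase in m, with two opposing effects on labor demand:
%   displacement effect: labor loses the tasks in (m_old, m_new];
%   productivity effect: cost savings raise output and demand in remaining labor tasks.
% Shared prosperity requires the productivity effect, plus the creation of new
% labor-intensive tasks, to outweigh displacement; nothing guarantees this,
% which is the sense in which the productivity bandwagon is not automatic.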
But there are also some strange parallels, and one way of seeing that...

Can I ask another quick question? Would you, in the nineteenth century, have said we should prevent these innovations, or would you say we have to go through this valley, because if you look back...

Great question; you're several steps ahead of me, Markus. I'm going to come back to that, but since you raised it now, let me answer it, because it is a very, very important question, and part of the reason, and perhaps this should be even clearer in the book, part of the reason we wrote the book, is that this is the wrong question. Unfortunately, that is the way many economists frame it: you're either for or against technology. Think of technology as a river. The question is framed as: are you going to let the river flow, or are you going to put up a dam and block it completely? If it comes to that, you might say, well, look, we've created a lot of casualties along the way, but blocking it would have been even worse; if you had wanted to block it, the cost would have been a lot higher, so that would have been the worst choice. No: we think that's the wrong choice. There are ways of redirecting the river, and if you redirect it you may get many or most of the benefits, or sometimes even more of the benefits, without some of the costs. In other words, we did not have to go through the savagery of the early Industrial Revolution; we did not have to send children as young as five and six into coal mines to work 18 hours a day and push carts with their heads until they had brain damage in order to benefit from industrial technology. Nor do we need to singularly follow an agenda of automating everything and sidelining humans in order to get the benefits. That, I think, is the most important message we want to convey, and that's the sense in which progress is not automatic: you can create damage, and that damage can be very long-lasting, or you can try to reshape the path of technology to some extent, as well as institutions, in order to get the better out of it.

I'm going to try to articulate why that is particularly relevant today, in the age of AI, but it is also very relevant for post-war growth, and that's what this picture is meant to start the conversation about. It depicts the evolution of real wages of ten demographic groups distinguished by gender and education, all the way from workers with less than high school, in orange, to workers with a postgraduate degree, in dark blue, everything normalized to zero so that you can trace the growth of real wages. What you see here are the two faces of the coin of post-war industrialized-nation growth. First you have this phase of very shared prosperity; here I'm showing it from 1963 onward, and you can see all ten curves are essentially on top of each other, indicating that they are all growing at more or less the same rate, an impressive rate of about two and a half percent per year in real terms. If I extended this picture to the 1950s and all the way back to 1949, using different data sources and somewhat different educational categories, you would see that low-education workers were in fact experiencing faster real-wage growth than some of the higher-education groups. So that is the shared prosperity. But then, from 1980, you see something quite different, a sea change relative to what was going on in the three decades following World War II, and a little more reminiscent of the non-shared prosperity of the early Industrial Revolution. You see the gap between these curves opening up, but even more strikingly, and very unusually for modern industrialized growth, the real wages of low-education groups are actually declining. Look at what is happening to high school graduate men, in green, and high school dropout men, in orange. This is a very extreme form of non-shared prosperity.

Of course, to throw a little bit of sand into it: it doesn't need to be due to technology. We also brought hundreds of millions of Chinese workers into the global labor force, so there are a lot of globalization effects as well, pushing in a similar direction, and one has to disentangle those.

Absolutely, and globalization in general is very important. If we go a little more technical, you also want to distinguish the offshoring aspect of globalization from final-goods trade; both of them have mattered, as well as technology, and in some of my work with Pascual Restrepo we try to unpack those. Going a bit more technical again: the force I'm going to argue for more explicitly in a second, which is very much related to this, is automation, and automation and offshoring act in tandem. Goods-trade globalization also creates tensions, but it does not create the same degree of distributional effects on employment. It has had very large impacts in the United States, as documented by the seminal work of Autor, Dorn, and Hanson, for example, as well as some of the work I've done, but, as I'll show you in a second, the role of offshoring and automation has been much larger, and that's because of the bias that comes from things that eliminate labor from the production process.

And a question I have: here you see that the less-educated people suffered more than the well-educated. What do you expect with AI? Will it reverse this, because it affects well-educated people much more than factory workers?

On that question I'm going to hold off; that's another excellent question, but for that one I really want to hold off. It is definitely the first-order question, but I have three slides on it, so I'll come to it in a second. No, no, it's good that you're asking these excellent questions. Let me also point out that this is not just a U.S. phenomenon. The specific way in which this happened, the way globalization played out, and the fact that it occurred against a background of weak institutional constraints on wages via unions, institutions, and social norms, was unique to the United States, but the increase in inequality and the stagnation of the real earnings of low-education workers is very common across the industrialized world. For example, the omnibus measure of inequality, the Gini coefficient, increases pretty much everywhere in the OECD, even though the U.S. was already ahead in terms of inequality and has had one of the largest increases. So I think we need a holistic interpretation of how growth was shared in the decades I showed you the data for, starting broadly early in the twentieth century, and of why that shared-prosperity model broke down. And that is exactly the point Markus already anticipated: we really need to understand the pillars that support the productivity bandwagon. I think you can see them in the auto industry in the very early decades of the twentieth century, and I would say the keys are new tasks and worker power.
This is a picture from the Rouge plant of the Ford Motor Company, which was a pioneer in introducing new machinery, the assembly-line system, decentralized electricity, and much better organization of the factory. These may sound somewhat related to automation, and they are, but that wasn't the only thing Ford did. While he was reorganizing the factory, he also introduced a whole lot of new worker tasks: as electricity was introduced, you also had more machinists, more technical tasks, more design, more repair, better use of human workers in the process, as well as a whole slew of back-office, white-collar tasks that became very widespread in the car industry and in other industries. As a result, from ten or twenty thousand cars at the beginning of the century, the United States was producing millions of cars in the 1930s and 40s, and the number of employees in the car industry increased to over four hundred thousand. So this was the key: as productivity increases, labor demand also rises, because of the new tasks that the production process now entails. But secondly, the car industry was also at the heart of worker organization. Workers demanded better treatment from employers through their organizations; they bargained for higher wages, obviously for better working conditions, but also for more new tasks and the training to go with them. This, for example, is one of the iconic strikes of the period, the sit-down strike of 1937 against GM, which is emblematic of workers becoming more organized and wanting to share in the benefits that the new technologies and new forms of organization were bringing to the industry.

So why did this model break down in the post-1980 United States, for example, where it did break down? Because both of these pillars got emaciated. You see here the car assembly lines of today: industrial robots have increased productivity massively, and I'll emphasize this again, Simon and I are not against automation; we think automation is fantastic, it's great, and it has to be part of the future. But the problem is that when you contrast it with the Ford plant, here you see only automation; you don't see the workers. In the Ford plant you had all of these workers centrally involved in the production process. Here you have the automation but none of the new tasks going with it, and that was too much focus on automation.

Sorry, but the workers might just go into other jobs which are more creative, rather than being on an assembly line.

Absolutely, and I can't communicate that via a figure, but we argue there were not enough of those other tasks in other industries either.

But do you think, again coming back to this, it is more a technology thing, or is it more worker power having influence on the production process?

I think both, but here is one picture that tries to summarize my work with Pascual Restrepo and that emphasizes the importance of technology. Here is the same period we were looking at. Each dot now represents not the ten demographic groups I showed you but more detailed demographic groups distinguished by age, education, ethnicity, and gender; the size of the dot is how large that demographic group is, and the color coding again shows education. On the vertical axis I have the 1980 to 2016 wage change, from the beginning of the inequality period to the more recent period, and the most interesting part, as you can see, is that many of these demographic groups are again below the zero line, so they are actually experiencing real wage declines. The key right-hand-side variable I want to emphasize is automation: on the horizontal axis I'm showing you Pascual's and my estimate of the share of tasks that each demographic group used to perform in 1980 that have since been automated. You see a really strong relationship here; this variable by itself explains somewhere between 50 and 70 percent of the between-group inequality in the United States, and, equally important, you see that all of the groups experiencing significant real wage declines post-1980 are those that have experienced a lot of automation.
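A schematic of the relationship just described, written as an equation; the notation and the exact construction of the exposure measure are illustrative assumptions, not taken from the slide:

% Task displacement of demographic group g: the 1980 employment-weighted share
% of its tasks that have since been automated (schematic construction).
\text{TaskDisplacement}_g = \sum_{i} \omega^{1980}_{g,i}\,\mathbf{1}\{\text{task } i \text{ automated since } 1980\}

% The scatter plots each group's long-run wage change against this measure;
% a simple cross-group regression of the form
\Delta \ln w_g^{\,1980\text{-}2016} = \alpha + \beta\,\text{TaskDisplacement}_g + \varepsilon_g, \qquad \beta < 0
% is the sense in which this single variable accounts for much of the rise in
% between-group wage inequality.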
But it's not just technology. Exactly as Markus emphasized, how we share the gains also changed, and it changed because of institutions and because of visions. The visions have many fathers, but let's single out one of our own, Milton Friedman, who articulated the idea that corporations should just look after their shareholders: "the social responsibility of business is to increase its profits," he prophesied. That was an articulation, a justification, and hence a powerful vision for companies that often squeezed workers harder in order to transfer more value to their shareholders. And the main organization that could have stood against it, the labor movement, got much weaker during that period, both because blue-collar workers became less important and because of institutional changes, for example Reagan's very famous firing of the professional air traffic controllers in the strike of 1981. So both pillars of shared prosperity got much weaker in the post-1980 era.

A different reading of Milton Friedman is also that if you don't have this focus, the CEOs or the important employees in the firm just go for their pet projects, and that's why you need it.

Yes, that would be an alternative reading, and I'm sure both were taking place, especially what we call the Friedman doctrine with the Jensen amendment: Michael Jensen's ideas of taking the free cash flow out, et cetera, were important. But we also argue, and I provide evidence in some of my other work, though let me not get into that, that the idea of taking from workers and giving to shareholders, via automation and via not giving generous wage raises, was an important part of what was going on as well.

So now we come to your question, Markus, and it is central: can the age of AI be different? Simon's and my answer is yes. AI has an unparalleled capability to complement knowledge work; it could increase worker tasks, increase worker productivity, and directly or indirectly help a variety of different skills. But just as with whether the gains of corporations benefit shareholders or workers, just as with what you want from canals or from the Industrial Revolution, visions are going to matter, and ever since the beginning of AI two very different visions have been pulling it in different directions. I think this is really central to recognize, particularly because one of these visions is, I think, much more likely to lead us down a ruinous path, and it is very dominant today. That first vision is what, for lack of a better term, we call the Turing vision: the vision of autonomous machine intelligence that can do, and should do, things better than humans. If you have that vision, it is only natural that you go for automation, because the measure of success for machines is how well they do what humans do, and hence replace humans. Even the phrase "machine intelligence," I think, pushes us in that direction: why should we want machine intelligence? The second vision, which we call machine usefulness, is a very different one, in which AI, like other technologies, should provide tools for humans. This was actually implemented very successfully for quite a while; many of the things we depend on today, like the mouse, hypertext and hyperlinks, even the internet, came out of this vision. For example, Douglas Engelbart's big demo, the "mother of all demos," in which he showed both the mouse (here is the first version of the mouse) and hypertext, was very much an outcome of this vision of using machines to make humans more productive and computers easier for humans to use. The problem is not just one of inequality... go ahead.

Yes, I just want to understand: early innovation was mostly replacing muscles, and now it's about the brain. One could argue that workers have less political power than well-educated people, and that's why we might go for a different outcome now. But it would also be unfair if workers were replaced and their wages pushed down, while for the well-educated class we go for a different strategy because they are better politically connected.

Well, I would say it would be great if we, as knowledge workers, conceived broadly to include not just people like us and journalists but also all sorts of back-office work, simple routine office work, became more powerful and reshaped the path of AI more towards machine usefulness. But going against that, I think the Turing vision and the power of Silicon Valley are strong enough that I wouldn't bet on it, and that's why, again, progress is not automatic.

Let me push you on a particular example: ChatGPT. Is it usefulness or replacement?

It could be either. It could be usefulness if we use ChatGPT as a system for information retrieval, curation, and provision, for better human decision making, or as a complement to human creative work. But it would be replacement if we use it for simple writing tasks, or for other, negative things, if we use it for creating more misinformation and more room for digital ads. I think the path of ChatGPT is as malleable as that of other general-purpose technologies, as I have emphasized, and that is why this is a period of important choice; it's a critical period because we have these choices, with very different implications.

Let me put it this way: for you, ChatGPT might very much be machine usefulness, enhancing your abilities, but for your research assistant it might replace him. So it affects different people differently.

These are very hard classifications: it could be useful to some skills and replace other skills, and some of that is inevitable. But first of all, when I say it could go more in the human-complementary direction, I would say that is true of generative AI in general rather than of ChatGPT. That's more in the weeds, and perhaps it will come up in the Q&A session, but I think the architecture of ChatGPT is already misaligned with more human-complementary tasks; more automation and other things that sideline humans are already baked into ChatGPT, the way it has been developed and the way it has been trained. But let's leave that to the Q&A if you want. The other thing I want to point out is that if you fixate on machine intelligence too much, the main problem, or the only problem, isn't just inequality; you also don't get the productivity benefits you were hoping for. The reason is what Pascual Restrepo and I call "so-so automation": if you use a lot of these machines for automation but they are not actually that good at what they do, or the humans they replace were not that bad, you're not going to get big productivity gains, and you may be leaving aside lots of low-hanging fruit where you could have used them for complementary purposes and made working humans more productive. So the idea that if you question automation you're against progress is completely wrong, because if, alongside automation, we create the new tasks, we actually elevate the productivity of workers, and that could have a lot of benefits for our macroeconomic performance.
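A back-of-the-envelope illustration of the "so-so automation" point; the numbers are purely hypothetical:

% Suppose a task costs w = $1.00 per unit when done by a worker and
% c_M = $0.95 per unit when done by a machine (a "so-so" technology).
\text{cost saving per unit} = \frac{w - c_M}{w} = \frac{1.00 - 0.95}{1.00} = 5\%
% so the productivity gain from automating the task is small, while
\text{labor displaced on that task} = 100\%.
% Small aggregate productivity gains with full displacement: the distributional
% cost arrives without much of a productivity bandwagon to share.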
Now, of course, if you don't believe me, ask ChatGPT. We asked ChatGPT: will generative AI reverse these long-term trends of widening inequality and weaker worker power? And ChatGPT said: perhaps, but probably not; it is not a magic solution; if generative AI is used to replace workers instead of supporting them, it could have negative consequences. We don't know whether ChatGPT read some of our papers and that's why it gave this answer, but on this one we agree with ChatGPT.

The other part of the Industrial Revolution that was problematic was the use of new organizations and new technologies for surveillance, and we are doing that again. We are seeing it both in authoritarian and in democratic countries, and, in a worrying way, it is accompanied by centralized control of data. People draw a big distinction: in the U.S. the government is not controlling your data all that much. But perhaps it is not much better when Facebook and Google centralize the control of data and information than when the Chinese Communist Party does so through its social credit system. In both cases it creates an environment in which worker productivity and worker autonomy are damaged, and it could also have negative consequences for democracy. All of this focus may sound as if it concerns only the developed world, but it has implications for the developing world, because AI, when adopted in the industrialized world, is going to remake the international division of labor. We see that already from the previous round of blue-collar work: as robots and other advanced equipment were adopted in the West, routine work declined not just in the West but in the developing world as well, both in manufacturing and in services. So if AI goes in a direction that is human-replacing, that could have massive distributional effects around the world, not just in the industrialized countries.

Can you come back to the figure? I didn't fully understand it.

It shows the change in routine versus non-routine work in manufacturing and services across these countries, and you see that the emerging world is also having a decline in routine work at the same time as routine work declines in the United States and other industrialized nations.

So let me conclude. I have argued that this path is not inevitable and that the element of choice is so important; let me provide Simon's and my perspective on what we need to do if we want to reshape technology in a way that we think is socially more beneficial. We think it has to start with the right aspirations and the right change in narrative. In terms of aspirations, if we agree on the goal, which is making technology more pro-human, I think that would be a big advance, and it will have to be accompanied by a change in narrative in which the path of technology is neither preordained nor in the hands of just a few very talented entrepreneurs such as Sam Altman and Elon Musk, but something that society at large has to decide. The further we are from blind techno-optimism, the more constructive that debate can be. But just as in the Industrial Revolution, where in the second phase things started going better because of institutional changes, we also need to create countervailing powers. This has to happen at the civil-society level; Ralph Nader was one of the symbols of that: much of the consumer-protection legislation would not have been feasible without the movement that Ralph Nader epitomizes. And we have to come up with new ways of creating worker voice. It is not just the optics, it is the essence: it really is bad when the future of AI is discussed between senators and the CEOs of the major tech companies, with no civil-society organization and no worker voice represented in that debate. I think that is a summary of what is wrong with the future of AI at the moment. What we can do is very much in line with that machine-usefulness vision, or the vision of the hackers and computer innovators of the 1960s and 70s: redirect technological change towards new tasks, greater worker productivity, better information for workers and human decision makers, greater human and worker autonomy, empowering workers and citizens. This is in fact what people like Ted Nelson, and many others like him, thought computers were originally going to do. He had a vision in which computer power would go to the people, away from large corporations, and this wasn't completely misguided, because there was that potential; we just didn't use it. In the same way, social media could have truly democratized information; again, we did not take that path, we took the path of centralized control that was then monetized through activities with negative externalities.

And just to show you that this is not a pipe dream, we can actually redirect technological change, and there is no more current example that illustrates this clearly than energy, where even a small amount of government policy and civil-society engagement completely changed the face of the technology. In the 2000s, even though the climate-change challenge was as clear as it is today, there was no way of transitioning to renewable energy, because solar, wind, and other forms of renewables were an order of magnitude more expensive than fossil fuels; today they are cost-competitive with fossil fuels. With a small amount of support there was a complete redirection of technological change, and what Simon and I argue is that we can, and probably should, do the same for production technologies, and especially AI, today.

A question: was that amount really small? Germany was pushing hard in the mid-2000s; they put billions behind it to get the scale of solar energy up.

Yes, and you see the results of that starting around 2010; it took a really long while. The German demand especially was very important for the Chinese solar producers, which then started exporting.
At first they were not very efficient, and then it really took off: you had both important innovation breakthroughs and economies of scale sometime around 2010.

Can I ask you: the big question is who decides which direction we are going. You mentioned Ralph Nader and others, but what is the political process you envision here? Who is making the decisions at the end of the day?

Who makes the decisions about whether we are a democracy or not? We had a constitution; at some point the people decided, right?

But take a country that is not democratic: who decides whether it becomes democratic?

Well, at the end of the day it is society itself, but sometimes those decisions are delegated, correctly or incorrectly, to some leaders, and that delegation may be part of the problem. Right now the future of AI is decided in a few boardrooms and a few offices in Silicon Valley, but that's because we have delegated that power to them, unwittingly. We generally ask governments to regulate lots of things, we ask the media to oversee what is going on in powerful organizations, we ask academia to shed light on these things, and I think we have all given up our responsibilities: we have become mesmerized by the tech industry and we have let them play the lead role with very little regulation and oversight. The media that is supposed to cover the tech industry has, rather than shedding light on it, been completely mesmerized by it for much of the last 30 years. As a result, the tech industry has gained outsized power over the future of technology, even though that choice has fundamental consequences for people around the world.

But their role, I think, has also been effective: if you just look at the pandemic, the tech advances really helped us overcome it, be it on the technical side, using Zoom and so forth, or on the healthcare side.

Absolutely.

But let me push you: different continents are going in different directions, in a sense. You mentioned the Chinese going for surveillance with artificial intelligence; Europe is regulating a lot, even before things have developed; and the U.S. is not doing so much. Which camp are you in? Are you more in the European camp, if I characterize it as these three different ways of going about the issue?

I would say Europe has the right idea and the bad implementation. Of course, you can be cynical and say, and you will probably be right, that Europe has the right idea because it doesn't have its own tech sector, so it doesn't suffer the consequences of the tech sector losing profits. But I think Europe has the right idea. Look at the GDPR, for example: I think it came at the right time, or too late, but it had exactly the right values and went after exactly the right metrics. Yet it hasn't been super successful.

Why is that? Is it because regulation is bound to fail, or because you cannot regulate AI?

No, not at all. This is a very difficult problem, so you have to experiment with different things, and when you see that something doesn't work, because it creates reactions that enable companies to get around it, then you have to change the regulations. That is where we are failing, because we have lost the regulatory muscle.

But what you see in Europe, as you said, is that there is no tech sector; you don't have the advances in Europe, or much less advancement, and that holds Europe back in the long run. So there is some cost associated with it; it is not just redirecting, it is also holding things back.

To some extent, absolutely. And "no tech sector" is a bit of an exaggeration: the machines for making the most advanced chips are made in the Netherlands, for example, and that, I would say, is much more important than writing a few more pieces of game software. So it is not that Europe is a complete backwater. But you're right, and the way I would put it, Markus, is that Silicon Valley is an amazing environment; we should be completely grateful for the risk-taking, the energy, the creativity that Silicon Valley has brought over the last several decades, and the United States has benefited from that. But to presume that Silicon Valley has everything right and should be completely unregulated does not follow from that. We want to use that energy, that risk-taking, but in a set of directions that are more in line with society's priorities. That is exactly the idea of redirecting the river, not blocking it.

Let me raise a different aspect. One philosophy is that if you don't have enough disruption going on, society has a tendency not to go for change, and then there is entrenchment: certain powers that are entrenched will not be disrupted, and hence you occasionally need changes and disruptions in order to get rid of past entrenchment. You're not arguing against that exactly; you just want to redirect the new changes in a particular direction?

Well, first of all, the perspective you have articulated can have a weak version or a strong version. The weak version is that we need some amount of creative destruction; I completely agree with that. But a strong version, that we should celebrate disruption, or that the sort of disruptive change we are seeing today is inevitable, I won't agree with. First of all, under the name of disruptive change we have empowered the largest and most powerful corporations humanity has ever seen. Silicon Valley says it is celebrating disruption, and at the same time Google, Facebook, Microsoft, and Amazon acquire countless companies that could be competitors to them, and in many cases they sideline their technologies completely. What kind of disruption is that? And moreover, what type of disruption is it? When the disruption is that there is a monopoly and I replace it, that is great. When the disruption is that thousands of workers are thrown out of their jobs and you create a lot of poverty, that is a different type of disruption, and I think we should use different bars for when that sort of disruption is acceptable and when it is not. We should definitely not oppose all creative destruction as part of the market economy, but getting into a mindset where disruption is in itself a good, I don't think that is warranted.

The way I understand you, you want a multi-faceted approach, but let me ask where you put the emphasis. One thing you could say, just following what you said, is antitrust: we have to do a lot in antitrust regulation. Another is more on labor-market regulation: have worker councils in companies and so forth. There might be different aspects of how you can achieve this. Is there one particular one you would say is the most important, or is it just "let's do everything"?

It's a complex problem; you need a multifaceted approach.
multifaceted approach so uh certainly if the direction of technology is distorted which we argue it is then you need Market incentives and social incentives to redirect it one aspect for example that you know I didn't get that much into but why is it so bad that you know so much information is centralized in the hands of Facebook Google and a few other companies well it's related to how they're monetizing it so if they're monetizing it in a way that's misleading or it creates lots of negative social effects for example when it comes to political participation people's uh control of their own data and so on that that has negative effects and one of the policies that we argue in addition to the ones that you've mentioned are digital ad taxes the the current model where everything is scale scale scale and then you monetize it by making people more and more engaged so that you can send them more ads that really reduces the room for new business models to emerge and so digital that taxes are very valuable I think in an environment like that so that you know you first of all discourage the worst users of data and second you create room for other business models same thing Anti-Trust I think would be for the same reason you try to encourage other business models but also you may want to create direct incentives for competitions prizes X prizes government prizes for finding more human complementary ways of using AI we go through several examples in the book where I think there's a lot of progress potential like nurses you can have much better Services diagnosis cure prescription by nurses just like you know some of that is being done by licensed practitioners or nurses today in the United States if you use AI Technologies and better training of nurses same thing for teachers and much more individualized education programs or creative artists for even blue color workers so again I think we have to encourage that but I think an important part of this process is worker voice as well so I think work counsels as in Germany as long as they can be designed in a way that's not too conflictual workers working for reshaping work in a more human Direction rather than like some ideal of Communist Revolution I think uh that would be much more conducive to to to the side of a better future for technology but that requires the right sort of institutional embedding in which you have worker power and worker voice but not like a Mortal Combat between management and the labor movement so I think there it's multifaceted two more questions if I may so of course one is uh if we look at the large language models in particular you know the actual pyramid estimations and all that is all secret we don't know the actual source of open eyes uh GPT uh how much would you make this transparent in order to win and have more entry essentially in this market or would you say if you make it too transparent then people can abuse it and hence there's a need to keep it in transparent and because otherwise there will be abuse of this uh Technologies so how do you deal with the transparency issue in this Dimension which is a big issue in my view well I think uh the way that I think about this is that transparency would be great but I think it's not going to be possible the way that the architecture of generative AI is going full transparency is not going to be feasible but I think there are important things that we can do and that relates to the throwaway comment I made earlier on that the current architecture of gpt4 or chat GPT is not 
I think what you really need to do is to create what you might call greater legibility, greater understanding for humans: why a particular recommendation is being made, and how they can use the program. That is different from transparency, but very closely related. For instance, if a program makes a recommendation to me, "go left," and gives no explanation, there are only two things I can do: ignore it or follow it. Either way, my cognition is not improved. And if what you really want is to augment my skill and my expertise, to make me a better worker and a better decision-maker, neither of those two paths is that useful. What you really need is to explain to me: go left because of this. Then I can ask where that information comes from, whether it's reliable, how much I should trust it, and how much of my prior I should override because of the recommendation. So that requires, (a), that I can interrogate the system; (b), a reliability score, some measure of uncertainty; and (c), much higher-quality data going into these programs. Right now, one aspect of the transparency problem is: how much of ChatGPT is coming from Reddit, and how much is coming from Wikipedia? Wikipedia I respect more than Reddit. Are some of the things on Reddit, the biases, racism, extremism, crazy conspiracy theories, seeping into it? I don't know that, so how do I decide? All of those things, I think, are what we need for more human-usable AI.

Sorry, I mean, this European initiative of course has this explainability requirement, but if you interpret it too strictly, then for any large language model, people themselves don't know what's inside; OpenAI themselves don't. So you would essentially exclude it, like Italy outlawing it?

No, no, I'm not encouraging outlawing it. On the other hand, I would say that the fact that ChatGPT was made available to people without any guardrails was also not the peak of responsibility. What will ChatGPT do to the education system? We have no idea. What will it do to disinformation and misinformation? We have no idea. So, going back to what you said at the beginning, I think we have to try it out, perhaps on a smaller scale. What's the rush? For these complex technologies with many, many unforeseen consequences, I think you have to experiment. No large-scale organization would change all of its technologies overnight, all at once, would it? So why are we imposing that on society at large? So, experimentation: let's try ChatGPT in a few schools and see what it does; let's try GPT-4, or whatever your favorite large language model is, on a few tasks and see what it does and how it's misused. But that's not the business model, because hype is so important. The market incentives say: we're replacing Google, we're going to incorporate GPT-4 into Bard, sorry, into Bing, and take over the search market. Is that the right future we want? I think those are the questions we should be asking. I'm not pretending that I know the answers, but the fact that we're not actually asking these questions is worrisome.
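To make the legibility idea above a bit more concrete, here is a minimal, purely illustrative Python sketch. None of the class or field names come from the book or from any real system; they are assumptions chosen only to show the shape of a recommendation that a human can interrogate, with a rationale, its sources, and a reliability score, rather than a bare instruction to accept or ignore.

```python
# Hypothetical sketch of a "legible" recommendation: instead of a bare "go left",
# the system exposes its rationale, the sources it relied on, and a reliability
# score, so a human expert can decide how far to update their own prior.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    source: str      # where the supporting information comes from
    quality: float   # 0..1, how much the system trusts this source


@dataclass
class LegibleRecommendation:
    action: str                                   # the bare recommendation
    rationale: str                                # why it is being made
    evidence: List[Evidence] = field(default_factory=list)
    confidence: float = 0.5                       # calibrated reliability score, 0..1

    def interrogate(self) -> str:
        """Let the user ask 'why?' and see sources plus uncertainty."""
        lines = [
            f"Recommendation: {self.action} (confidence {self.confidence:.0%})",
            f"Because: {self.rationale}",
            "Based on:",
        ]
        lines += [f"  - {e.source} (source quality {e.quality:.0%})" for e in self.evidence]
        return "\n".join(lines)


# Usage: the expert can weigh the advice against their own judgment.
rec = LegibleRecommendation(
    action="go left",
    rationale="historical traffic data shows roughly 20% shorter travel time on the left route",
    evidence=[Evidence("curated traffic dataset", 0.9),
              Evidence("user-generated reports", 0.4)],
    confidence=0.7,
)
print(rec.interrogate())
```

The point of the sketch is only that the interrogation, the reliability score, and the data provenance are first-class parts of the interface, which is what distinguishes this from full transparency about model internals.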
Let me throw one more question at you. It seems that for all future technologies a huge amount of upfront investment is needed; look, for example, at large language models, but also at other examples. Is it then justified that we have this concentration of wealth in certain companies, because these are the only ones that can actually foot the initial investment?

Well, again, that's a great question, Marcus, thank you, and I'm not sure we need them. That's, again, the way the market has gone. We said: we're going to build these huge foundation models and train them, with hundreds of billions of parameters, on the entire internet. But an alternative is: no, actually, we build smaller-scale models that are more domain-specific. If we had done that, it would be much more of an open field for many, many companies to enter, and some companies have entered and are developing domain-specific things. The fact that you create a foundation model, then you create this hype around it, everybody starts using it, and then you block other companies, that's a particular market organization that we went for, and I'm not sure it's the only one we should go for. You're right: I am very worried that we are going towards a duopoly of foundation models with Google and Microsoft, and that would be very bad for competition and very bad for the direction of technology.

There are now several, I mean, including Princeton, developing their own smaller foundation models in order to make them intelligible, to understand better what's going on.

I think that's probably the big thing, and I guess other universities will follow on this too.

But typically we have the tradition of ending on a positive note, and I want you to end on a positive note: with all these advancements, do you think we can direct the river in the right direction?

Well, look, let me give you the positive one and the negative one.

Let's do the negative first and then end with the positive, okay?

So the negative: let me say it's both a technical and a social question, and the social one is the negative. I think we really require a different mindset, a different way in which we approach Silicon Valley, a different regulatory environment, and there it's a big hill to climb. Technically, I think it's very feasible. Look at the renewable energy space; look at HIV/AIDS, where a small amount of pushback from society led to a major redirection of technological change. And technically, generative AI, exactly as you said, Marcus, is a very flexible technology, so I don't see any barriers to developing it in a very human-complementary direction. So technically it's very feasible, and that's why I think this is an urgent time to have this conversation. People are talking about existential risk, which may or may not be relevant for the 22nd century; I don't think it's going to be relevant for the 21st century. I think we should be talking about the current choices we're making about the future of technology, which are going to have tremendous effects on inequality, on who controls information, who controls data, and on what happens to democracy in the next decade.

Very good, so let's do that, and I think you will be pushing for that and making a big impact on this with your book with Simon Johnson. Thanks again, Daron, it's great to have you on the webinar, and we'll stay in touch.

Thank you for doing it, for inviting me; this was a lot of fun. Thanks a lot, bye-bye everybody.

And thanks to everybody for joining us, and hope to see you soon again. Thank you, thanks.
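As an aside on the "smaller, domain-specific models" alternative discussed in the exchange above, here is a minimal, hypothetical sketch of what that path can look like in practice. It assumes the Hugging Face transformers and datasets libraries; the model name and the public IMDB dataset are only stand-ins for a genuinely domain-specific corpus (say, nursing triage notes), not anything proposed in the book.

```python
# Sketch: fine-tune a small pretrained model on a domain-specific task,
# instead of relying on a web-scale foundation model.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"          # small, openly available model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Public dataset used here purely as a placeholder for a domain corpus.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="domain_model",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model,
                  args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
                  eval_dataset=tokenized["test"].select(range(500)))
trainer.train()
```

A model at this scale can be trained, inspected, and audited by a university lab or a mid-sized firm, which is the kind of competitive entry and intelligibility the discussion above points to.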
Info
Channel: Markus' Academy
Views: 9,925
Keywords: Finance, Princeton University, Macroeconomics, Economics, Inflation, Monetary Policy, Webinar, Debt, Markus Brunnermeier, Atif Mian, Princeton, Master In Finance, Social Capital, blockchain, economic outlook, keynesian economics, princeton economics, markus academy, princeton bendheim center for finance
Id: eGqaOhTq060
Length: 73min 9sec (4389 seconds)
Published: Wed Jun 14 2023