Andrew Ng on AI's Potential Effect on the Labor Force | WSJ

Captions
To sum it up to start: what would you say are going to be the biggest positive and negative impacts on the workforce from AI over the next, say, five years?

I think it will be a massive productivity boost for existing job roles, and it will create many new job roles. I don't want to pretend that there will be no job loss — there will be some — but I think it may not be as bad as people fear right now. I know that we're having an important societal discussion about AI's impact on jobs. From a business perspective, I find it even more useful not to think about AI automating jobs, but instead to think about AI automating tasks. It turns out that most jobs can be thought of as a bundle of tasks. When I work with large companies, many CEOs will come and say, "Hey Andrew, I have 50,000 or 100,000 employees. What are all my people actually doing?" It turns out none of us really knows in detail what our workforces are doing. But we've found that if you look at the jobs and break them down into tasks, then analyzing individual tasks for their potential for AI automation or augmentation often leads to interesting opportunities to use AI. One concrete example: radiologists. We've talked about AI maybe automating some part of radiology, but it turns out radiologists do many tasks. They read X-rays, but they also do patient intake, gather patient histories, consult with patients, mentor younger doctors, operate the machines, maintain the machines — they do many different tasks. And we've found that when we go into businesses and do this task-based analysis, it often surfaces interesting opportunities. Regarding the job question: it turns out that for many jobs, if AI automates, say, 20 or 30 percent of the tasks in the job, then the job is actually decently safe. What will happen is not that AI will replace people, but that people who use AI will replace people who don't.

What are the types of tasks that you're seeing? And, if you can say, which professions do you think have the highest concentration of those types of tasks?

Some job roles are really being disrupted right now. Call center operations and customer support is one — it feels like tons of companies are using AI to nearly automate that, or automate a large fraction of it. Sales operations and sales back office — those routine tasks are being automated. And a bunch of others: we see different teams trying to automate some of the lower-level legal work, some of the lower-level marketing work, and so on. But I would say the two biggest I'm seeing are customer service and some sort of sales operations. I think there are a lot of opportunities out there.

How do you think it's going to change the role of CIOs, like the folks in this room?

I think it's an exciting time to be a CIO. One thing that my team, AI Fund, does is we often work with large corporations to identify and then execute on AI projects.
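A minimal sketch of the task-based analysis described above, with a hypothetical task list and made-up automation-potential scores purely for illustration (this is not AI Fund's actual methodology):

```python
# Hypothetical sketch: break a job into tasks and score each task's
# AI automation/augmentation potential, then summarize exposure.
# Task list, hours, and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float   # how much of the job this task occupies
    ai_potential: float     # 0.0 (hard to automate) .. 1.0 (highly automatable)

radiologist = [
    Task("Read X-rays", 15, 0.7),
    Task("Patient intake and histories", 6, 0.4),
    Task("Consult with patients", 8, 0.1),
    Task("Mentor younger doctors", 4, 0.05),
    Task("Operate and maintain machines", 7, 0.2),
]

def ai_exposure(tasks):
    """Fraction of weekly hours with high AI automation/augmentation potential."""
    total = sum(t.hours_per_week for t in tasks)
    exposed = sum(t.hours_per_week for t in tasks if t.ai_potential >= 0.5)
    return exposed / total

if __name__ == "__main__":
    for t in sorted(radiologist, key=lambda t: t.ai_potential, reverse=True):
        print(f"{t.name:35s} potential={t.ai_potential:.2f}")
    print(f"Share of hours exposed to AI: {ai_exposure(radiologist):.0%}")
```

Ranking tasks this way turns the job-level question into a list of candidate automation or augmentation projects, which feeds the prioritization exercise described next.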
Over the last week I spent almost half-day sessions with two Fortune 500 companies, where I got to hear from their technology leadership about the use cases they are pursuing, and some patterns are clear. Every time we spend a little bit of time brainstorming AI projects, there are always way more promising ideas than anyone has the resources to execute, so it becomes a fascinating prioritization exercise just to decide what to do. One of the decisions I'm seeing is that after you do a task-based analysis to identify tasks and jobs for ideas — or after your team learns generative AI and brainstorms ideas — and after you prioritize, there's the usual buy-versus-build decision. It turns out we seem to be in an opportunity-rich environment. What AI Fund often winds up doing is this: the company will say, "These projects I want to keep close to my heart; I will pay for 100% of this." But there are so many other ideas that you just don't want to pay for the development of entirely by yourself, and we then help our corporate partners build those outside, so you can still get the capability without having to pay for it entirely yourself. I find that at AI Fund we see so many startup and project ideas that we wind up having to use task management software just to keep track of them, because no one on our team can keep hundreds of ideas straight, and we have to prioritize them. Very exciting.

So you ask the generative AI to keep track of all the tasks that generative AI can do?

We actually use Asana to keep track of all the different ideas. Usually I summarize it.

You've talked for years about the importance of lifelong learning, and the enhanced importance of that in the AI world. Is it realistic to think that people will be able to reskill, to educate themselves, at a pace that keeps up with the development of this technology? Not necessarily the folks in this room, but the people whose tasks are being automated. How big of an issue do you think the displacement is going to be? Because when we talk about technology and jobs, we always talk about the long run — look, we used to all be farmers, now we're not; in the long run it will be fine. But in the meantime, there's a lot of dislocation.

Yeah. Let me answer honestly: I think it is realistic, but I'm a little bit nervous about it, and I think it is up to all of us collectively, in leadership roles, to make sure that we manage this well. One thing I'm seeing: in the last wave of tech innovation — when deep learning, predictive AI, labeling technology, whatever you call it, started to work really well about 10 years ago — it tended to be the routine, repetitive tasks, like factory automation, that we could automate. With generative AI, it seems to be more of the knowledge workers' work that AI can now automate or augment. And to the reskilling point, I think almost all knowledge workers today can get a productivity boost by using generative AI, pretty much right away. But the challenge is that there is reskilling needed — we've all seen the stories about a lawyer generating hallucinated court citations and then getting in trouble with the judge.
So I feel like people need just a little bit of training to use AI responsibly and safely. But with just a little bit of training, almost all knowledge workers — including all the way up to the C-suite — can get a productivity boost right away. It is an exciting but frankly daunting challenge to think about how we help all of these knowledge workers gain those skills.

That problem you alluded to — the hallucination problem, the accuracy concern — is that fixable with AI? Or is it more that we just have to learn to use it the right way and assume an error rate?

I myself do not see a path to solving hallucinations and making AI never hallucinate, in the same way that I don't see a path to solving the problem that humans sometimes make mistakes. But we've figured out how to work with humans and for humans, and it seems to go okay most of the time. Because generative AI burst onto the scene so suddenly, a lot of people have not yet gotten used to the workflows and processes for working with it safely and responsibly. I know that when an AI makes a mistake, sometimes it goes viral on social media or draws a lot of attention. But I think it's probably not as bad as the widespread perception. Yes, AI makes mistakes, but plenty of businesses are figuring out, despite some baseline error rate, how to deploy it safely and responsibly. I'm not saying it's never a blocker to getting things deployed, but I'm seeing tons of stuff deployed in very useful ways. Just don't use generative AI to render a medical diagnosis and output directly what drug to tell a patient to take — that would be really irresponsible. But there are lots of other use cases where it seems very responsible and safe to use.

Do you think improvements in error rates will increase the use cases? Right now, maybe we'll never get to a point — or not in the foreseeable future — where you want the AI doctor to directly prescribe. But are there other cases that are not optimal now, because we're still figuring out error rates, that will become more usable over time?

Yeah, it's been exciting to see how AI technology improves month over month. I think today we have much better tools for guarding against hallucinations compared to, say, six months ago. Just one example: if you ask the AI to use retrieval augmented generation — so don't just generate text, but ground it in a specific trusted article and give a citation — that reduces hallucinations. And further, if the AI generates something you really want to be right, it turns out you can ask the AI to check its own work: "Dear AI, look at this thing you just wrote, look at this trusted source, read both carefully and tell me if everything is justified based on the trusted source." This won't squash hallucinations completely to zero, but it will massively reduce them compared to just asking the AI to say whatever it had on its mind. So I think hallucination is an issue, but it's not as bad an issue as people fear it to be right now.
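A minimal sketch of the two techniques described above: retrieval-augmented generation (grounding the answer in a trusted source and asking for a citation), followed by asking the model to check its own output against that source. The `chat()` helper, article text, and prompts are placeholders, not any specific product's API:

```python
# Hedged sketch of RAG plus self-checking, to reduce (not eliminate) hallucinations.
# `chat()` is a placeholder for whatever chat-completion API or local model you use.

def chat(prompt: str) -> str:
    """Stand-in for a single LLM call; wire this to your provider or local model."""
    raise NotImplementedError

TRUSTED_SOURCE = "...full text of a trusted, vetted article goes here..."
QUESTION = "What does the article say about the topic of interest?"

# Step 1: retrieval-augmented generation -- answer only from the trusted source,
# and quote the passage relied on as a citation.
draft = chat(
    "Answer the question using ONLY the article below. "
    "Quote the sentence(s) you relied on. If the article does not contain "
    "the answer, say so instead of guessing.\n\n"
    f"ARTICLE:\n{TRUSTED_SOURCE}\n\nQUESTION: {QUESTION}"
)

# Step 2: ask the model to check its own work against the same trusted source.
verdict = chat(
    "Below are a draft answer and the trusted article it was based on. "
    "Read both carefully and state whether every claim in the draft is "
    "justified by the article. List any unsupported claims.\n\n"
    f"DRAFT:\n{draft}\n\nARTICLE:\n{TRUSTED_SOURCE}"
)

print(draft)
print(verdict)
```

As noted in the interview, grounding plus self-verification does not drive hallucinations to zero, but it substantially reduces them compared to free-form generation, so a human review step still makes sense for high-stakes output.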
You've been involved in AI for decades, and the technology has been through multiple hype cycles, declines, and AI winters. What do you think is different about this moment — the last 15 months or so since the boom of generative AI? Is this more lasting?

So I feel like, compared to 10 or 15 years ago, we've not really had another AI winter; it's been growing in value. Years back I used to lead the Google Brain team, which helped bring deep learning to the whole of Google, and the fundamental economics are very strong. Using deep learning to drive online advertising is maybe not the most inspiring thing I've worked on, but the economic fundamentals have been really strong for 10-ish-plus years now. And I feel like the fundamentals of generative AI also look quite strong, in the sense that we can automate and augment a lot of tasks and drive a lot of very fundamental business efficiency. Now, there is one question — I think Sequoia posted an interesting article asking: over the last year we collectively invested, I don't know, maybe something like $50 billion in capital infrastructure, buying GPUs and data centers, and we had better figure out the applications to make sure that pays off. I don't think we over-invested. But to me, whenever there's a new wave of technology, almost all the attention is on the technology layer. We all want to talk about what Google and OpenAI and Microsoft and Amazon and so on are doing, because it's fun to talk about the technology, and there's nothing wrong with that. But it turns out that for every wave of technology, for the tool builders like these companies to be successful, there's another layer that had better be even more successful, which is the applications built on top of those tools — because the applications had better generate even more revenue, so that they can afford to pay the tool builders. And for whatever reason — societal interest or whatever — the applications tend to get less attention than the tool builders. But I think for many of you, in organizations that are not trying to be the next large language model or foundation model provider, as we look many years into the future there will be more revenue generated — at least, there had better be — in the applications you might build than in the tool builders.

That gets to a question I find fascinating about this: what is the effect on the power dynamics in the tech industry, and in the economy more broadly? To what degree is this a disruptive technology that will bring in a new wave of companies that, 10 years from now, will be big even though we hadn't heard of them two years ago? And to what degree is it just going to make Microsoft and Amazon and Google, et cetera, more powerful than they've ever been before?

So I think the cloud businesses are decently positioned, because it turns out that AWS, Azure, GCP — those are beautiful businesses. They generate so much efficiency that even though I may have a huge bill I need to pay them, I don't mind paying it, because it's much better than the alternative most of the time. But they are also very profitable businesses.
And it turns out that if you look at some of the generative AI startups today, the switching cost of using one startup's API versus switching to AWS or Azure or GCP, Google Cloud — the switching costs are actually still quite low. So I'm not quite sure how strong the moat of a lot of the generative AI startups is. But the cloud businesses have a very high surface area — frankly, once you build a deep tech stack on one of the clouds, it's really painful to move off of it if you didn't design for multi-cloud from day one — which is part of what makes cloud such a beautiful business model. So I think a lot of the cloud businesses will do okay selling API calls and integrating this with the rest of their existing cloud offerings.

And then the dynamics around Meta are very interesting. Meta has been really interesting for some of these businesses by releasing open source generative AI software. It was my former team, Google Brain, that released TensorFlow, and it makes logical sense for Meta to worry about that: Meta was really hurt by having to build on the Android and iOS platforms — when Apple changed its privacy policies, that really damaged Meta's business. So it kind of makes logical sense that Meta would be worried: if Google Brain, my old team, released the dominant AI development platform, TensorFlow, and everyone had to build on TensorFlow, what are the implications for Meta's business? Frankly, Meta played its hand beautifully by open-sourcing PyTorch as an alternative. And today, generative AI is very valuable for online advertising and also for user engagement, so it actually makes very strong logical sense that Meta would be quite happy to have an open source platform to build on, to make sure it's not locked into an iOS-like platform in the generative AI era. Fortunately, the good news for almost all of us is that Meta's work, and many other parties' work, on open-sourcing AI gives all of us free tools to build on — building blocks that let us innovate cheaply and build many more exciting applications. Sorry, not sure if that was too inside baseball on tech company maneuvering.

No — I can't answer for everybody out there, but I thought it was fascinating, and I want to come back to open source in a minute. But from the point of view of CIOs and other corporate leaders across the economy, there are lots of options coming at you right now: lots of people trying to sell products, lots of people saying this service will change your business. Part of the job is figuring out what's real and what's chaff. Do you have any advice, in a moment where the technology is fairly nascent and fast-developing, on how to tell apart the people who have real solutions from the snake oil salesmen?

I'll tell you the thing that I think is tough, even for our VC friends — some of them right here on Sand Hill Road. The one thing that's still quite tough is the technical judgment, because AI is evolving so quickly. I've seen really good investors on Sand Hill Road get pitched on some startup.
Sometimes, say, OpenAI or someone has just released a new API, and a startup has built something really cool over a weekend on top of that new API. Unless you know about that new capability and what the startup really did — I have seen VCs come to me and say, "Wow, Andrew, this is so exciting. Look, these three college students built this thing. This is amazing. I want to fund it." And I'll go, "No, I just saw 50 other startups doing the exact same thing." So the technical judgment, because the tech is evolving so quickly — that's the one thing I find difficult. And for AI Fund's work with corporates, we tend to go through a systematic brainstorming process. But I'll just mention one other thing that is probably in many CIOs' interest: we've all seen that when we buy a new solution, we often end up creating another data silo within our organization. I feel like if we're able to work with vendors that let us continue to have full access to our data in a reasonably interchangeable format, that significantly reduces vendor lock-in, so that if you decide to swap one vendor out for a different vendor in a month or two, you can. That's one thing I tend to pay heavy attention to myself: if I buy from a vendor, they'll do stuff with my data, because I want them to — that's why I'm paying them. But I want the transparency and interoperability to make sure that I control my own data, with the ability to have my own team take a look at it, or to swap in a different vendor. This does run counter, candidly, to the interests of all the vendors that want lock-in, but it's one thing I tend to rate higher than some of my colleagues do in my vendor selection and buying process.

It sounds like you see a world where the folks in this room are implementing generative AI through a multiplicity of different providers. It's not going to be, "Yeah, we're with Microsoft"; it's going to be, "We use Microsoft for this, and we use this company over here for that." Is that right?

Yeah, but I would say — it seems like the Microsoft sales reps... Actually, should we do a poll? How many of you have had Microsoft sales reps push Copilot really hard to you? Yeah, I thought so — that's basically everyone, right? So, Microsoft is great — love the team, really capable, Copilot can give a productivity boost — but there's so much stuff out in the market, I think it's worthwhile to take a look at multiple options and then buy the right tool for the right job.

You touched on the hardware costs, the amount that's been invested so far. How concerned are you about the hardware bottleneck — the lack of GPUs, TPUs, whatever one wants to call it — and Nvidia's relative stranglehold over the last year or two? And what do you think of what we reported as Sam Altman's plans to raise potentially trillions of dollars to solve this?

Sam was a student at Stanford way back, so I've known him for years. He's a smart guy; can't argue with the results. I don't know where one finds trillions of dollars — $7 trillion; that lets you buy Apple twice, right? More than that. So it's an interesting figure to try to raise.
I think that in a year or so the semiconductor shortage will feel much better. And I want to give AMD credit — AMD, and maybe Intel too. One of Nvidia's moats has been the CUDA programming language, but AMD's open source alternative, called ROCm, has been making really good progress. Some of my teams will build stuff on AMD hardware, and it is not at parity, but it is also so much better than a year ago. So I think AMD is worth a careful look, and Intel as well — we'll see how the market evolves.

You've mentioned open source several times; I know you're a champion of that. It comes up in the regulatory discussion, and one argument that resonates is: if we have these proprietary models, there's a handful of companies with these big, powerful LLMs that we can focus on and make sure they're doing the right thing, to prevent this technology from being misused. Open source proliferates, and you're talking about not five or ten but 500 or 1,000 or even larger numbers of people who have these tools. How do you know what they're going to do with them, and how do you control it? What's your answer to the people who have that concern about open source?

I think over the last year or so there have been intense lobbying efforts by a number of — usually the bigger — players in generative AI that would rather not have to compete with open source. They invested hundreds of millions of dollars to build a proprietary AI model; boy, isn't it annoying if someone open-sources something similar for free? The level of intensity of the lobbying in DC really took me by surprise. The main argument has been that AI is dangerous, that it might even wipe out humanity, so we should put in place regulatory burdens and licensing requirements: before you build AI, you have to report to the government, maybe even get a license and prove it's safe — basically really heavy regulatory burdens, in my opinion in a false name of safety, that I think would really risk squashing open source. If these lobbying efforts succeed, I think almost all of us in this room will be losers, and there will be a very small number of people who benefit. There's actually a large community here in Silicon Valley, and around the world, that's been actively pushing back against this narrative. For all of you, having open source components to build on, so that you control your own stack, means that a vendor can't deprecate a version — this has happened — leaving your whole software stack needing to be rearchitected, and so on. And then, to answer the safety question: to me, at the heart of it is, do we want more or less intelligence in the world? Until recently, our primary source of intelligence has been human intelligence; now we also have artificial intelligence, or machine intelligence. And yes, intelligence can be used for nefarious purposes, but I think a lot of civilization's progress has come through people getting training and getting smarter and more intelligent.
And I think we're actually much better off with more rather than less intelligence in the world. So I think open source is a very positive contribution. And lastly, as far as I can tell, a lot of the fears of harm — it's not that there are no negative use cases, there are a few — but when I look at it, I think a lot of the fears have been overblown relative to the actual risk.

I want to get to questions in a moment, but just to follow up on that: I interviewed you something like seven years ago at a conference and asked you about those concerns, and I think you said that worrying about evil AI robots is equivalent to worrying about overpopulation on Mars — we're not even there yet. Are we on Mars yet, in this metaphor? Where are we in that progress?

At this point in time, I still feel like that superintelligent singularity is much more science fiction than anything that I, or the AI builders I know, are actually building. I still feel that way.

And you were saying that you've seen less of that type of talk — you were just at Davos — in the regulatory discussions. The "oh my God, we've got to stop this; we're building this thing that's so amazing it might take over humanity" is not as much part of the discussion now?

It's really dying down. Last May there was a statement signed by a bunch of people that made an analogy between AI and nuclear weapons, without justification. AI brings intelligence to the world; nuclear weapons blow up cities. I don't know why they have anything to do with each other, but that analogy created a lot of hype. Fortunately, since May, that degree of alarm has faded. When I speak to people in the US government about AI causing human extinction, I'm very happy to see eye rolls at this point. I think Europe takes it a little more seriously than the US, but I just see the tenor dying down, moving to talk about more concrete harms. We want self-driving cars to be safe, we want medical devices to be safe. So instead of regulating the AI technology itself, let's look at the concrete applications — when we look at a general-purpose technology like AI, it's hard to regulate that without just slowing everything down. But if we look at the concrete applications, we can say what we do and don't want in financial services, what is fair and what is unfair in underwriting, what standards medical devices should meet. Good regulation at the application layer would be a very positive thing that could even unlock innovation. But these vague fears — saying intelligence is dangerous and AI is dangerous — just tend to lead to regulatory capture and lobbyists with very strange agendas.

Do we have any questions in the audience? I see some down here. Can we get a microphone? The gentleman, and then the lady.

Hi. Thank you for all you do for this community — I think your online courses are amazing. All innovation follows some kind of an S-curve, and we're in this rapid acceleration of innovation around generative AI and machine learning. Where do you think the plateau is, and what are the rate limiters that drive us toward the plateau? How much farther can this be pushed before we start to see ourselves hitting a plateau, and what's going to limit that?

Yeah.
So I think large language models are getting harder and harder to scale, but there is still more juice to squeeze there. The exciting thing is that on top of the core innovation of large language models, we're now stacking other S-curves. So even if the first S-curve plateaus, I'm excited, for example, about on-device AI. I run an LLM on my laptop all the time — if you don't yet, it's easier than you might think, and it keeps all your data confidential; with open source AI, you can run it on your laptop. I'm also excited about agents: instead of you prompting the AI and it responding in a few seconds, we now see AI systems where I can tell it, "Dear AI, please do research for me and write a report," and it goes off for half an hour, browses the internet, summarizes those links, and comes back with a report. It's not yet working as well as I just described, but it's working much better now than three months ago. So I'm excited about autonomous AI agents that go off and work for an extended period of time. We saw the unlock of text processing with large language models; with large vision models, which are at a much earlier stage of development, I think we're starting to see a revolution in image processing in the same way that we saw a revolution in text processing. So these are some of the other S-curves being stacked on top, and some are even further out. I'm not seeing an overall plateau in AI yet. Maybe there will be one, but I'm not seeing it yet.
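On the "run an LLM on your laptop" point above, here is a minimal sketch using the llama-cpp-python bindings with an open-weights model downloaded locally in GGUF format. The model path and prompt are placeholders, and any other local runner (Ollama, LM Studio, etc.) would serve the same purpose:

```python
# Minimal sketch: run an open-weights chat model entirely on a laptop,
# so prompts and data never leave the machine.
# Assumes `pip install llama-cpp-python` and a GGUF model file downloaded
# locally (the path below is a placeholder, not a real file).
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,      # context window
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the pros and cons of on-device LLMs."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```

Because the model weights and the prompt both stay on the machine, nothing in this flow leaves the laptop, which is the confidentiality benefit mentioned above; quantized 7–8B models typically run acceptably on a recent laptop.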
Do you have a very quick question? Yeah, okay.

Thank you — it's a great dialogue, and our sophomore at Berkeley spends so much time watching your videos and taking your courses, so thank you again. You mentioned automating tasks, and also human intelligence. The knowledge of those tasks is still owned by the humans. In your dialogues with clients, are you seeing resistance to unpacking the tasks that humans do accurately, so that you can apply AI to them? And if you are seeing resistance, what is the solve for that?

Let's see. I find that when we have a realistic conversation — so, when we work with corporations: at AI Fund we often work with corporations to brainstorm project ideas and figure out what we can help build. As an AI person, I've learned that my swim lane is AI, but there are all these exciting businesses to apply it to that I just don't know anything about, so a core part of our strategy is to work with large corporate partners who are much smarter than I am about the business domains we apply it to. One of the findings is that at the executive level — which is probably who we work with the most day to day — there's not resistance at all, there's just enthusiasm. Maybe one unlock that I found is that I teach a class on Coursera, Generative AI for Everyone — it was the fastest-growing course on Coursera last year — and I did that to try to give business leaders and others a non-technical understanding of AI and what it can and cannot do. We found that when some of our partners take Generative AI for Everyone, that non-technical understanding of AI unlocks a lot of brainstorming and ideation. So at the executive level, people learn about generative AI, brainstorm, and execute lots of exciting projects. And then many businesses are sensitive to broader concerns across the employee base about job loss. I find that when we have a really candid conversation, the fears usually go down. I don't want to pretend there's zero job loss — that's just not true. But when we do the task-based analysis of jobs — hey, if AI automates 20% of my job, to a lot of people that's great: I can be more productive and focus more on the other 80% of tasks. So on average, once we have that more candid conversation — I'm thinking of one time a union stopped us from even installing a single camera, so there is some of that — but most of the time it's a pretty rational and okay conversation.
Info
Channel: WSJ News
Views: 101,636
Keywords: andrew ng, ai, ai tools, ai news, deep learning ai, deeplearning.ai, landing ai, landing.ai, landing ai founder, generative ai, ai effect on jobs, ai effect on workforce, labor force, google brain, deep learning, wsj, wsj interiew, cio council, wsj ceo council, ai hallucinations, machine learning, large language models, artificial intelligence, coursera, computer scientist, tech entrepreneur, techy
Id: -mIjwN1o7nE
Length: 31min 43sec (1903 seconds)
Published: Wed Feb 14 2024