Alexandr Wang: 26-Year-Old Billionaire Powering the AI Industry

Captions
Welcome to the Logan Bartlett Show. On this episode, what you're going to hear is a conversation I have with Alexandr Wang. Alexandr is the co-founder and CEO of Scale, a company most recently valued at $7 billion that helps companies use their data as an input into the development of artificial intelligence models. Alexandr started the company at 19 after dropping out of school, and it's scaled into one of the most important companies in the world of artificial intelligence today. A really interesting conversation with Alexandr about the future of artificial intelligence, including what the risk of catastrophic doom is, as well as his concerns about the potential for artificial intelligence to create further inequality in society. We also talk about his operational lessons, including hiring people that actually care about the problems you're solving. A really fun conversation with one of the world's youngest billionaires; I hope you enjoy hearing it. [Music]

Alex, thanks for doing this.

Of course, thanks for having me.

So there was a phrase that was pretty ubiquitous about a decade ago: that data was the new oil. Can you talk about why you reject that view?

I think there's a lot that the phrase gets right. One framing is this: if you went back two decades, the largest companies in the world were all oil companies. At that point (less the case now), oil and petroleum were the bringers of power and leverage, mostly economic leverage. So the way in which data is the new oil is that it is, by and large, going to be the main lever for economic power and economic influence over the course of the next few decades. The thing the phrase gets wrong is that data is not a commodity in the way oil is; not all data is created equal. Oil, by definition, is this scarce commodity, but data is far richer than that. Data has multitudes: you could have data specific to code, data specific to language, data specific to law, and each of these pieces of data is quite different. So when you think about it strategically, it's a different framework you have to plot. You're not just going around hunting for data wells, trying to mine them up and resell them; you need a thoughtful strategy by which you're stitching together useful, qualitatively different data sources.

What does "data is the new code" mean, and how did that serve as a primitive for the founding of Scale?

The basic concept is: what is the building block that enables the next generation of applications? That building block, undeniably, for the past, let's say, 50 years, has been code. Code has enabled many, many revolutions in technology, most notably the internet and mobile and everything that's happened; code was that fundamental building block. As you peer forward toward the era of AI, in a world where models and algorithms more and more start to be what we interact with, govern the applications we use, and become the core primitive of our technological lives, then data actually becomes the building block. The formative experience for me here: I was in college at MIT right when Google released TensorFlow, in the very early moment when deep learning and large neural networks were starting to become democratized. I remember I used the exact same algorithm to detect facial emotions as to detect whether or not my food had gone missing inside my fridge, and nothing had changed except the data. The code was all the same; the algorithms
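The "only the data changes" point can be sketched in a few lines. This is a toy nearest-centroid classifier, not the actual TensorFlow code from the anecdote; the two datasets and their labels are invented for illustration. The point is that `train` and `predict` are identical for both tasks, and only the examples differ.

```python
# Toy illustration: the same training routine yields two different
# classifiers purely because it is fed different datasets.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for x, y in examples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(model, x):
    """Return the label whose centroid is closest to x."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(x, centroid))
    return min(model, key=lambda y: dist(model[y]))

# Identical code, two tasks: facial emotion vs. "is my food missing".
emotions = [([0.9, 0.1], "happy"), ([0.1, 0.9], "sad")]
fridge   = [([1.0, 0.0], "food_present"), ([0.0, 1.0], "food_missing")]

emotion_model = train(emotions)
fridge_model  = train(fridge)
```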
are all the same; you run the exact same commands in the terminal, and it was just the data that was changing the performance of the algorithm. So the formative insight was basically: if you think about the next, call it, 50 years of technology that's going to be built, what is going to differentiate one application from another? What are the building blocks you're going to compose on top of one another to make an incredibly differentiated thing, or something that delights consumers? That thing was data, which gets at the heart of its importance going forward.

So that's the insight. Can we walk through a specific example of a use case in the early days that got you going on this?

Yeah, so the earliest use case was all autonomous vehicles. Go back to 2016, 2017 in Silicon Valley: probably the mega-trend was autonomous vehicles and self-driving. There were many companies being started, a lot of the automakers were starting their own programs, and there was the GM Cruise acquisition, which was maybe the starting gun for the entire industry. For all of these autonomous vehicles, one requirement to be self-driving is that you can fully see everything on the road: these cars drive down the road and can see, oh, there's a person there, there's a car there, there's a bicyclist there, there's a construction cone over there, this is what the traffic light says; they fully understand the environment around them. To be able to do that, they had to build algorithms that ingested huge amounts of data: basically tons and tons of examples the algorithm could learn from, of the form: in this scenario, this is where all the cars were; in this scenario, this is where all the people were; in this scenario, this is where all the pedestrians were. Then they train off of millions and millions of examples like that to build these robust vehicles. It's kind of come full circle now, because in San Francisco you have self-driving cars driving around everywhere, without drivers in the vehicle, and it's now finally become a reality.

What role did Scale play in that value chain of getting autonomous cars going? Where did you fit in versus where Cruise stopped, or Waymo, or whatever the right example is?

It was specifically in this data refinement stage. The cars would collect huge amounts of data as they drove around: tons of footage, video footage, lidar data, radar data, all the sensor data altogether. But in none of that data were the actual examples marked: this is where a person is, this is where a pedestrian is, this is where a bicyclist is, this is where a car is. So the algorithm had nothing to learn from. What we did is go from raw data to what's called labeled data, or high-quality data for machine learning applications, where all these examples are marked, so that the model can actually learn: in what situations, what does a person look like, what does a pedestrian look like, what does a car look like, etc. One of the things we like to say, while I disagree with the framing of "data is the new oil": if data is the new oil, then Scale is the refinery. We underwent this process by which you convert large amounts of raw data into very high-quality data that can then power your algorithms.

And why was that a problem they wanted to outsource to a third party, rather than bringing it in-house and building that competency themselves?

I think in general, if you look at the overall AI industry, the large-scale building blocks, the large-scale ingredients for it, end up being just such big
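Concretely, "labeled data" for perception is raw sensor frames annotated with the objects a model should learn to find. A hedged sketch follows; the field names and values are illustrative, not Scale's actual annotation schema.

```python
# Illustrative shape of a labeled perception example: one camera frame
# annotated with bounding boxes for the objects a model must detect.
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    label: str      # e.g. "pedestrian", "car", "construction_cone"
    x: float        # top-left corner, pixels
    y: float
    width: float
    height: float

@dataclass
class LabeledFrame:
    frame_id: str                       # which raw sensor frame this annotates
    boxes: list = field(default_factory=list)

# Raw footage in, labeled examples out:
frame = LabeledFrame("cam_front_000123")
frame.boxes.append(BoundingBox("pedestrian", x=412.0, y=220.0, width=40.0, height=110.0))
frame.boxes.append(BoundingBox("car", x=80.0, y=300.0, width=220.0, height=140.0))
```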
problems that companies deserve to be built to occupy those infrastructure slots. Another way to think about this: when I was starting Scale, I was very inspired by Stripe and AWS, these large-scale infrastructure companies that felt very visionary because they realized there were the same problems that every company in a sector, every company in the startup industry, was going to deal with, and they took those and built almost consumer-level experiences for developers. They built them to a point where they were so easy to use, and the economies of scale were so clear, that they just became the defaults within the industry. If you look at that for AI, or for machine learning, there were kind of three main ingredients. There's compute: GPUs and other chips to power these incredibly data-intensive and compute-intensive algorithms, and as we've seen, almost the entire industry outsources that to Nvidia at this point. There's talent, which there's no real way to outsource, but talent is a place where these companies are obviously spending huge amounts of money: engineers at these firms are making millions and millions of dollars, they have teams of hundreds and hundreds of people, so they're spending on the order of billions of dollars on talent, full stop. And then there's data. Each of these three ingredients was such a big piece of the overall AI componentry that if there were companies that could solve them in a very high-class, very high-quality way, they were going to be used like infrastructure; the industry demanded an infrastructure layer for each of these components. So that's really the way I look at it. Each individual company has this option: do I build it in-house, or do I use the industry infrastructure? Most companies take the approach that there are a few things where it makes sense to build on my own to differentiate myself, but you have to accept that you're going to do those things, generally speaking, less efficiently than the industry standard, because of the economies of scale and the network effects that the infrastructure providers have.

So that's how you got going, and obviously your use cases have expanded. Today we've seen what generative AI looks like, with companies like OpenAI and Anthropic and many others. So where do you all play in the reinforcement learning from human feedback paradigm? How can you apply the similar primitives you had going to this world of generative AI?

I think one of the craziest things about modern-day AI is that most of the capabilities of these models are taught by data. You still don't have AI systems that are just learning on their own and spontaneously demonstrating these very human skills; they're taught to them by large-scale datasets and human data. What we do is build what we call a data engine, which is, in a similar framing, kind of the refinery for raw data in the ecosystem. That data engine powers every leading LLM in the industry today; effectively every leading large language model is powered using Scale's data engine. And the specific technique or approach is what you just mentioned: reinforcement learning with human feedback.

Can you explain that for people who maybe don't know the term?

Yeah. This was a technique we actually worked with OpenAI on, back in 2019, for the very first experiments. The basic approach is that you teach a model what good looks like. You teach a model how to
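The reward-modelling step of RLHF that he goes on to describe can be sketched minimally. Real systems fit a neural network over text; here the "model" is just one score per response, updated by gradient descent on the standard pairwise (Bradley-Terry) loss, -log(sigmoid(r_chosen - r_rejected)). The preference data is invented for illustration.

```python
# Minimal sketch of reward modelling: fit scores so that responses humans
# preferred end up with higher reward than the ones they rejected.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_reward_scores(preferences, n_responses, lr=0.5, steps=200):
    """preferences: list of (chosen_idx, rejected_idx) pairs from human raters."""
    r = [0.0] * n_responses
    for _ in range(steps):
        for chosen, rejected in preferences:
            # Gradient of -log(sigmoid(r_c - r_r)): push chosen up, rejected down.
            g = 1.0 - sigmoid(r[chosen] - r[rejected])
            r[chosen] += lr * g
            r[rejected] -= lr * g
    return r

# Human experts compared three responses: 0 beat 1, 1 beat 2, 0 beat 2.
scores = train_reward_scores([(0, 1), (1, 2), (0, 2)], n_responses=3)
```

The resulting scores play the role of the "internal sense of what good looks like" that the reinforcement learning stage then optimizes against.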
assess whether one answer, one response, is better than another. It learns that through a bunch of examples where human experts are teaching it: a human expert will say, this one's better than that one, and here's why, and the model can learn from that and then know what good looks like. So by the time it gets to actually producing results, it has an internal sense of what good looks like, what bad looks like, and what's better than another thing: what's called a reward model. Then it does what's called reinforcement learning: it uses that internal sense of what good looks like to optimize its own responses. What that means is that this allows the models to actually exceed human performance in a lot of cases, because it's kind of like how every human in the world can be a movie critic, but almost none of us can make a movie. Each of us can point out ways a movie could be better or could be improved, but obviously I can't make a movie. In the same way, if humans can teach the model what better looks like and how to improve, then the model can keep improving, even far beyond what human capability is.

You're in such a unique position to see how customers are leveraging AI. Are there any interesting anecdotes or observations you've had in the last couple of months, or the last year, about enterprises, big companies, leveraging both you and one of the model providers to do something you can speak to?

The really interesting opportunity for enterprises now is this: if you look at the best-in-class models built today, they're trained predominantly off of public data, predominantly data from the open internet. But if you think about the total data that's available, the total addressable data, let's say 99.9% of that is actually private, proprietary data of some form. One way to benchmark this: of the words that each of us types, what percent end up on the open internet? A vanishingly small percentage. Most of it is in messages or emails or memos, things that will never end up on the public internet unless you're subpoenaed or something. What that means is that most enterprises, whether they know it or not, are sitting on troves of data that far exceed the amount of data accessible on the public internet. So much of the opportunity for enterprises is figuring out ways to take great base models that are trained off the public internet, then intermingle them, fine-tune them, and specialize them on top of their own data, their own business, their own customers, all of that context, to produce things that are quite uniquely theirs, proprietary, and generally differentiated because of all the data they've amassed in the past. Broadly speaking, that's the direction we think the world is going to go: enterprises are going to be able to build models on proprietary data that have unique capabilities. The exciting thing that's been happening over the past few months is our work with OpenAI and other model providers (we partnered with Meta on Llama 2 as well), taking these general-purpose models and fine-tuning them on top of enterprise corpuses. We've built a platform, EGP, which enables this: it enables enterprises to take their own enterprise data, fine-tune on top of GPT-3.5 or Llama 2 or other base models over time, and build things that are uniquely capable for their own use cases, whether it's for customer care and support, or for legal applications, or for
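The enterprise fine-tuning flow described here starts by converting proprietary records into supervised training examples. A hedged sketch, assuming the chat-style JSONL format that fine-tuning APIs for models like GPT-3.5 typically consume; the support tickets, system prompt, and "Acme Corp" name are invented for illustration.

```python
# Sketch: turn proprietary enterprise data (imagined support tickets)
# into chat-format JSONL records for supervised fine-tuning.
import json

def to_finetune_record(question, ideal_answer):
    return {
        "messages": [
            {"role": "system", "content": "You are Acme Corp's support assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": ideal_answer},
        ]
    }

tickets = [
    ("How do I reset my device?",
     "Hold the power button for 10 seconds, then release."),
    ("Where can I download invoices?",
     "Invoices are under Billing > Documents in your account."),
]

# One JSON object per line, ready to upload to a fine-tuning job.
jsonl = "\n".join(json.dumps(to_finetune_record(q, a)) for q, a in tickets)
```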
development, drawing on their own engineering capabilities. I think this is incredibly exciting, because it's a way for enterprises to get the best of both worlds: all of a sudden I'm leveraging all of the incredible development happening among this small handful of foundation model providers, while also adding something that makes it uniquely mine. So I think this is the paradigm of the future for the enterprise. There's obviously a long way to get there, but I think this is very clearly what the future is going to be.

Let's say you're an executive or a founder at a startup or an enterprise, in some decision-making role, and the core business isn't related to artificial intelligence. What should you be doing right now? What recommendations would you have for someone who isn't at a Fortune 100 with people specialized in thinking about it? For the average executive or founder, how do you go about discovering what you could potentially use artificial intelligence for, Scale for, OpenAI for?

You should basically first go through and catalog: okay, what are my unique data assets? Then think it through. One mental model: let's say there were a person who was superhuman, who could read through all of that information more quickly than anybody else. What are the things that person would be able to do better than anyone else in the world? That's a pretty rough approximation of what this looks like for the models. The models are better at storing information than human brains, and are not as time-limited as human brains, so they can read through everything. Then, what are the unique capabilities you get from a system that has done that? So I'd go through that mental exercise, and I would think about: okay, what are the unique things I can do from there? Both cost reduction (customer care is a pretty clear example of cost reduction or optimization) and the offensive things I can do. Then I would seek to build those out with knowing AI partners: ourselves, OpenAI, Anthropic, companies that are seeing the entire ecosystem play out. And I'd basically race to do it, because I certainly believe that, not for all businesses immediately, but in a pretty short time frame, it's going to be very clear which businesses have embraced AI and which ones are still not running on the models, and it's going to become very evident from the consumer experience as well as financially.

How do you compare the advent, the last five years, of artificial intelligence to past trends like the personal computer, the internet, the smartphone, the iPhone, whatever it is? In your mind, whether the framework is societal impact, GDP lift, or productivity gains, how do you think about it?

My honest take is that it's going to be bigger than all of them, but you can look at it through a few different lenses. At minimum, AI is clearly a new consumer paradigm and a new way in which people will expect to interact with technology. In that way, you can say it's at least another mobile: mobile and personal computing were massive changes in the paradigm and accessibility of a lot of the base technologies, and the same thing is happening here with AI; chatbots are very clearly an extremely popular delivery method for technology. So as a baseline, you can frame it as a new consumer paradigm for technology. But the upshot is that AI has been a very hyped technology for a long time, for good reason: it is
the holy grail of unlocking human productivity. Take the framing of productivity, which is, roughly speaking, GDP per capita: economic output divided by the number of human heads you have. All of a sudden, if you have technology, algorithms or AI systems, that can start doing pretty meaningful chunks of what would otherwise require humans, you have a potentially ridiculous unlock on productivity. Another way to look at this: if you take all of US GDP, it's roughly $27 trillion. Software and IT services is about $2 trillion of that, so everything you or I spend all our time thinking about is that $2 trillion bucket of the overall spend, which is not nothing, but it's not even 10%. Sixteen trillion dollars of US GDP, so more than half, is in services, the biggest bucket of which is healthcare, and the next biggest is financial services. The potential disruption of that $6 trillion of services GDP, I think, is the potential of artificial intelligence. That's where you can potentially transform things to be 10x more productive, 10x better for the consumer, 10x more economically efficient in every way. You can't imagine an economic opportunity bigger than that; in many ways I think it is the biggest economic wave until, obviously, some future technology has the ability to be as impactful. The key question you'd ask is: okay, to unlock that, you just need to believe the models will keep getting better pretty quickly. Because no matter what, if the models keep improving at the rate they're improving now, we're going to end up in a world where the opportunity to disrupt the economy is totally unprecedented. And I think we as an AI community don't see that slowdown happening anytime soon, so we're in the midst of potentially one of the greatest economic engines of the world being invented, and I think it will be one of the most special technological changes we see.

You've said the next two to three years of AI are going to define the coming two to three decades of the world. What did you mean by that? Was that related to a lot of this productivity-gain stuff, or was it geopolitical?

I generally take the stance that there are two ways to look at the world in terms of, let's say, the balance of power, the balance of countries. You can look at it from an economic standpoint, and you can look at it from a hard-power standpoint. Probably most of the history of the world before World War II was dictated by hard power, and most of the history of the world for the past 80 or so years has been dictated by economic power. You could certainly ask which will define the next 80 years, but at minimum it's one of the two. So take that framing. One of the things that's quite shocking is the next two to three years of AI development: everything we've seen over the past three or four years of AI development is shocking. In 2019, GPT-2 couldn't count to ten; it would spit out gibberish English, it was totally unintelligible. Now, four years later, GPT-4 is probably more convincing and eloquent than most people in the world. That happened over the course of four years and a roughly thousand-x scale-up of the models: GPT-2 is roughly two billion parameters, and GPT-4, depending on who you ask, is somewhere between one trillion and two trillion parameters. Across that roughly 1,000x
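The magnitudes quoted in this stretch can be sanity-checked with a little arithmetic, using the figures exactly as stated in the conversation (they are the speaker's round numbers, not audited statistics).

```python
# Checking the quoted magnitudes: GDP buckets in trillions of USD,
# model sizes in parameters.
us_gdp = 27.0        # "roughly 27 trillion dollars of US GDP"
software_it = 2.0    # "software and IT services is about two trillion"
services = 16.0      # "16 trillion of US GDP ... is in services"

software_share = software_it / us_gdp   # about 7.4%: "not even 10%"
services_share = services / us_gdp      # about 59%: "more than half"

gpt2_params = 2e9    # "roughly two billion parameters"
gpt4_params = 1.5e12 # midpoint of "between one trillion and two trillion"
scale_up = gpt4_params / gpt2_params    # ~750x: order of magnitude "1,000x"
```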
scale-up, we've seen a transformation from worm-level intelligence to something quite convincingly human. Over the next two to three years, many companies are on the record as undergoing another 100x scale-up, so these players will go from spending hundreds of millions of dollars on these models to tens of billions of dollars, and my expectation is that that's going to deliver very, very powerful algorithms with the ability to impact both of these spheres, both economic power and hard power. So, okay, let's say we're in this takeoff scenario for the technology. On economic power, I think the case is pretty clear: if you believe what I just said, around it being the most important thing for economic productivity, then whoever gets there first, whoever integrates it into their economy fastest, whichever country or society is able to actually leverage it first, is going to have a meaningful leg up from an economic standpoint. And from a hard-power perspective, if you believe the technology is of a similar vein as the atomic bomb, which we can certainly dive into, if you believe it's that kind of technology, with the ability to deter conflict and project hard power to that degree, then it's also going to fundamentally change the balance of military power. So it feels to me like no matter how you slice it, this technology, while today we think of it as a chatbot, is at the core of the balance of power globally for the next 50 years.

What part of the atomic bomb analogy do you agree with, and what part do you reject? Obviously it's near and dear: you grew up in Los Alamos, so you have some familiarity with elements of it. What do you believe about that comparison versus not?

There are a bunch of interesting nuances here. The atomic bomb was obviously primarily a weapon of war, and it's something that, pretty quickly, we as an entire world could agree we didn't want to use anymore. So after a few uses it very quickly became this very clear deterrent for conflict, and a huge stabilizer for the globe. The difference with artificial intelligence is that, no matter what, we're going to have to use the technology for economic purposes; there's no scenario in which the countries of the world get together and say, hey, we're not going to use AI anymore. And artificial intelligence is a pretty difficult technology to detect the use of. Part of the issue with AI is that Russia could be using it for cyberattacks today, and it would be very hard for us to actually know that's what they were doing; by contrast, it's almost impossible to hide the fact that you used a nuke. Because of that, it's pretty hard to set the right international standards around the use of the technology, the fair use of the technology, standards around how it can and should be used. I think that presents challenges for the world: this is a hard technology to keep in any sort of box, unlike nukes. The ways in which it's similar, I think, are that it's a technology with a very steep technological curve and very clear benefits to scale. So to the degree that the United States, or a small set of democratic countries, can be the leaders in this technology, I do think it has the potential to be a huge deterrent toward other countries that are behind on that curve, and that's certainly been the case with atomic weapons and nuclear
weapons.

Do you worry about the catastrophic-risk scenario of that fast takeoff, anything more nefarious with the AI itself, rather than it being used by bad actors for, I don't know, weapons or whatever you want to compare it to? Do you worry about it in and of itself?

My taxonomy of AI risks has three buckets. The first bucket is the AI-qua-AI risk: the AI itself becomes a threat to humanity. That's, personally speaking, not the bucket I'm most worried or concerned about, and I can speak more about that. Then there's the AI-misuse category: authoritarian countries or terrorist groups misusing the technology. I think that's a very real risk; I think it's the most real risk we have. And then there's the last risk, which is a second-order effect: with massive labor displacement, you'll see all sorts of political instability, domestic instability, populism, these kinds of trends in many developed countries. The misuse one, I think, is very real. We're seeing an overall increase in terrorism in the globe, and I think the potential for misuse of the technology is very high: for cyberattacks, for bio-weaponry, for information warfare. Even the version of this I think is almost the most direct or clear: there are these companies like Character.AI and Replika where you can have an AI model that becomes a genuine companion to huge percentages of the citizens of various countries. If you had a foreign-run and foreign-operated AI-companion company, I think that's the most effective intelligence agency you could possibly have. So there's a lot to be worried about in the realm of AI misuse, and it's certainly very concerning, something that we as a country and we as a society need to think about: how do we mitigate those risks? There was the executive order from the Biden administration; I think we're certainly thinking about those.

Hey guys, Rashad here, producer of the Logan Bartlett Show. I wanted to take a quick second to make an ask: we are close to 10,000 subscribers and are trying to get there by the end of the year. If you're enjoying this conversation and these episodes, please consider subscribing to the YouTube channel. Now, back to the show.

What's something you believe is inevitable about artificial intelligence in the next five years that maybe isn't mainstream, or that the average person wouldn't fully appreciate?

I'll mention a bunch of things. One that most people in AI see and believe, but that certainly is not yet fully mainstream, is that these models are going to become, very quickly, some of the largest investments in most countries. If you believe these go from hundreds of millions of dollars to billions, to tens of billions, to hundreds of billions of dollars, there are not that many countries that can afford a hundred-billion-dollar investment, whether funded through private industry or through the public sector, the government itself. So this very quickly becomes one of the largest economic projects, or scientific projects, the world has seen, which I think is maybe surprising to people because it isn't that yet: these models have cost hundreds of millions of dollars, but a lot of players can afford a few hundred million. Very quickly it's going to be almost like particle accelerators, these massive scientific projects, in terms of the scale of investment. I think the other piece, that many people don't think about
that I think is just going to slowly blend in, is that the split between the time humans spend interacting with other people versus interacting with a model directly is going to keep shifting toward the model. There's truly no reason, outside of regulation, to believe that the percentage of my total time I spend interacting with models is going to decrease at any point in the next few decades. So that's going to increase monotonically. It's already pretty high for me; I interact with ChatGPT quite a bit already. And I think that's a very weird sociological scenario for us to contend with: no matter what, these models are going to start eating into all the time you spend interacting with other people. If you believe the models are only going to get better, that they're only going to have more interesting data, that the products are going to get better, then the monotonicity of that improvement is going to be very weird to think about. Maybe these things don't happen in the next five years; maybe they happen over ten or fifteen years, who knows. But at some point people are going to spend more than half their time talking to models versus humans. There was once a kind of conceptual belief that it would be low-level, manual jobs that would get automated through artificial intelligence. Increasingly we're finding that what these models are good at is entirely orthogonal to our understanding of what is difficult versus not. How do you think about that orthogonality, what AI is good at versus what it isn't? Yeah, I think this all boils down to data availability. Going back to it: data is the lifeblood of all these algorithms. Everything they learn, everything they
are capable of, they've learned from data. And it turns out that by using the internet over the past few decades, by commenting on Reddit and uploading things to the internet, we happened to have been creating the largest data set of human behavior ever. Anything we did on a computer, most of which was fundamentally knowledge work or intellectual work, because by definition it's abstracted away from the real world, is what the models have a lot of data on. They have remarkably little data on what it's like to pick something up, or to throw a ball, or to manufacture something, all the things that are embodied in the real world. The models have very little bearing on that, and very little data, and that's going to be true for a long time. The digital presence and digital intelligence of these models will, probably in perpetuity, be far more advanced than their physical, embodied capability. If you think about it purely from a data-availability standpoint, that makes perfect sense. And obviously where it gets really weird is the economic impact of this, and what it means for the future of labor. You touched on the three components of model development being talent, compute, and data. What do you think the most limiting factor is today, and what do you think it will be in five or ten years' time? I think data and compute are definitely the limiting factors today. Compute has a very clear limit because of manufacturing capability, so the supply chains for both of these are worth diving into. Today, 100% of the high-end GPUs that fuel these models are manufactured in
Taiwan. There are these fabs that TSMC has put tens if not hundreds of billions of dollars of capex into building and continuing to refine and improve, and that's a very strong upper bound on the compute capability and capacity for these models. By definition, if you believe in continued exponential scaling, it gets pretty hard unless you have exponential scaling of the supply chain as well, which, economically speaking, is not really feasible today. So compute is both the pinch point today (obviously we see how much Nvidia chips sell for and how badly startups want them) and also a clear limiting factor to the exponential-growth scenario. Data is as well. A lot of people have asked: is there more pre-training data out there? Have we run out of high-quality tokens? And there are certainly some very lucid arguments from some folks showing that some of the scaling laws will be tough to keep up with, because we just don't have that much more high-quality data on the internet. Is video data high-quality data, or is it not? These are the open questions. Text is an unusually compressed form of knowledge and information; video is much less compressed. So if you don't have enough pre-training data, a lot of the gap has to be made up for in RLHF and post-training data. And so I think we're going to start seeing similar kinds of bottlenecks there, where the human experts who are really what's needed to fuel the RLHF stages become GPUs in their own right: the number and quality of the human experts fueling model improvement
is going to become, in and of itself, another supply-chain bottleneck for the industry. As we've looked at GPT-2 to 3 to 4, it seems to the outside world like almost linear development is going on, but clearly these are more stair-step functions along the way. Do you think, with the constraints we just talked about, we're going to hit some plateau at some point that will require a much bigger unlock of one of these things to reach the next major step function? When you talk to people at the leading labs, they spend all their time thinking about the supply chains for these models. So implicitly, if nothing happens, these will be really big bottlenecks. That being said, this is potentially the greatest human engineering project we've ever seen, and I think we're going to figure things out. What that means is you're going to start seeing some pretty crazy actions to try to secure and ensure that the supply chains can continue scaling. But again, that's the technological imperative we operate in. Do you think we're under-appreciating, as a society, the reliance on Taiwan and the political position the Taiwanese find themselves in, and what that means for artificial intelligence? One very clear indication of the degree to which we don't appreciate it is the multiple gap between Nvidia and TSMC. TSMC trades at a dramatically lower multiple than Nvidia. Nvidia is a higher-margin company, of course, so some of that is well deserved, but from my conversations with public-market investors, TSMC gets dinged because of this geopolitical risk. Taiwan is just at this pressure point for the world. What's your perspective on open-source versus closed-source models?
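The data-bottleneck argument above can be made concrete with a Chinchilla-style scaling-law sketch. The fitted constants below come from Hoffmann et al. (2022) and should be treated as illustrative assumptions for this transcript, not a claim about any particular lab's models:

```python
# Chinchilla-style scaling law: predicted pre-training loss as a function
# of parameter count N and training tokens D. The constants are the
# fitted values reported by Hoffmann et al. (2022), used here purely
# for illustration.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """L(N, D) = E + A / N^alpha + B / D^beta."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Hold the data budget fixed (say, an assumed ~10 trillion high-quality
# tokens) and scale parameters alone: the loss floors out at
# E + B / D^beta, which is the "run out of high-quality tokens" worry.
capped_tokens = 10e12
for n_params in (70e9, 700e9, 7e12):
    print(f"{n_params:.0e} params -> predicted loss {loss(n_params, capped_tokens):.3f}")
```

Under these assumed constants, a hundredfold increase in parameters against a fixed 10-trillion-token corpus improves the predicted loss by less than 0.1, which is the quantitative shape of the argument that post-training and RLHF data become the next scarce input.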
It seems to be a big debate these days. Do you have any opinions on that? As a company, and in my personal point of view, we try to be quite agnostic as to how the technology develops. I think AI is an incredibly powerful and good technology, and all development on these models is great, as long as you have safe open-source development as well as safe closed-source development. Both can be done poorly and unsafely, and both can be done safely and well; if you have safe development on both, that's great. I think open-source models are probably a requirement to ensure that AI achieves the full economic impact it can have. There are a lot of scenarios where you just don't have very much compute and need a small model running somewhere, and that probably needs to be an open-source model of some form; it doesn't make sense for there to be some small closed-source model just to fit that need. So I think it's good for economic growth and economic prosperity that we have open-source models. I've heard you talk about the competing curves of AI. Can you talk about inequality and the competing curves of scale and democratization a little more? Because of the scaling laws, as the models become ridiculously expensive to train, tens of billions, hundreds of billions of dollars, potentially even trillions in the future, that very clearly limits access to the underlying technology, in the same way that none of us has access to a particle accelerator. The particle accelerator is the poster child of non-democratized technology. There's an avenue where that becomes the version of the world, so that's very clearly one major tent pole for how the technology develops. Then the other one, where there's so much will and
might within the community to accomplish this, is: how do you push all these models down the cost curve so quickly that a few years after you have these incredibly powerful closed-source models, you have very good open-source models, where the cost curve gets climbed down dramatically quickly? I think we're seeing that in open-source models. We're seeing GPT-3.5-level models happen very quickly, and they're actually very small; there are some recent results showing that these ten-billion-parameter or even smaller models can perform at the level of GPT-3.5. So you have this pretty rapid improvement toward democratization. Basically, one curve is the scaling, and the other curve is the speed from frontier-model result to democratization of that technology, and these are the push and pull of the entire industry. What is the Turing trap, and why is that significant in your mind? The Turing trap comes from a great paper that the economist Erik Brynjolfsson, a professor at Stanford, and others wrote. The basic premise is that, in many ways, the invention of AI started from the concept of the Turing test: at what point do you have an AI that can fully imitate a person? Because of that framing, we've thought about AI predominantly as a replacement for humans. We think that when we have AI, it's going to replace humans in the workforce, and that will be its impact on the economy, which, as Professor Brynjolfsson argues, is a trap. What's actually going to happen is that AI systems will slowly walk up the capability curve, and as they come in, most of the value is going to be generated from basically hybrid human-AI systems,
and it's going to be through some very interesting, complex, and nuanced interaction between human capability and AI capability that these very economically valuable things occur. Because of that, in most versions of the world AI actually ends up being a pretty strong net creator of more jobs, a net creator of more demand for human labor. And that's one of the very important messages: there's this perception that AI will just take all of our jobs, but the answer is that AI is going to create a fundamentally different economy, with a fundamentally different mix and kind of job, that will probably net create greater demand for human labor. The world's obviously complex, and there are a lot of complexities and nuances associated with artificial intelligence, that being one of them. Is there another general misconception people have that you would like to clarify, or express a dissenting opinion on? I think one of the major things people get wrong when they think intuitively about AI, and I see this in a lot of places, is that it's a very easy technology to dismiss. You use GPT-4, you realize it hallucinates all the time, and you throw your hands up: this technology is fundamentally limited and it's never going to go anywhere because it hallucinates. The tricky thing about AI is that it's a very hard technology to bet against, because in every prior instance you could have done the same with an earlier model. If you used GPT-2 and said, ah, this thing can't count to ten, you threw your hands up: there's no future here. Or with GPT-3, it can't solve a simple math problem, so you throw your hands up: this isn't going to go
anywhere. I think a lot of people, even in the AI industry, fundamentally don't actually believe in model improvement, and it's a shame, honestly, because the reality is the models are going to get a lot better. It's hard to imagine how the models will get a lot better, but they will, and we need to be thinking about a world where we're on this continued track of model improvement. You're a student of geopolitics and how artificial intelligence plays into it, so much so that you recently did a TED Talk on the subject. Can you speak to the battle you see playing out in artificial intelligence within the geopolitical world, in particular China and the US? One of the ways in which AI has been surprising is the degree to which it's become a clear objective and imperative for many countries and geographies around the world. Obviously much of it was invented in the United States, at Google and OpenAI and DeepMind and so on, but very quickly now you see China obviously trying to move very fast; Chinese tech giants have bought an aggregate of over $5 billion worth of Nvidia chips, and that's a lot of chips. You see the UAE particularly, but the UAE and Saudi Arabia, moving very aggressively into the technology, building large data centers. The UAE has released two successive open-source models, one of which is 180 billion parameters; these are big and serious models they're building. In Europe, you're seeing some of the best open-source models coming from European companies and startups. And from my conversations with people from many other countries, there are many others with clear aspirations in AI. So at minimum it's becoming a technology that a lot of countries look at and say, this is really important for our future. And
what's really more concerning is the degree to which certain countries, particularly China, are very clear-eyed about the monumental impact this technology can have. There are a number of PLA documents (the PLA being China's army, its DoD equivalent) that talk explicitly about how AI and other breakthrough technologies could allow the PLA to leapfrog its adversaries, most notably the United States, which has the most powerful military in the world. The argument is that we're going to over-invest in just upgrading our legacy platforms rather than the new breakthrough technologies, while they over-invest in the breakthrough technologies, and they could leapfrog us, just like China leapfrogged the United States in fintech and payments technology, where their digital payment infrastructure, most people believe, surpasses the state of payments infrastructure in the United States. This is the question: US GDP plus China GDP is 40% of global GDP, so these are the two behemoths of the economy. And the key question is, is AI the catalyst for China to overtake the United States, or at minimum to dramatically gain ground on the United States, or is it the technology that allows the United States to ensure we can maintain global stability by persisting and continuing Pax Americana? If you talk to a lot of political scientists, there's a pretty clear consensus that if Chinese military capabilities catch up to those of the United States, that's a very unstable world. Whichever side you're on, that definitely results in greater levels of global instability, because one major portion of the last 80 years of relative
peace has been that America has been the clear hard-power superpower in the world. If you have two superpowers, you get greater entropy in the system: more proxy wars, more overall instability, more war, more death. So in this broader battle between democracy and authoritarianism, between these different government systems and different ways the world can organize, AI is one of the major chess pieces in that game, and that's why I think it's critical that we as Americans, or America in general, are able to maintain that position. Maybe speak to the proportionality of what China is spending versus the US today. For at least the past few years, the PLA, the Chinese military, has been spending between roughly 1 and 2% of its budget on AI technologies, and in that same time period the US DoD has been spending 0.1 to 0.2% of our budget on AI technologies. So what the PLA had been forecasting is actually playing out in reality right now: we're over-investing in our legacy platforms and legacy technologies, under-investing in the breakthroughs, and they might reach a breakthrough before us and leave us in a situation we don't like. By the time people hear this, you will have already been to the UK AI Summit. I think you did a great job, by the way; it was really well done. You're heading out tomorrow. Why is attending this important to you, and what are you hoping to accomplish? There are a few threads here that I think are interesting. One is ensuring that there is a track for global cooperation on AI. Regardless of what you believe, if this technology is as important as I think it is, as many think it is, it's
something that requires many of the countries in the world to have a clear and open dialogue. You don't want anybody going off track and doing things in a way that is opaque to the rest of the world; that's certainly a driver of instability. So at minimum there's a huge amount of intrinsic value to the world in having an open dialogue between all the countries to discuss the technology, and many of the countries are going to be there, which is great, and kudos to the UK government for creating such a forum. The other piece that's critical is ensuring that we're thinking about the right risks of the technology. We talk a lot about the frontier-level risks and some of the existential risks, and I want to make sure we're also thinking a lot about the risks of misuse and what we're doing about those. So it's important to me to ensure that we have a broader view, particularly of the geopolitical risks at play, and that that shapes the global dialogue around the technology. What role do you think the government plays in regulating AI? It's obviously the question of the day, literally, with the executive order coming out. So far the approach has been quite light-touch on regulation of the technology, particularly because we're at such an early stage, and one of the worst things you can do to a technology with as high potential as AI is squander the opportunity early on by over-regulating it. So I think that's been smart. I think the key is that the government needs to ensure that misuses of the technology, the ways in which it can be used to create meaningful harm to consumers or to the citizen base, don't happen, or at least that those are
very highly punished, limited, and difficult to do in some way. To that end, and this was a key part of the executive order, one of the most important things is ensuring that there's a proper testing and evaluation regime for AI systems: how do we as a society agree that certain AI systems, use cases, and applications are fit for purpose and ready for prime time versus totally inadequate? Versions of this exist in all sorts of ecosystems. The FDA approves drugs; you can't just buy random molecules off the internet, ingest them, and expect that to go well. There's similar regulation on planes, obviously, and cars, technologies that are potentially very dangerous. Even Apple does a version of this for apps: you have to be approved by the App Store. So this is the key question, and in my conversations with folks at the White House, this is the industry that needs to exist but doesn't yet. We at Scale are trying to play a big part in this: we worked with the White House and DEF CON on some of the first public evaluations of these models a few months ago. Our view is that you need a pretty clear regime of testers in the private sector, with pretty clear regulation and guidelines given by the public sector, and a very clear opt-in from the model providers and those implementing AI technology. I want to back up to the founding of Scale and transition from some of those broader topics. What was the original insight behind the business that you recognized at that time, that led to its founding? The key insight was that, simply put, if AI were going to grow, the needs on data were going to grow
exponentially. I had no idea on what time frame that was going to happen, or at what scale and magnitude, but I had pretty strong conviction that neural networks and AI were going to be more and more ubiquitous, and if you believed that, you believed there had to be infrastructure for data to meet that challenge and meet that growth. That has certainly played out, in a way that's been surprising even to us: the amount of data required for these AI systems, and their hunger for new data, has far exceeded what I originally would have conceived possible by this time frame. And so how long did you spend at Quora before going to school? I grew up in Los Alamos, New Mexico; my parents were physicists at the lab, and there are a lot of physicists at that lab. Then I went to work at Quora for about a year; that was my first foray and taste of what working in technology was like. How old were you when you were working at Quora? Seventeen. I worked there at 17, and it was pretty eye-opening, in the sense that, like the Steve Jobs quote that I think every new employee at Apple hears, you realize that everything around you was built by people no smarter or more capable than yourself. My colleagues at Quora were brilliant, but it was crazy to think that this was a site I was spending a lot of time on as a teen, and it was built by a team of a hundred or so people. It was a very empowering experience. Then I went to MIT, started training neural networks of my own, and the rest is history. And so you went to MIT, and you got bored with the learning aspect of academia and wanted to go be a practitioner in the field, is that fair? Yeah, I think
one thing that stuck with me, which was already playing out in 2016 when I started Scale, is that it was pretty clear that the amount of resources you would need to fully accomplish AI, or to see AI through to the fullness of time, was going to vastly exceed what was available in academia. Obviously that's true to an almost ridiculous degree now, with hundreds of millions to billions of dollars being used to train the models, but that was probably the key driver. What inspiration have you taken from Amazon, with operations and technology being combined, for Scale? A huge amount. Amazon is in many ways one of the most countercultural tech companies in the world. They have many key insights, but I think one of the key insights of Amazon was that operational excellence is actually a huge driver of tech surplus and tech value. Jeff Wilke, who ran consumer operations there for many years and was CEO of Worldwide Consumer, everything outside of AWS, is a very close mentor of mine, and you learn pretty quickly that there was a way of thinking there that you just don't see at any other tech company: a deep embrace of operational complexity, and of operations as a discipline; a deep embrace of the marriage of technology and operations to produce combinations that are uniquely powerful and uniquely capable; and an extremely pragmatic approach to business decision-making. Those in combination created one of the greater economic engines of our time. So I learned a lot, and a lot of what we do at Scale is taking that same approach, playbook, and philosophy: how do
you marry operational complexity with fundamental technology breakthroughs to drive an entire industry forward. Amazon's also been kind of canonical in parallel execution. Is that something you think about when executing across the suite of different products you offer? Yeah, and the beauty of that insight is that you figure out how to architect problems such that you have as few dependencies as possible, so you have as many things as possible you can bet on in parallel at once, which is something investors obviously understand quite well: if you have enough independent bets, you can double down on the ones that work out, and it ends up working quite well. Can you talk a little bit about something I've heard you reference, the dichotomy of how businesses are rewarded for predictability but actually benefit from elements of random discovery, maybe using Amazon as an example? So if you think about Amazon as a company: it was an online bookstore, then it was the online everything store, then they created Prime, this membership program, and then it became the largest data-center provider in the world. That last piece sounds like such a non sequitur if you tell it that way; it almost seems like what a bad author would write into a book. You had the everything store, they were so big and bad, and then they ran all the computers globally? It just sounds unbelievable. And now, if you look at Amazon's market cap, depending on who you talk to, most analysts attribute the vast majority of the value of the company to AWS. So this very unpredictable event, that Amazon would invent AWS and then build that business, is actually the core driver of its market cap and value today.
It's a pretty crazy thought, because if you talk to most growth investors, they're trying to very directly understand what will happen to the revenue of the company over the next few years: how predictable is their growth, what exactly is it going to look like? But the thing that affected Amazon's earnings the most was the totally unpredictable event of AWS being invented. So there's this pretty confusing property of companies: on the one hand, investors think they're betting on the next few years of execution, but for the best companies, what they're really betting on is continuous reinvention. I think Nvidia is actually the best modern example of this. Nvidia was a GPU company selling gaming and graphics chips for literally decades, and about fifteen years ago they noticed that people were starting to use Nvidia GPUs to train AI algorithms because of their parallel-computing capability, and they started investing a huge amount of time, effort, R&D, and attention toward supporting that use case. It required a huge amount of conviction in AI to start that investment that early, and to keep leaning into it long before it was a needle-mover on the financials of the business. But today Nvidia is a trillion-dollar company almost purely because of AI. So if you were an investor in Nvidia stock ten years ago, it's a very similar thing: you were evaluating the ability of the company to execute on graphics and gaming chips, but the thing that actually mattered for whether you'd make a ton of money on the investment was whether they reinvented themselves as an AI company. And I think this is the core of
markets, or the core of companies, that a lot of people don't understand: the thing you're almost always actually betting on is the capability to reinvent. How do you manifest that culturally within Scale? You were obviously Scale API once upon a time, and Scale AI focused mostly on autonomous vehicles; now it's much broader than that, doing work around RLHF. It sounds like this is something you study and think about. How do you make sure that exists culturally within the business? Great question, and one I spend a lot of time thinking about. There are a few things we do, and certainly a lot more you can always do. One is that we create a culture of, as much as possible, pure meritocracy, one that leans heavily into people who are more junior at the company, who have good ideas, being almost thrown into the responsibility of having to run with those ideas and turn them into something big. This is a culture of: anyone can have a great idea, and if you have a great idea, you have almost full accountability for realizing it and making it happen. That kind of culture is really not how most companies operate. At most companies, everyone can have a good idea, and then some director or VP steals your idea and makes it into their career move. This culture is pretty unique, and we lean hard into making it very clear. I always talk to new people joining the company, and people who have been at the company for a while, to make sure it's always true that the limit to your impact and future at Scale is limitless, depending on how much you apply yourself, how good your ideas are, how innovative they are, and so on.
That's one. I think another is that we try to always focus ourselves on big problems. Amazon's version of this is focusing on the customer, but if you have the right fixed point in your system, which for Amazon is the customer and for us is thinking about the big problems in the industry, then you'll always end up stumbling upon opportunities that continue to get bigger and bigger. By that I mean: we were focused on autonomous vehicles for a very long time, which is a very big, complicated, interesting problem. At a certain point it became pretty clear, like what we talked about with geopolitics and the importance of AI to the future balance of power between countries, that we had pretty high conviction that that was going to be the case, and we leaned very hard into working with the US government and the US DOD. A lot of the technology that we built up in servicing the autonomous vehicle industry was pretty applicable, but we then took on this much larger problem of how you ensure American leadership, how you ensure that the US stays ahead. That's such a big problem that, in the course of serving it, we stumbled upon much larger opportunities than the original opportunities in autonomous vehicles. The same has been true now: the big problem is helping to ensure maximal progress in the AI industry, how we ensure that these models are the most impactful they can be and that we push for the maximum amount of progress. That's the biggest problem of our time. So I think pushing ourselves to be continuously ambitious about what the North Star of the
business is, I think, has been critical.

I want to ask about interviewing. You've said your favorite interview question is, "What's the hardest you've ever worked on something?" Why do you like that question?

Yeah, I generally think there really are two kinds of people in the world. This is a psychological term, but there's having an internal versus an external locus of control. If you have an internal locus of control, it means you believe the things that happen in your life are mostly the product of what you do and the actions that you take, so you believe a lot more that you're holding the reins of your own life. If you have an external locus of control, it's the opposite: you believe the things that happen to you are mostly the outcome of things outside of your control, that the world is very deterministic and you're like a pinball in a big pinball machine. If you know how to look for it, this really is a very clear dichotomy in how people think about their lives, and I find that I only want to work with people who have an internal locus of control. One way to index on that is seeing how hard people work at things that matter to them, because there are things that matter to everybody. If they have an internal locus of control, they're going to work their ass off to make sure the things that matter to them happen in the best possible way. If they have an external locus of control, things matter to them, but they sort of throw their hands up and let the world take the wheel. So by seeing how hard people work on the things that matter most to them, and by really
quantifying and getting a sense for how obsessive they were, how much they really care, how small the details they sweat are, you get a pretty clear indication of how much control they believe they have over their life outcomes.

What's the single trait or characteristic that you're most looking for in hiring? Is it that locus of control, or is there something else that stands out?

Yeah, there are a few. We had an early document that we wrote up around what we look for in the people we hire, and there were four traits. One is an internal locus of control. Two is problem solvers: fundamentally people who are very good at creative problem solving. You give them a problem and they figure it out; sometimes you couldn't solve it just by tackling it head on, and they'd figure out a way around the roadblock. That's a really important trait. Third, we looked for people who are impressive, people who, when you talk with them and work with them, you're genuinely impressed by. It's kind of a shorthand for people who are constantly raising the bar of the organization, because if you're impressed by somebody, you're going to be very motivated to come to work every day, work with them, and learn from them, so we held a pretty high bar there. And the last one was people who are collaborative. You can have people who have a high internal locus of control, are good problem solvers, and are very impressive, but just suck to work with. Those were the North Stars for the organization, and that's carried us pretty far.

You've spoken about how the prestige around big brands in tech actually perverts and distorts the perspective around hiring in Silicon Valley. How do you think that's the
case? These big brand names that people stay at for a long time: why is that kind of a contra signal that you'd look out for?

One of my favorite lines around this is something like: if your recruiting organization looks like a college admissions office, then you should be pretty scared. And I think it's true. The reality is it's very hard for somebody at a big tech company to have any sort of real impact. That's not too much of an indictment of the big tech companies, but they just hire so many people, and they have a limited scope of problems that really matter, so a lot of the people they hire just end up working on a teeny piece of a teeny piece of a teeny piece of a problem. If you think about the selection bias, the people who get selected into these very large brand-name tech companies are those who are over-optimizing for brand and status relative to impact. By contrast, small startups are literally the exact opposite. You're joining a small startup because you think, wow, I see the five people working on this thing, and I know I can come in and have a big impact. Not to say the big companies are doing a bad job, but I know I can have an impact, and it's not going to be a cool thing; I'm not going to be able to tell my friends about working at this startup and have them think, oh wow, that's really awesome. So a lot of hiring is skills-based, but a lot of it is also culturally testing people, and you really want the people who don't care about status and care a lot about impact. I think big tech companies negatively select for that.

We were talking about Zero to One before we got going, and how it's kind of been normalized in startup culture. I think once upon a time it was very revolutionary,
but now a lot of the things that Peter wrote about in the book have become kind of status quo in a lot of startups. Is there something that you've read or internalized recently about startups that's non-consensus, that you think will become consensus at some point in the next couple of years, around how to operate or build companies?

I think one thing that is certainly non-consensus in the context of the ecosystem, but that has certainly been true in my experience, is the value of very hardworking people who are not necessarily super experienced, and it's pretty surprising. There are some kinds of companies where a small group of very experienced people builds something incredible; that certainly exists. But for the most part, I think most startups are a chaotic buzz or hive of people who are not necessarily super experienced, but very hardworking, very high-aptitude, very capable, who almost gradient-descend their way to building these incredible things. I think that's not super well understood or super adopted by the entire tech ecosystem; a lot of the tech ecosystem is really focused on hiring the experienced people who've been there and done that.

The other thing, and we were talking a little bit about this, is the importance of having a strong point of view. It's quite interesting: for the last generation of tech giants, the Googles and the Metas, even the Apples of the world, the classic startup or business advice was to have as neutral a point of view and as neutral a brand as possible, so that you can distribute your product broadly, you're not offending anyone, and you have as wide-scale an impact and as broad-based an appeal as
possible. I think we're very quickly entering a very different era, in which the right thing to do is to have a pretty strong point of view and to be very loud about that point of view, because that allows you, first, to attract the talent of people who agree with you; it's incredible for building a positive culture and a very high-talent group. It's also very important for your customers, because more and more customers, whether enterprise customers or consumers, care a lot about working with people who philosophically agree with them and share their points of view. And it forces you to keep your company authentic. That's kind of a subtle thing, but I look at a lot of peers in enterprise software, and these enterprise software companies very quickly come to stand for nothing. Early on, every company is the product of founders who care a lot, who really sweat every detail, and then invariably every enterprise company becomes sort of another widget in the bag of tools. I think it's important for companies to maintain a sense of identity and remain authentic to have any chance at the reinvention component I talked about before.

You've said Scale has never been a particularly cool business. Can you elaborate on that, and whether that's been a net negative or a net positive for the company over the years?

Totally. It's funny: we've always operated in very cool spaces, self-driving cars, the current AI revolution, but we've never been the cool people in those spaces, because fundamentally we're an infrastructure provider, and infrastructure is not that sexy. For our company, we actually don't want the people who just want to be cool and flashy and work on these exciting new technologies. We actually really want
the people who are willing to roll up their sleeves, get their hands dirty, and work on the unsexy problems in AI that are really, really damn important. So I think it's been very important for building the company in a way that is true to the work we need to do, and the impact has been that the people who join Scale know what they're getting into, they know what role in the ecosystem we play, and they care a lot about that.

You have a wonderful office that we're sitting in right now, and I've heard you say that you believe you actually should spend money on a nice office space, and that it's an important thing to do for your employees. Can you talk for a little bit about why that's the case?

Yeah, at the risk of sounding somewhat woo-woo, I do think that the spaces you're in affect a lot about your thinking. Personally, being in spaces with a lot of natural light is one of the best things I can do for the quality of my thinking. I think there are a lot of fractal effects here, where pretty subtle differences in the quality of your space, the amount of natural light, or the configuration you're in with your coworkers can have pretty big impacts on the ultimate end outcome and the quality of thought. So it is one of these things that is almost insidious in how much it matters, and unintuitive.

What about structuring your day? How do you structure your day for maximum productivity?

What I often find is the best thing to do is to set some pretty clear goals at the very start of the day: what are the most important things for me to get done today? They can start out pretty small, and then over time you'll find where your limits are and upsize them. I'm in a ton of meetings every day,
which is part of the job, but continually I'll check in: how am I progressing against the clear goals I set? I think that's probably the best thing I do.

I've heard you say that in both math and physics, growing up, there were clear right answers, but it was the violin that was super influential to you, because it wasn't just about getting the notes right. Can you elaborate on why and how the violin influenced you?

I think one of the things that is somewhat maddening for people who are very quantitative in business is that you're constantly operating in a bit of a gray area, in the sense that you'll never really know if your decisions were fully correct or incorrect, and most things that matter are quite hard to measure, so you just have to operate on instinct. This sort of fuzzy, more intuition-driven kind of thinking is something not super well trained in math and science, and much better trained in the arts in America. That's the primary way it's been formative. Another thing that's been quite valuable as part of that is developing a sense of taste. So much of the product of a company is an outcome of taste and the degree to which you take that taste seriously: taste in people, taste in aesthetics, taste in product, taste in how to organize. Apple is probably the best example of this, one of the most tasteful companies in the world. It's been important to me to have been in a field where you have to develop taste to be effective, and to apply that to the company.

Your dad's a physicist and your mom's an astrophysicist, right? Yeah. How did your childhood most influence the CEO and founder that Alexander Wang is
today?

I have a great example for this, because I just spent the weekend with my parents. To them, it was really important that the people they worked with, and their leaders, had this very deep, almost inexplicable passion for the place and the work that they did, and for the history of the field. Pretty similarly, my parents both watched Oppenheimer many, many times, and they told me they had to keep rewatching it because they had to figure out who all the physicists were. There were physicists in the movie who had a single line or didn't have any lines, and they said, we had to really figure out who played each of the physicists. So there's this level of inexplicable passion for the field of physics that both my parents have, this level of fundamental care and love of the field, that I think really rubbed off on me. My mom had been teaching me about physics ever since I was born, basically, and I think that level of deep enthusiasm has been quite infectious.

You wrote a blog post, "Hire people who give a ...," that I think ties into that and some of the hiring things we were speaking about earlier. Is there anything else you would say about how you try to assess whether someone uniquely has the passion for your company, versus any other business, when you're recruiting and in the hiring process?

One thing we do is we often ask people why they're interviewing at Scale, and you can tell a good answer by how obscure it is. If people just say, oh, AI is the next big thing and I want to work at an AI company, it's like, ah, okay. But if they say, yeah, I was working with one of my friends to train a model and
spent five hours just looking at the data, and there was one little bug in the data that caused the whole model not to work, and I realized that, and then I realized this problem was really deeply interesting, and I applied to Scale because of that: that's the right kind of answer. So one of the things you look for, and Paul Graham has written very elegantly about this topic, is a reason for people to care about things, when there's an irrational reason to care, whether it's because of some curiosity or some quirk, something fundamentally irrational, some reason they care about what we do. That's probably the thing we look for the most: something that's fundamentally irrational and fundamentally hard to explain about their passions.

Similar to music, right? Practicing music, maybe people will never know if you cut that last corner, but if you really practice it, then it becomes something innate to you. Totally.

I've heard you say, maybe you tweeted it, that you've been weird your whole life, and that everybody you've ever respected has also been weird. Why do you think being weird is an important trait for being an interesting person, and for the types of people you resonate with?

Yeah, purely statistically, if you're normal, that means you're in the bell curve, and it's hard to be in the bell curve and accomplish great things or have a huge amount of differentiated impact on the world. So it's a pure statistical argument. But the thing I find most interesting here is that being normal is some approximation for having generally pretty mainstream beliefs, and there's nothing wrong with that, but
it means that, and this is maybe an indictment, if you're normal, it's pretty easy to simulate a conversation with you, and in some ways there's low information content from having that conversation. Whereas if you're weird and you say a lot of very unexpected things and have a lot of unexpected thoughts, that's a very generative experience. So interacting with and surrounding yourself with weird people ends up being quite valuable, because you get to bathe in a more entropic, more fundamentally interesting and diverse pool of ideas and thoughts. I think that's the greatest gift you could have.

What has you most excited about the future of AI as we look out five to ten years from now?

It's hard not to be excited about what I talked about: potentially the greatest economic invention and the greatest economic engine that humanity will have ever invented. That's fundamentally so incredibly exciting; it's as if we're inventing the steam engine times a million. What is this thing that will generate so much economic surplus, that lifts so many people into better living conditions, that elevates humanity to such an insane degree? It's such an exciting proposition. Then, double-clicking on that, the deeply exciting components are again the elevation of the human condition. Take healthcare, which I kind of alluded to before. Right now, globally speaking, there's roughly a 10x shortage of doctors, because it takes so much training, it's so expensive to train people, and it takes so much time and resources. From a global perspective there are just way too few doctors, and even with those doctors, the way healthcare mostly works right now is extremely reactive. You go to the doctor, you
have a problem, and sometimes they can fix it easily, sometimes it's extremely expensive to fix; most of the time it's very expensive to resolve, and sometimes it doesn't work out. Fundamentally, we need a more proactive healthcare system, in which you're constantly measuring a lot of things so you can deal with these problems very early. Healthcare is an entire field where, without technology breakthroughs, we're kind of stuck as a species; humanity is a bit stuck in how good you can make healthcare without fundamental technological advances. So if AI can all of a sudden give everybody a doctor in their pocket that enables them, as soon as they feel something weird, or think something weird is going on, or there's a weird bump or whatever, to be proactive about it, that's pretty incredible. That's just one way it could have one of the greatest effects on longevity, on global lifespan, of anything that we do. Those are the things that get me really excited: the full knock-on impacts are going to be pretty great.

Alex, thanks for doing this. Yeah, thanks so much for having me. [Music] [Applause]
Info
Channel: The Logan Bartlett Show
Views: 353,794
Id: gDMemWgEJak
Length: 91min 53sec (5513 seconds)
Published: Fri Nov 03 2023