Safety in Numbers: Keeping AI Open

Captions
Scaling laws underpin the success of large language models today, but the relationship between datasets, compute, and the number of parameters was not always clear. In 2022 a pivotal paper came out that changed the way many people in the research community thought about that very calculus: it demonstrated that datasets were actually more important than the sheer size of the model. One of the key authors of that paper was Arthur Mensch, who was working at DeepMind at the time. Earlier this year Arthur banded together with two other researchers, Guillaume Lample and Timothée Lacroix, both at Meta, where they worked on Llama, and the three of them founded a new company, Mistral. That team has been hard at work, releasing Mistral 7B in September, a state-of-the-art open source model that quickly became the go-to for developers, and, as in just the last few days, a new mixture-of-experts model that, naturally, they're calling Mixtral. So today you'll hear directly from Arthur as he sits down with a16z general partner Anjney Midha. As the battleground for large language models heats up, to say the least, they discuss the many misconceptions around open source and the war being waged on the industry, the current performance reality of open versus closed models and whether that gap will realistically close with time, plus the kind of compute, data, and algorithmic innovations required to keep scaling LLMs efficiently. It's really rare to have someone at the frontier of this kind of research be so candid about what they're building and why, so I hope you come out of this episode as excited about the future of open source as I did. Enjoy.

As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.

You've got quite the founding team story. If we flash back to a few years ago, labs were building foundation models and the consensus across the research community was that the size of these models was what mattered most; how many million or billion parameters went into the model seemed to be the primary debate people were having. But it seems like you had a hunch that datasets mattered more. Could you give us the backstory on the Chinchilla paper you co-wrote, the key takeaways, and how it was received?

The backstory is that in 2019 and 2020 people were relying a lot on a paper about scaling laws for large language models that advocated basically scaling the size of models infinitely while keeping the number of data points rather fixed. It said that if you had four times the compute, you should mostly multiply your model size by 3.5 and your data by maybe 1.2. A lot of work was done on top of that. In particular at DeepMind, when I joined, I joined a project called Gopher, and there was a misconception there. There was also a misconception in GPT-3, and basically every paper in 2021 made this mistake. At the end of 2021 we started to realize there were some issues, so we turned back to the mathematical paper that was actually talking about scaling laws. It was a bit hard to understand, but we figured out that if you thought about it from a more theoretical perspective, and if you looked at the empirical evidence we had, it didn't really make sense to grow the model size faster than the data size. We did some measurements, and as it turned out, what was actually true was what we expected: in common words, if you multiply your compute capacity by four, you should multiply the model size by two and the data size by two. That's approximately what you should be doing, which is good, because if you push everything to infinity everything remains consistent; you don't end up with a model that is infinitely big with close-to-zero compression, or infinitely small with infinite compression. It really makes sense, and it's really what you observe if you do multiple runs. That's how we trained Chinchilla, and that's how we wrote the Chinchilla paper.
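To make the allocation rule Arthur describes concrete, here is a minimal sketch in Python. It is not from the interview: it assumes the common approximation that training compute C ≈ 6·N·D for N parameters and D tokens, and a Chinchilla-style rule of thumb of roughly 20 training tokens per parameter, so quadrupling compute roughly doubles both N and D.

```python
def compute_optimal_split(compute_flops: float, tokens_per_param: float = 20.0):
    """Rough Chinchilla-style split of a training budget (illustrative only).

    Assumes C ~= 6 * N * D and a fixed tokens-per-parameter ratio, which is
    one common reading of the compute-optimal result described above.
    """
    # C = 6 * N * D with D = tokens_per_param * N  =>  N = sqrt(C / (6 * ratio))
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens


if __name__ == "__main__":
    base_budget = 1e23  # an arbitrary training budget, in FLOPs
    for scale in (1, 4, 16):
        n, d = compute_optimal_split(scale * base_budget)
        print(f"{scale:>2}x compute -> {n / 1e9:6.1f}B params, {d / 1e12:5.2f}T tokens")
    # Each 4x increase in compute roughly doubles both the parameter count
    # and the token count, matching the "multiply both by two" rule above.
```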
At the time you were at DeepMind and your co-founders were at Meta. What's the backstory on how the three of you ended up coming together to form Mistral after the compute-optimal scaling laws work you just described?

We've known each other for a while, because Guillaume and I were in school together and Timothée and I did our master's together in Paris, so we had very parallel careers. Timothée and I also worked together again when I was doing a postdoc in mathematics. Then I joined DeepMind while Guillaume and Timothée went on to become permanent researchers at Meta. I was working on large language models between 2020 and 2023; Guillaume and Timothée were working on solving mathematical problems with large language models, and if I understand correctly, I wasn't there, they realized they needed stronger models and started doing large language models at that point, I guess a year after I started. On my side I was mostly working on a small team at DeepMind. We did some very interesting work: Retro, which is a paper on retrieval for large language models; Chinchilla; and I was on the team doing Flamingo, which is one of the good ways of building a model that can see things. When ChatGPT came out, we knew, I mean we knew from before, that the technology was very much game-changing, but it was a signal that there was a strong opportunity to build a small team focused on a different way of distributing the technology, redoing things in a more open source manner, which was not the direction that Google, at least, was taking. So we had this opportunity, we left our companies at the beginning of last year, and we created the team, which started working on the 5th of June.

If I recall correctly, right before they left, Tim and Guillaume had started to work on Llama over at Meta. Could you describe that project and how it related to the Chinchilla scaling laws work you'd done?

Llama was a small-team reproduction of Chinchilla, at least in its approach to parameterization and all of these things. It was one of the first papers to establish that you could go beyond the Chinchilla scaling laws. The Chinchilla scaling laws tell you what you should be training if you want an optimal model for a certain compute cost at training time. But if you take into account the fact that your model should also be efficient at inference time, you probably want to go far beyond the Chinchilla scaling laws. That means you want to overtrain the model, to train on more tokens than would be optimal for performance. The reason you do that is that you actually compress the model more, and then when you do inference you end up with a model that is much more efficient for a given performance. So by spending more compute during training you spend less during inference, and you save cost. I guess we had observed that at Google as well, but the Llama paper was the first to establish it in the open, and it opened a lot of opportunities.
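As a rough illustration of that overtraining trade-off, here is a small sketch; none of these numbers come from the interview. It uses the parametric loss fit L(N, D) ≈ E + A/N^α + B/D^β reported in the Chinchilla paper, with its approximate fitted constants, and compares a compute-optimal-sized model against a much smaller model trained on far more tokens that reaches a similar predicted loss while being far cheaper to serve.

```python
# Approximate fitted constants reported in the Chinchilla paper
# (Hoffmann et al., 2022); treat them as ballpark, illustrative values.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28


def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for N parameters trained on D tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA


def train_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens  # the usual C ~= 6 * N * D approximation


configs = {
    "compute-optimal": (70e9, 1.4e12),   # large model, Chinchilla-style token count
    "overtrained":     (13e9, 8.0e12),   # small model trained on many more tokens
}

for name, (n, d) in configs.items():
    print(f"{name:16s} N={n / 1e9:5.1f}B  D={d / 1e12:4.1f}T  "
          f"loss~{predicted_loss(n, d):.3f}  train FLOPs~{train_flops(n, d):.2e}")

# The overtrained model spends slightly more compute at training time but has
# roughly a fifth of the parameters, so every generated token is much cheaper,
# at a comparable predicted loss -- the trade-off described above.
```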
Yes, I remember the impact of the Chinchilla scaling laws work on multiple labs realizing just how suboptimal their compute setups were, and then the impact of Llama being dramatic on the industry, in realizing how to be much more efficient at inference time. So I can imagine those were some of the top insights, and the top concerns, on your mind when you left to start Mistral. Let's fast forward to today, December 2023. We'll get to the role of open source in a bit, but let's level set on what you've built so far. A couple of months ago you released Mistral 7B, which was a best-in-class model, and this week you're releasing a new mixture-of-experts model. So tell us a little more about Mixtral, I believe, is what you're calling it, and how it compares to other models.

Mixtral is our new model, built on a technology that hadn't been released in open source in a usable form before: a sparse mixture of experts. It's quite simple. You take the dense layers of your transformer and you duplicate them; you call these layers expert layers. Then, for each token in your sequence, you have a router mechanism, just a very simple network, that decides which experts should be looking at which token. So you send all of the tokens to their experts, you apply the experts, you get back the outputs and combine them, and then you go forward in the network. You have eight experts per layer and you execute only two of them. What that means at the end of the day is that you have a lot of parameters in the model, 46 billion, but the number of parameters you actually execute is much lower, because you only execute two branches out of eight. So you only execute about 12 billion parameters per token, and this is what counts for latency and throughput. You get a model with the cost profile of a 12-billion-parameter network but performance much higher than what you could get, even by compressing data a lot, with a 12-billion dense transformer. Sparse mixture of experts is a technology that lets you be much more efficient at inference time and also at training time, and that's the reason we decided to develop it very quickly.
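As a minimal sketch of the routing mechanism described here (an illustration, not Mixtral's actual implementation), the PyTorch layer below scores all experts with a small router, runs only the top two per token, and mixes their outputs with the router weights, so only a fraction of the total parameters is touched for each token.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    """Top-k sparse mixture-of-experts feed-forward block (illustrative)."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_hidden), nn.SiLU(),
                           nn.Linear(d_hidden, d_model)) for _ in range(n_experts)]
        )
        self.router = nn.Linear(d_model, n_experts)  # scores every expert for every token
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq, d_model = x.shape
        tokens = x.reshape(-1, d_model)                        # (n_tokens, d_model)
        scores = self.router(tokens)                           # (n_tokens, n_experts)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)  # keep the best experts
        weights = F.softmax(top_scores, dim=-1)                # per-token mixing weights

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                routed = top_idx[:, slot] == e                 # tokens sent to expert e
                if routed.any():
                    out[routed] += weights[routed, slot:slot + 1] * expert(tokens[routed])
        return out.reshape(batch, seq, d_model)


# Only top_k of n_experts branches run per token, so the parameters executed per
# token are a small fraction of the layer's total parameter count.
layer = SparseMoELayer(d_model=64, d_hidden=256)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```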
Very quickly, just for folks who are listening who might not be familiar with state-of-the-art architectures in language models, could you describe the difference between dense models, which have been the primary architecture to date, and mixture of experts? Intuitively, what are the biggest differences between these two architectures?

They are very similar, except for what we call the dense network. In a dense transformer you generally alternate between an attention layer and a dense layer; that's the idea. In a sparse mixture of experts you take the dense layer and duplicate it several times, and that's where you increase the number of parameters. You increase the capacity of the model, what it can remember, without increasing its cost, so it's a way of decoupling the memorization capacity of the network from its cost at inference time.

If you had to describe the biggest benefits for developers as a result of that inference efficiency?

It's cost and latency. Usually that's what you look at when you're a developer: you want something that is cheap and you want something that is fast. Generally speaking, the trade-off is strictly favorable in using Mixtral compared to using a 12-billion dense model. The other way to think about it is that if you want to use a model that is as good as Llama 2 70B, you should be using Mixtral, because Mixtral is on par with Llama 2 70B while being approximately six times cheaper, or six times faster for the same price.

Could you talk a little about why it's been so challenging for research labs and research teams to really get the mixture-of-experts model right? It sounds like folks have known for a while that the dense architectures behind the most well-known products are slow, expensive, and difficult to scale, and so for a while people have been looking for an alternative architecture that could be, like you were saying, cheaper, faster, more efficient. What were some of the biggest challenges you had to figure out to get the MoE model right?

Well, I won't disclose all the trade secrets, but there are basically two challenges. The first is that you need to figure out how to train it correctly from a mathematical perspective. The other challenge is to train it efficiently, that is, how to use the hardware as efficiently as possible: you have new challenges coming from the fact that tokens are flying around from one expert to another, which creates communication constraints, and you need to make that fast. Then, on top of that, you also have new constraints that apply when you deploy the model; you need to do inference efficiently. That's also the reason we released an open source package based on vLLM, so that the community can take this code, modify it, and see how it works.

Yeah, obviously we're excited to see what the community does with the Mixtral release you're putting out this week.
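For developers who want to try this, here is a minimal sketch using the upstream vLLM Python API. This is not the specific package Arthur mentions, and the model identifier and sampling settings are just illustrative placeholders; check the Mistral and vLLM documentation for the current names.

```python
# pip install vllm  (needs a GPU with enough memory for the chosen checkpoint)
from vllm import LLM, SamplingParams

# Illustrative model identifier; substitute whichever Mistral or Mixtral
# checkpoint you actually want to serve.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.1")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(
    ["Explain what a sparse mixture-of-experts layer does, in two sentences."],
    params,
)

for request_output in outputs:
    print(request_output.outputs[0].text)
```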
Let's talk about open source, an approach and a philosophy that has permeated all the work you've been doing so far. Why choose to tackle this increasingly competitive space with an open source approach, which is quite different from the way everybody else is approaching it?

It's a good question. The answer is that it's partly ideological and partly pragmatic. We have grown with the field of AI, which went from detecting cats and dogs in 2012 to generating human-like text in 2022, so it really made a lot of progress. If you look at the reason we made all of this progress, most of it is explained by the free flow of information. You had academic labs and very big industry-backed labs communicating all the time about their results and building on top of each other's results, and that's how we improved the architectures and training techniques so significantly: we just made everything work as a community. Then all of a sudden, in 2020 with GPT-3, this tide reversed, and companies started to be more opaque about what they were doing, because they realized there was actually a very big market. By 2022, on the important aspects of AI and of LLMs, going beyond Chinchilla, there was basically no communication at all. That is something that I as a researcher, and Timothée and Guillaume and all the people who joined us, deeply regretted, because we think we're definitely not at the end of the story. We need to invent new things; there's no reason to stop now, because the technology, while effectively good, is not working completely well enough. So we believe it's still the case that we should be communicating a lot about models, and that we should allow the community to take the models and make them their own. That's the ideological reason we went in that direction. The other reason is that we are talking to developers. Developers want to modify things, and having deep access to a very good model is a good way of engaging with that community and addressing their needs, so that the platform we're building gets used by them. So that's also a business reason. Obviously, as a business we do need a valid monetization approach at some point, but we've seen many businesses build open-core approaches, with a very strong open source community and also a very good offering of services, and that's what we want to build.

That resonates. I remember a very detectable shift. You're right that the early days of deep learning were largely driven by open collaboration between researchers from different labs, who would often publish all their work and share it at conferences; Transformers, famously, were published and opened to the entire research community. But that has definitely changed.

Yes. There are levels of open sourcing in AI, and we offer the open weights and the inference code. That's the end product, which is already super usable, and it's already a very big step forward compared to closed APIs, because you can modify it and you can look at what's happening under the hood, look at activations and so on. So you have interpretability and the possibility of modifying the model: to adapt it to some editorial tone, to adapt it to proprietary data, to adapt it to specific instructions, which is much harder to do if you only have access to a closed-source API. That also goes with our approach to the technology, which is to say that pre-trained models should be neutral, and we should empower our customers to take these models and put in their own editorial approaches, their instructions, their constitution if you want to talk like Anthropic. That's the way we approach the technology: we don't want to pour our own biases into the pre-trained model; on the other hand, we want to enable developers to control exactly how the model behaves, what kind of biases it has and what kind it doesn't have. So we really take this modular approach, and that goes very well with the fact that we release some very strong open-weight models.
Could you ground us in the reality of where these models are today, just to give people a sense of where in the timeline we are? Is open source really a viable competitor to proprietary, closed models, or is there a performance gap? What are the trade-offs or limitations people should be aware of with open source?

Mixtral has similar performance to GPT-3.5, so that's a good grounding. Internally we have stronger models that are in between 3.5 and 4, basically the second or third best models in the world. So we really think the gap is closing; the gap is approximately six months at this point. And the reason it's six months is that things actually go faster if you do them in the open, because you get the community modifying the model and testing very good ideas that can then be consolidated, by us for instance, and we just go faster because of that. It has always been the case that open source ends up going faster; that's the reason the entire internet runs on Linux, and I don't see why it would be any different for AI. Obviously some constraints are slightly different, because the infrastructure cost is quite high: training a model costs a lot of money. But I really think we'll converge to a setting where the proprietary models and the open source models are just as good, and I think eventually the field will be much more open, because if you want to go beyond the biggest models of today, you do need to find new paradigms, and that means we also need to do research. We're very excited by that prospect, because we like competitive environments and we like research.

So let's talk about that a little more. How are you seeing people use and innovate on the open source models, and are there any use cases that diverge from proprietary, closed models at all?

We've seen several categories of usage. There are a few companies that know how to strongly fine-tune models to their needs: they took Mistral 7B, had a lot of human annotations and a lot of proprietary data, and modified Mistral 7B so that it solves their task just as well as GPT-3.5, but at a lower cost and with a higher level of control. We've also seen very interesting community efforts to add capabilities to Mistral 7B. We saw a context-length extension to 128k that worked very well; again, it was done in the open, so the recipe was available, and this is something we were able to consolidate. We've seen image encoders added to turn it into a visual language model. A very actionable thing we saw is that the Hugging Face folks were, I think, the first to do direct preference optimization on top of Mistral 7B, and they made a much stronger model than the instructed model we proposed at the initial release. It turned out to be a very good idea, and that's something we've consolidated as well. Generally speaking, the community is super eager to just take the model and add new capabilities, put it on a laptop, put it on an iPhone; I saw Mistral 7B on an iPhone, and I saw Mistral 7B on a stuffed parrot as well. Fun things, useful things. It's been super exciting to see the research community take hold of our technology, and with Mixtral, which is a new architecture, I think we're going to see even more interesting things, in the interpretability field and also in the safety field: as it turns out, there is a lot you can do when you have deep access to an open model, so we're really eager to help with that and to engage with the community.
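As a rough sketch of the preference-optimization idea referenced above (this is the generic DPO objective, not the Hugging Face team's exact recipe), the loss below nudges the model being fine-tuned to widen the gap between a human-preferred answer and a rejected one, measured relative to a frozen reference model.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss (illustrative).

    Each tensor holds the summed log-probability of a full response under
    either the model being tuned ("policy") or the frozen reference model,
    for the preferred ("chosen") and dispreferred ("rejected") responses.
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Reward widening the chosen-vs-rejected margin relative to the reference.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()


# Toy usage with made-up log-probabilities for a batch of three preference pairs.
loss = dpo_loss(torch.tensor([-12.0, -9.5, -15.0]),
                torch.tensor([-14.0, -11.0, -15.5]),
                torch.tensor([-13.0, -10.0, -15.2]),
                torch.tensor([-13.5, -10.5, -15.1]))
print(loss.item())
```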
Safety is an important piece to talk about. The immediate reaction of a lot of folks is to deem open source less safe than closed models. How would you respond to that?

We believe that's actually not the case for the current generation of models. The models we are using today are not much more than a compression of whatever is available on the internet. They do make access to knowledge more fluid, but that is the story of humanity: making access to knowledge freer. It's no different from inventing the printing press, where apparently there were similar debates; I wasn't there, but that was the debate. So we are not making the world any less safe by providing more interactive access to knowledge. That's the first thing. The other thing is that there are immediate risks of misuse of large language models, and you have them for open source models but also for closed models. The way you address these problems and come up with countermeasures is to know about them. You need to know about breaches, the same way you need to know about breaches in operating systems and networks, and it's no different for AI. Putting models under the highest level of scrutiny is a way of knowing how they can be misused, and a way of coming up with countermeasures. A good example is that it's actually super easy to exploit an API; it's super easy, especially if you have fine-tuning access, to make GPT-4 behave in a very bad way. Since that's the case, and it's always going to be the case because it's super hard to be adversarially robust, it means we would only be trusting the teams of large companies to figure out ways of addressing these problems, whereas if you do open sourcing you trust the community, and the community is much larger. If you look at the history of software, of cybersecurity, of operating systems, that's the way we made systems safe. So if we want to make current AI systems safe, and then move on to a next generation that could potentially be even stronger, and then we can have this discussion again, well, you do need open sourcing. Today we think open sourcing is the safe way.

I think this is not widely understood: when you have thousands or hundreds of thousands of people able to red-team models because they're open source, the likelihood that you'll detect biases, built-in breaches, and risks is just dramatically higher. If you were talking to policymakers, how would you advise them? How do you think they should be thinking about regulating open source models, given that the safest way to battle-harden software and tools is often to put them out in the open?

We've been saying precisely this: that the current technology is not dangerous. On the other hand, the fact that we are effectively making models stronger means that we need to monitor what's happening, to empirically monitor performance, and the best way of empirically monitoring software performance is through open source. So that's what we've been saying.
There have been some efforts to come up with very complex governance structures where you would have several companies talking together, some safe space, some sandbox for red teamers who would potentially be independent, things that are super complex. But as it turns out, if you look at the history of software, the only way we have done software collaboratively is through open source. So why change the recipe today, when the technology we're looking at is really nothing more than a compression of the internet? That's what we've been saying to the regulators. Another thing we've told regulators is that if they want to enforce that AI products have to be safe, if you want to have a diagnosis assistant, you want it to be safe, right, then in order to monitor and evaluate whether it's actually safe, you need very good tooling, and that tooling requires access to LLMs. If you only have access to closed-source API LLMs, you're in troubled waters, because it's hard to be independent in that setting. So we think independent controllers of product safety should have access to very strong open source models and should own the technology.

And if open source LLMs were to fail relative to closed-source models, why would that be?

Well, I guess the regulatory burden is potentially one thing that could make it harder to release open source models. Generally speaking it's also a very competitive market, and I think that in order for open source models to be widely adopted, they need to be as strong as the closed-source models. They have a little advantage because you have more control, so you can do heavier fine-tuning and make performance jump a lot on a specific task, because you have deep access. But really, at the end of the day, developers look at performance and latency, and that's why we think that, as a company, we need to be very much at the frontier if we want to be relevant.

Given the complexity of frontier models and foundation models, there are just tons of misconceptions that folks have about these systems. If you step back and look at the battle that's raging between folks pushing for closed-source systems versus open source systems, what do you think is at stake? What do you think the battle is for?

I think the battle is for the neutrality of the technology. A technology, per se, is something neutral: you can use it for bad purposes, you can use it for good purposes. If you look at what an LLM does, it's not really different from a programming language, and it's actually used very much like a programming language by application makers. There's a strong confusion between what we call a model and what we call an application. A model is really the programming language of an application. If you talk to all of the startups doing amazing products with generative AI, they're using LLMs as a function, and on top of that they have a very big system with filters, with decision making, with control flow and all of these things. What you want to regulate, if you want to regulate something, is the system; the system is the product. For instance, a healthcare diagnosis assistant is an application. You want it to be unbiased, you want it to make good decisions even under high pressure, so you want its statistical accuracy to be very high, and you want to measure that. It doesn't matter whether it uses a large language model under the hood: what you want to regulate is the application.
The issue we had, and the issue we're still having now, is that we hear a lot of people saying we should regulate the tech, so we should regulate the function, the mathematics behind it. But you never use a large language model by itself; you only ever use it inside an application, with a user interface, and that is the thing you want to regulate. What that means is that companies like us, foundation model companies, will obviously make the model as controllable as possible, so that the applications on top of it can be compliant and safe, and we'll also build the tools that let you measure the compliance and safety of the application, because that's super useful for application makers; it's actually needed. But there's no point in regulating something that is neutral in itself, that is just a mathematical tool. That's the one thing we've been hammering a lot, and I think we've been heard to some extent, which is good, but there's still a lot of effort needed to make this strong distinction, which is super important for understanding what's going on.

So "regulate apps, not math" seems like the right direction, and it's what a lot of folks who understand the inner workings of these models, and how they're actually implemented in reality, are advocating for. What do you think is the best way to clear up this misconception for folks who don't have technical backgrounds and don't understand how foundation models work and how the scaling laws work?

I've been using a lot of metaphors to make it understood. Large language models are like programming languages, and you don't regulate programming languages; you regulate malware, you ban malware. We've also been actively vocal about the fact that pre-market conditions like FLOPs, the number of floating-point operations you spend to create a model, are definitely not the right way of measuring the performance of a model. We're very much in favor of having very strong evaluations. As I've said, this is something we want to provide to our customers: the ability to evaluate our models in their application. That's something we've been stressing: we want to provide the tools for application makers to be compliant. So we find it a bit unfortunate that we haven't been heard everywhere, and that there's still a big focus on the tech, probably because things are not completely well understood; it's a very complex field and also a very fast-moving one. But eventually I'm very optimistic that we'll find a way to continue innovating while having safe products, and also a high level of competition at the foundation model layer.

Well, let's channel your optimism a little bit. There are very few people who have the ground-level understanding of scaling laws that you, Guillaume, and Tim and your team have. When you step back and look at the entire space of language modeling, in addition to open source, what are the key differentiators you see in the next wave of cutting-edge models, things like self-play, process reward models, the use of synthetic data? If you had to conjecture, what do you think some of the most exciting or important breakthroughs in the field will be going forward?

I guess it's good to start with a diagnosis: what is not working that well?
Reasoning is not working that well, and it's super inefficient to train a model: if you compare the training process of a large language model to the brain, there's a factor of something like 100,000, so there's real progress to be made in terms of data efficiency. I think the frontier is increasing data efficiency and increasing reasoning capabilities. Adaptive compute is one way, and to increase data efficiency you need to work on coming up with very high-quality data, on filtering, on many new techniques that still need to be invented. That's really where the lock is: data is the one important thing, and the ability of the model to decide how much compute it wants to allocate to a given problem is definitely at the frontier as well. These are things we're actively looking at.

This is a raging debate, and we've talked about it a few times before: can models actually reason today? Do they actually generalize out of distribution? What's your take, and what would convince you that models are actually capable of multi-step, complex reasoning?

It's very hard, because you train on the entirety of human knowledge, and so you have a lot of traces of reasoning in there. It's hard to say whether they reason, or whether they do retrieval of reasoning and it just looks like reasoning. I guess at the end of the day what matters is whether it works or not, and on many simple reasoning tasks it does, so we can call it reasoning. It doesn't really matter whether they reason the way we do; we don't even know how we reason, so we're not going to know how machines reason anytime soon. So yes, it's a raging debate. The way you evaluate it is to try to be as out of distribution as possible, for example by working on mathematics. That's not something I've ever done, but it's something Timothée and Guillaume are very sensitive to, because they did it for a while when they were at Meta. That's probably one way of measuring whether you have a very good model. And actually, we're starting to see some very good mathematicians, I'm thinking of Terence Tao, using large language models for some things, obviously not the high-level reasoning, but for some parts of their proofs. So I think we will move up in abstraction, and the question is where that stops. We do need to find new paradigms to go one step further, and we will be actively looking for them.

We've talked a lot about developers so far. If you had to channel your product view and conjecture on what these advances in scaling laws, in representation learning, in teaching models to reason faster, better, cheaper will mean for end users: how will they consume, program, and generally work with models?

What we think is that, fast forward five years, everybody will be using their own specialized models as parts of complex applications and systems. Developers will be looking very closely at latency: for any specific task in the system, they will want the lowest cost and the lowest latency. The way you make that happen is that you ask for the task, ask for user preferences, ask for what you want the model to do, and you try to make the model as small as possible and as well suited to the task as possible.
I think that's the way we'll be evolving in the developer space. I also think that, generally speaking, the fact that we have access to large language models is going to completely reform the way we interact with machines, and the internet of five years from now is going to be much different, much more interactive, because I think this is already unlocked; it's just a matter of making very good applications with very fast systems and very fast models. So yes, very exciting times ahead.

So what would those interaction modalities look like?

That's very interesting. In games, for instance, it's going to be fascinating; we've seen some very good applications. You do need small models, because you want to have swarms of them, and it starts to get a bit costly if they're too big, but having them interact is going to make for pretty complex systems that are interesting to observe and to use. We have a few friends making applications in the enterprise space with different personas playing different roles, relying on the same language model but with different prompts and different functioning, and I think that's going to be quite interesting to look at as well. As I've said, complex applications in three years' time are just going to use different LLMs for different parts, and that's going to be quite exciting.

What's your call to action to builders, researchers, and folks who are excited about the space? What would you ask them to do?

I would take the Mistral models and try to build amazing applications, the way many developers already have. It's not that hard; the stack is starting to be pretty clear and pretty efficient. You only need a couple of GPUs; you can even do it on your MacBook Pro if you want, it's going to get a bit hot, but it's good enough to build interesting applications. The way we do software today is really very different from the way we did it last year, so I'm really calling application makers to action, because we are going to try to enable them to build as fast as possible.

Thank you so much for listening to the a16z podcast. What we're trying to do here is provide an informed, clear-eyed, but also optimistic view of technology and its future, and we're trying to do that by featuring some of the most inspiring people and the things they're building. So if you believe in that and you'd like to join us on this journey, make sure to click subscribe, but also let us know in the comments below what you'd like to see us cover next. Thank you so much for listening, and we will see you next time.
Info
Channel: a16z
Views: 6,482
Id: NhASk7rZsmU
Length: 39min 3sec (2343 seconds)
Published: Thu Dec 28 2023