Stephen Wolfram on AI’s rapid progress & the “Post-Knowledge Work Era” | E1711

Captions
Today on This Week in Startups, Jason is joined by Stephen Wolfram of Wolfram Research. The two have an incredible conversation about AI, including Wolfram launching one of the first ChatGPT plugins, the history of neural nets, how exactly ChatGPT works, how this technology is going to shape jobs in the future, and so much more. Stick with us.

This Week in Startups is brought to you by Cast AI. If you run software in the cloud and it's been a significant cost driver, listen up: Cast AI automates cloud cost reduction, with clients saving an average of over 60 percent. Twist listeners can get a cloud cost audit with a personal consultation free of charge; visit cast.ai/twist to get started. Vanta: compliance and security shouldn't be a deal breaker for startups to win new business. Vanta makes it easy for companies to get a SOC 2 report fast. Twist listeners can get one thousand dollars off for a limited time at vanta.com/twist. And Clumio: when you're building a company, don't let backups and compliance requirements distract you. Let the data protection experts at Clumio help, with immutable, air-gapped backups that put compliance on autopilot. Visit them at clumio.com/twist to start a free backup or sign up for a demo.

All right, I'm really excited for our next guest today. Stephen Wolfram is here, founder and CEO of Wolfram Research. You might have used Wolfram Alpha, which I guess some people call a search engine, but it's obviously much more than that, and he's a prolific author. I really don't need to introduce him all that much. I guess a great place to start would be to talk about what we've seen with ChatGPT. How impressive is it to you, watching 3, 3.5, and 4 come out over the past year? And then we'll get into the plugins and how Wolfram Alpha is plugging into it.

Well, you know, I've been paying attention to neural nets since about 1980; that was when I first programmed up a neural net, which didn't do anything terribly interesting. Then 2012 comes around and deep-learning neural nets start doing interesting things, and we started using them with language and so on. I'd been tracking large language models for a while and they didn't seem that exciting, and then ChatGPT came out and suddenly it was exciting: it was able to do really useful things. I think we still don't completely understand what allowed that jump to occur, but we're getting some idea. Now that the jump has occurred, we can go back and look at why this works and what's really happening.

So for a layperson: when you type a question into ChatGPT, or, I guess Google's Bard is out, we see Poe from Quora, so many different language models are being released, what is actually happening under the hood when we ask it, "Hey, I have some salmon, how should I prepare it, what are my options?" What is it doing behind the scenes?

Well, it's doing something incredibly mundane, and it's very surprising that it can be as human-like in its output as it actually is. Because in the end, what it's doing is saying: you've typed some text, and I'm going to continue that text the way the statistics of text on the web, and the other places it's been trained from, work. It's kind of like, if you were just doing it with letters: if you've typed a "q", there's an overwhelming probability that a "u" comes next, in English at least. And it's a much more elaborate version of that.
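(To make that counting intuition concrete, here is a toy Python sketch. It only illustrates the "continue the text according to observed statistics" idea described above, with a tiny made-up corpus; it is not how GPT-class models are actually implemented.)

# Toy illustration of "continue the text according to the statistics of text":
# count which character tends to follow each character in a small corpus,
# then pick likely continuations. (A real LLM uses a neural net over tokens,
# not raw counts; this only shows the counting intuition described above.)
from collections import Counter, defaultdict
import random

corpus = "the quick brown fox quietly questioned the queen about quantum physics"

follow = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follow[a][b] += 1

print(follow["q"].most_common(3))   # 'u' overwhelmingly follows 'q' in English

def continue_text(seed, n_chars=20):
    """Extend `seed` one character at a time using the observed counts."""
    out = seed
    for _ in range(n_chars):
        counts = follow.get(out[-1])
        if not counts:
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(continue_text("th"))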
Now, you might think you could just count: if you've got some phrase, you just ask how many times it occurs on the web and what the typical next word is when it does. That, in and of itself, doesn't work, because there just isn't enough text on the web. There might be a trillion words you could find between the web and books and things, but that's not enough to give you statistics on what the next word is after "the best thing about AI is" or something; there aren't enough occurrences of that to work it out statistically. So you have to have a model. And the thing that is interesting and surprising is that this particular model, the idea of a neural net, turns out to give you results that are very human-like: when it has to work out how to extrapolate beyond the pure statistics of what's on the web, it extrapolates in a way that's somehow similar to the way humans do it. I think in the end that's because it actually works very much the same way as the wiring in our brains works.

The history of this is that back in the 1940s people knew that brains had neurons. We now know there are about 100 billion neurons in our brains, and they're basically little electrical devices, each one connected to maybe a thousand or ten thousand others; it's a big complicated mass of neural wiring. What people started doing was asking: what's the formal representation of that, the mathematical way to represent it? That was invented in 1943, and in the 1950s and 1960s people asked what it does if you have five neurons, ten neurons, thirty connections between neurons, and so on. It did a few things that were somewhat interesting, but nothing terribly exciting. It turns out that when you get up to something like hundreds of billions of connections between neurons, you can capture a lot more of what actual brains do. And it wasn't obvious what that number would be, how much data you would have to train with, or how big the network would have to be to get human-like behavior. The other critical point is that there is now enough text available on the web that you can figure out the statistics; you can train the neural nets well enough from that text that they produce things which are a good match to the human-like way to continue the sentence, so to speak. It is sort of remarkable that these systems are basically writing roughly one word at a time, and yet, just by the way the statistics work out, the whole essay ends up being coherent.

So three things had to come together. One, the corpus it was trained on, and who knew it, but that wound up being the World Wide Web and all of these different data sets, Reddit and Wikipedia being the obvious ones; the data sets had to grow to a certain size. Then we had to have enough compute and enough storage to process it fast enough. And then the language model had to be written, or built, by somebody. Those are the three components it actually took to make this happen.
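(A quick back-of-envelope calculation shows why pure lookup fails at sentence scale. The 50,000-word vocabulary is an assumed, illustrative figure; the trillion words of available text is the rough number mentioned above.)

# Back-of-envelope: why you can't just look up "what word comes next" for
# whole sentences. Assumes a ~50,000-word vocabulary (illustrative figure);
# ~10^12 words of training text is the rough number from the conversation.
vocabulary = 50_000
context_length = 20                      # a modest 20-word context
possible_contexts = vocabulary ** context_length
words_available = 10 ** 12

print(f"possible 20-word contexts: ~10^{len(str(possible_contexts)) - 1}")
print(f"words of text to learn from: ~10^{len(str(words_available)) - 1}")
# Almost every 20-word context has never been seen even once, so you need a
# model that generalizes (the neural net) rather than a lookup table.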
You know, with the language model there are some clever ideas, but actually, between 1940 and now there were a lot more very clever ideas that didn't work out. What we actually have now, the structure of the neural nets, with a few extra pieces that are kind of important but seem minor relative to the things that were tried in the intervening years, is really close to what people imagined neural nets would be like back in the 1940s. And it kind of worked.

Explain to a layman: what is a neural net, and how does it come up with these connections? It is pretty amazing that it's basically what we thought it was, and we just had to wait for compute and the corpus of training data to reach critical mass, I guess.

Okay, so what is a neural net? In brains, and in neural nets, there are neurons. I'll explain the case for brains; it's rather similar for artificial neural nets. A neuron has all these so-called dendrites, incoming connections that are just pieces of the nerve cell, and a nerve cell is basically an electrical device: when it fires, it produces an electrical pulse which it sends out on its outgoing wires, its neural wires so to speak. Roughly, in the first approximation, in the original way this was set up, when there are enough incoming wires that have signals on them, the neuron says, "Okay, I'm going to fire," and it produces a signal that gets sent on to the next neurons connected to it. Now, the idea of weights, which is a big thing people talk about in neural nets, has to do with the fact that when there are incoming signals on all these various wires, it's not that every signal is treated the same. Each incoming wire has a certain weight, which might be a positive number or a negative number, like a weight of 0.72 or a weight of minus 0.34. Roughly, when there's a signal, you multiply it by the weight, you add all those things up, then there's a thresholding function, and that determines whether the neuron fires and sends data on to the next neurons down the line. That's how it seems to work in brains, and that's pretty much how it works in artificial neural nets.
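(Here is a minimal Python sketch of the weighted-sum-and-threshold behavior just described; the signals and weights are toy numbers chosen for illustration.)

# A single artificial neuron as described above: multiply each incoming
# signal by its weight, add them up, then apply a thresholding function.
import math

def neuron(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Classic hard threshold: fire (1) if the weighted sum is positive.
    fires = 1 if total > 0 else 0
    # Modern nets use a smooth activation instead, e.g. a sigmoid.
    smooth = 1.0 / (1.0 + math.exp(-total))
    return fires, smooth

incoming = [1.0, 0.0, 1.0]          # signals on three incoming "wires"
weights  = [0.72, -0.34, 0.15]      # one weight per wire, as in the transcript
print(neuron(incoming, weights))    # (1, ~0.70): this neuron fires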
Listen, if you run software on AWS, GCP, or Azure, you know how crazy the bills can get; the pricing and uncertainty can make you really anxious, you get that sticker shock. But there is a way to lower your bills, and the best way to do that is Cast AI. They audit and optimize your cloud cost and your performance. Major cloud providers don't do this; why would they want your bill to be lower? They don't want you looking at the bill, they want you paying the bill. Cast AI wants to discover what could be reduced in your cloud bill, so they're on your side: they're going to eliminate the stuff that you pay for but don't use (that happens all the time, people spool things up and forget to turn them off), and they also search for less expensive hosting options within your cloud provider, so you start saving immediately. On average, Cast AI customers save over 60 percent on their cloud spend. If you do this right now, you could be spending 60 percent less, and just think what that will do over the next 30 months and how much runway you're going to add to your startup. That's all capital going back into your business: it's going to your team members, it's going to your marketing, it's going to acquiring customers, and it's going to get you closer to your next round of funding. Every dollar counts in this funding environment, you know that. So before you sign any multi-year cloud contracts or make any drastic personnel decisions, just stop for a moment and check out what Cast AI can do for you. They're going to give you a free personal cloud cost audit, and you get a personal consultation. It's free, so why wouldn't you take it? Visit cast.ai/twist to get started and get your free cloud cost audit today.

Okay, so the first question is: you've got this prompt, you wrote out the prompt, say "the best thing about AI is" or something, and that has to turn into a bunch of numbers that represent the intensities of firing of a collection of neurons. There's this whole idea of embeddings: these are ways to turn words into numbers, and the idea is that if you have a good embedding, words that are similar in meaning will correspond to collections of numbers that are nearby. So something like "elephant" and "rhinoceros" might each be represented by, let's say, a thousand numbers, and the thousand numbers that represent elephant are fairly similar to the thousand numbers that represent rhinoceros, but they're completely different from the thousand numbers that represent "Jupiter" or something like that. So the first thing is you've got to grind the words up and turn them into numbers. Then those numbers determine the intensities of the first layer of neurons, and then you go through a sequence of layers; for ChatGPT I think it's a few hundred, maybe 400 layers. The initial numbers go into the first-layer neurons, they get multiplied by these weights, things fire, you go to the next layer, and the next, and so on, and when you've gone through those roughly 400 layers, you get to another collection of numbers. That collection of numbers gives you, essentially, the probabilities for the possible words that might follow. Then you have to decide which word you're going to pick: the word that was most probable according to the statistics of the web, the one that was second most probable, or whatever. There are many pieces of slightly black magic that go into making one of these systems really work well, and one of them is this: if you always pick the most probable word, then, at least for writing English essays, it tends to seem rather monotonous; sometimes it just repeats itself, all kinds of bad things like that. But as soon as you sometimes pick a word that isn't the top-probability one, and there's a parameter, the "temperature" parameter, that determines how far down the ranking of words you'll pick, that seems to lead to a more lively result.
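(A small Python sketch of temperature sampling, using made-up next-word scores; a real model produces scores for tens of thousands of tokens, but the effect of the temperature parameter is the same.)

# Sketch of temperature sampling over next-word scores (scores are invented).
import math, random

scores = {"cream": 4.0, "cake": 3.2, "pie": 3.0, "Jupiter": -2.0}

def sample_next_word(scores, temperature=0.8):
    # Divide by temperature, then softmax. Temperature near zero approaches
    # "always pick the top word"; higher values pick lower-ranked words more
    # often, which is what makes the prose feel less monotonous.
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    words, probs = zip(*[(w, e / total) for w, e in exps.items()])
    return random.choices(words, weights=probs)[0]

print([sample_next_word(scores, 0.2) for _ in range(5)])  # almost always "cream"
print([sample_next_word(scores, 1.5) for _ in range(5)])  # more variety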
I should mention one other thing that's a critical piece of what's worked in something like ChatGPT: this idea of transformers. The question is, when you have the words that it's already written, what do you do with those words, how do you feed them into the neural net? The one thing you know about those words is that they're in a sequence; they're not just different words sitting in different places. So what happens is the neural net learns, given that we're about to add the next word, that the word three back has this level of importance, the one five back has that level of importance, and so on. Then it combines, in a rather bizarre way, multiple different patterns of how it pays attention to previous words, and it does the whole thing multiple times, and out of all of this comes the result for the next word.

Now, that's the setup for how, given that you're feeding in a prompt, it determines what text to write next. The next question is: you've got this whole neural net and it's got all these weights (in ChatGPT right now it's 175 billion weights), so how do you determine those weights? Any collection of weights will have the property that you can feed words in and some words will come out; the problem is that if those weights are picked at random, the words that come out will just be complete nonsense. The question is how to pick weights so that the thing conforms to the statistics of the web, and that's the process of neural net training. Essentially what you do is say: here's some text from the web, and we know what the next word is, but the neural net doesn't, so have the neural net guess the next word. It might get it right, it might get it wrong; typically it starts off getting it wrong. Then you ask: how would you have to change the weights in the neural net to make the word that comes out be closer to right than the one that actually came out? You do this iteratively; that's the training process, just tweaking all those weights. And there's a mechanism called backpropagation that helps make it not an absurdly difficult mathematical problem to figure out how to tweak the weights. So you're training it on a piece of text, masking out the words at the end, and adjusting the weights so that the words it produces at the end are the ones that were really there when you take the mask off.

So you find some high-quality piece of text, here's a Wikipedia page, let's assume it's high quality and it's been vetted, and it just starts reading it. It gets a word wrong; you don't punish it, but you tell it, "Hey, you got it wrong," and then when it gets it right it gets a cookie, or it gets punished in some way?

Well, the trick is that it's like an evolution process, like biological evolution: you're gradually adapting it to get closer and closer to the right answer, and there's a systematic way to do that, and that's what the training process ends up being.
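(Below is a deliberately tiny Python sketch of the training idea: show the model text with the next word masked, let it guess, compare with the real word, and nudge the weights. It trains a toy bigram model with plain gradient descent rather than backpropagating through hundreds of layers, so treat it as the intuition only.)

# Minimal sketch of "tweak the weights so the predicted next word gets
# closer to the real one", on a toy vocabulary and a single sentence.
import math, random

vocab = ["the", "best", "thing", "about", "ai", "is"]
idx = {w: i for i, w in enumerate(vocab)}
text = "the best thing about ai is".split()

# One weight per (current word, next word) pair, started at random.
W = [[random.uniform(-0.1, 0.1) for _ in vocab] for _ in vocab]

def probs(cur):
    row = W[idx[cur]]
    exps = [math.exp(s) for s in row]
    z = sum(exps)
    return [e / z for e in exps]

lr = 0.5
for step in range(200):
    for cur, nxt in zip(text, text[1:]):
        p = probs(cur)
        # Cross-entropy gradient for a softmax layer: p - one_hot(target).
        for j in range(len(vocab)):
            target = 1.0 if j == idx[nxt] else 0.0
            W[idx[cur]][j] -= lr * (p[j] - target)

p = probs("ai")
print(vocab[p.index(max(p))])   # after training, the guess after "ai" is "is"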
And the fact is, it's trained on a trillion words. If you only trained it on a million words, it would be able to learn some things, like that "u" follows "q", but it wouldn't learn the things that make it seem like it's producing a meaningful essay or something of that kind. That seems to require an amount of text that's roughly what we humans have produced and put out in publicly accessible form. It's also the case that the number of weights you need is roughly comparable to the number of words in your training set. Nobody really quite knows why that is; that's another sort of random fact, so to speak.

Listen, it's 2023, the macro picture is a little shaky, it's uneasy out there, and tech is getting hit super hard. As such, you cannot afford to lose sales over silly stuff like not having your SOC 2 right. Now, if you are unsure about your SOC 2, you need to check out Vanta. Vanta makes it incredibly easy to get and renew your SOC 2; on average, Vanta customers are SOC 2 compliant in just two to four weeks, compared to three to five months without Vanta. And they've partnered with over two dozen audit firms who have been trained to file SOC 2 reports directly within Vanta. This is a total no-brainer: a bunch of my portfolio founders have used Vanta and they've had amazing experiences, and if you don't have SOC 2 compliance you can't close major customers, and one major customer could be the difference between your startup thriving or going away. So get it done right now, and because you listen to this podcast you'll get one thousand dollars off. Think about it: one thousand dollars off at vanta.com/twist. Write that down, put it in your notes, v-a-n-t-a dot com slash twist, for one thousand dollars off your SOC 2.

When we compare this to what's happening in a human's brain, and I know we don't actually have the answer to this yet, we don't understand exactly what consciousness is, we have theories and ideas: when a human is asked, "Hey, what are the most popular desserts in America?", and when the neural net is asked, when ChatGPT or whichever version of it is asked, "What are the most popular desserts in America?", and you look at what happens in the human and then at what we know is happening in the software, how close are they? Do we actually think we're emulating what happens in a human brain, or have we developed a new process that is similar but maybe not exactly the same? If there are two circles, a human's consciousness and question-answering versus a computer answering, how much do they actually overlap?

Well, I think what's happening in LLMs is fairly close to what happens in brains, though there are some things missing in current LLMs. In a computer, there's typically the CPU or GPU processing a lot of data, and then there's memory, and most of the time the memory of a computer sits doing nothing; it just sits there storing what it's storing. In human brains, every neuron both computes and stores things, so we have a little bit of an advantage, at least for right now, in that regard. Also, the way something like ChatGPT works is that it just feeds forward: you feed it the prompt, it ripples through the neural net, and it says, "Okay, the next word is this." In our brains, we're pretty sure there's some kind of feedback loop, and maybe that feedback loop is similar to the one that ChatGPT, or an LLM, effectively has, where it sees the prompt it's
got so far, it adds a word, that becomes the new prompt, and it can effectively feed that back.

Right, so when we ask this question, "Hey, what are the most popular desserts in America?", the first thing that comes to mind is ice cream, and then ice cream triggers me to think, well, apple pie of course, and then apple pie and ice cream trigger whatever the next thing is.

Yeah. I think the thing to understand is that when it comes to a computational process like how brains work, there's a lot of detail that in the end doesn't matter. It's like saying: if you want to fly, do you need feathers, do you need to flap, or do you just need wings? It turns out you pretty much just need wings, whether it's a drone with rotors or little wings that happen to go around; the details of feathers and so on don't turn out to matter. In the case of brains, our brains have a lot of detailed stuff in them, things like the glucose that's supplying energy to the neurons, which is obviously different from the electronic case, but at the level of computational architecture I think it's surprisingly close. And what's sort of remarkable is that people have known roughly what this is like for, what, 80 years or something. That part is sort of unsurprising.

Now, I would say there's a lot more to say about how a computer works as compared to how a brain-like neural net works. The thing about computers, and computation in general, is that it can go much deeper than what a neural net can do, because a computer can, for example, go into a very tight loop figuring out the result of a computation. None of that is happening in something like an LLM: the neural net just ripples through, saying "what's the next word," and so on. And there's this concept, one that I invented in the 1980s, called computational irreducibility, which is a feature of deep computation. You say: what is the computation? You have certain rules and you're going to apply those rules over and over again and see what the results are. The rules might be the way the CPU of the computer is set up, they might be some rules about black and white squares, or whatever else, but the essence of the computation is that you just keep applying these rules over and over again and see what comes out. The question then is: there's a certain number of times you have to apply the rules to get a certain result, so can you jump ahead and see what the result will be more quickly than just following all those rules? What turns out to be the case is that there are many situations where you can't jump ahead. That means you have to do the computation: if you want the result, you have to actually go through the steps of the computation.
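(A standard illustration of computational irreducibility, in Python, using the "rule 30" cellular automaton that Wolfram often cites: the rule is trivial, but as far as anyone knows you have to actually run the steps to find out what a later row looks like.)

# Elementary cellular automaton, rule 30: each cell's new value depends only
# on itself and its two neighbors, yet the pattern that emerges has no known
# shortcut; you have to run step after step to see what it does.
RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1                      # a single black cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)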
When you have computational irreducibility, the neural net is basically too shallow to deal with it. It can go a certain distance. Like, if you ask ChatGPT right now to match parentheses, you've got open paren, open, open, close, close, open, close, whatever, and you're trying to get it to make sure the number of close parens matches the number of open parens, it can do it up to a certain point and then it fails. It doesn't say this as such (actually, I haven't asked it why it fails; maybe I should, it might have something interesting to say), but basically it has just run out of layers of neural net; it just can't represent that deeper computation. So there's this world of computation which includes irreducible computations, computations that you just can't shortcut, where you have to do the computational work, and then there are these shallower things, which are what we humans are using most of the time when we're generating language; probably a lot of the thinking that we do works that way. So there's a difference between how you can do things in principle with computers and how things work in something like an LLM. And you might say, do you care about irreducible computations? Well, for example, many things that go on in nature, in the physical world, if you want to work out how they work, you kind of have to do irreducible computations. Those things weren't made for humans, so to speak; our language is sort of made for humans in some sense, but nature just is what it is, and working out what it's going to do can involve these irreducible computations. Then it's up to us to try to simulate them in some way, or to try to figure them out.

Yeah, let's talk about emergent behavior here. Are we projecting onto it that it's learning in some way, or evolving in some way, at this point? Or is it truly, with so many people using it now, and the reinforcement learning that's happening, and all these plugins plugging in, do we get the sense that the model is learning at some faster pace now, and that there's this concept of "hey, maybe we're not in control of it"? Do you believe that's the moment we're in right now? Because, as you were saying earlier, this thing is kind of surprising us, so I'm wondering what the next surprise will be.

Let's pull that apart a little bit. First of all, emergent behavior: one of the things that typically means is that you put in certain rules for how a system works, and what the system does is much more complicated than the rules you put in. That's what happens in irreducible computations all the time, and that's probably the thing that makes nature seem so complex to us: it's full of these irreducible computations where the rules are quite simple but the actual behavior seems to us very complicated. Now, the question of what LLMs are doing and to what extent that's an emergent thing. First of all, just to clarify one point: in the present state of things, the actual chat sessions that people are having with these LLMs, yes, they're being stored, and they will be used for training, but it's not an immediate loop; that's not something that's been done technologically as an immediate thing, it's more of a long-term process. So it's not like every person who types into it is making it smarter and it's going to take over the world as a result.

My chat with it is independent of your chat with it; it hasn't threaded the chats together, it's just applying the
model in each of our individual threads but if we both started asking it about desserts it wouldn't suddenly be like oh wow two people on two different coasts of the United States you're talking about dessert and let's pull that all into our knowledge but that is coming obviously yeah yeah but that just doesn't happen to be here yet I mean that's just a technological that's a that's a technological privacy you know policies Etc et cetera et cetera kind of issue but that's um I think an interesting one actually I just I've never heard anybody actually have this conversation but what is the ethical right thing to do if a hundred people right now are talking about you know uh this new pandemic that they're seeing and it's trying to put together that information to maybe warn us a pandemic is actually happening there's 100 people talking about it in this region and they're spaced out at this distance and this is the qualifier of of a pandemic starting right I mean this is something obviously one's already seen you know from things trending on various you know social media and you know such search queries and things like that that's already the thing a bit like that I mean the question of uh you know how how private is your particular chat session and your particular you know I don't know psychological counseling session with the with the chat bot or whatever else who owns that right well versus what I mean it's the same thing that happens in in your medical stuff all the time which is you know to know in aggregate what the results what what happens medically to lots of people is a huge societal value let yet you want to keep the individual records of individual people private so this kind of you know you want the aggregate to be something that can be mined but you don't want the individual things to be separately minable and that's a whole whole technical kind of worms um about how you can do that and to what extent you can do that and so on I think the same thing will will probably happen here but I mean coming back to this question of um of sort of what why does the llm work what is it really doing in what sense is it emergent what's you know what's going on I think the thing that is probably the for me sort of the the biggest kind of aha feature of what we've seen with chat apt is is the fact that probably language is not as complicated as we thought it was I mean language is kind of the Pinnacle of our species is uh sort of collective achievement in some sense and so we think it's a very sophisticated complicated thing but we already know there are certain rules of language like we know you know syntactic grammar we know you know a typical sentence has a noun a verb an ad you know a noun it might be an adjective and a noun things like that we know this kind of these kind of structural regularities to language well I think what's happened is that that uh in these llms what's been discovered is that there are many more regularities in language that we had sort of classified before so there are many it's kind of like language we know from sort of the structure of sentences about nouns and verbs and so on we know that sort of a construction kit the sort of puzzle pieces that you can put together you can't go you know verb verb verb that's not a possible sentence you know it's it's got to be a noun verb noun type thing or something like that so we know that there are these sort of puzzle pieces you put together and I think what's happened is that there are sort of what's been discovered by llms 
in fact is that there are uh a whole collection of other puzzle pieces that don't just deal with parts of speech but they deal with little fragments of meaning and language and there are things that can be put together meaningfully and they're ones that can't be put together meaningfully and you know we have we have one example historically of where uh this kind of thing was discovered was discovered two thousand years ago which is the idea of logic which was presumably discovered by Aristotle and you know in a sense Aristotle was doing a humanized version of sort of machine learning because what he did presumably is he took all these arguments that people made all these pieces of rhetoric and so on and he said what's the pattern of how arguments work you know if you say all men Immortal Socrates is a man therefore Socrates is Mortal that's a certain pattern you don't have to be talking about Socrates you don't have to be talking about mortality you could be that you could substitute in any kind of any kind of thing there but that structure is a meaningful structure that you can put into something that you say and he lifted from that this kind of idea of logic of you know ands and oars and Knots and you know this implies that and so on and uh that that that becomes one of these kind of semantic regularities of language that's one that we know there are a bunch of others I think and the llm has basically found them and we've been a bit negligent in the last couple of thousand years not looking for these things I mean there was a little burst of interest in the 1600s but it kind of died off and then people had I think people kind of thought it was too hard and they were a little bit proud of the fact in the 1950s it became clear that this kind of grammatical structure of nouns and verbs and things how that worked in lots of different languages and people kind of excited about the way that that it had been figured out that that worked and so they didn't really look for these other things in in a serious way but I think uh so that that's kind of the um and I think that's what's sort of a science fact that was discovered and in the end once you know that that's the science fact it's sort of puzzle pieces being fit together it all seems a bit less miraculous so to speak um and so we're figuring out or if the models figured things about language that we just maybe haven't been looking for and we as humans with language as the Pinnacle of our existence whether it's poetry or science or you know any number of Arts or debates it's kind of how we mitigate the entire world it's how we make decisions these debates that occur presidential debates congress senate at your dinner table who are you going to vote for how are you going to raise your kids we maybe have valued this as something super magical but with the Corpus being actually um kept somewhere the internet and then uh the ability to process it so quickly with these new gpus it may have just figured out hey this actually isn't all that complicated right I think wild what we learn is that sort of the the essence of meaning is something that is which which is the thing that we represent with language as sort of a calculus in a sense a a formal structure of how meaning works now the fact is some aspects of that well somebody like me or perhaps me in particular you know my my lifelong project basically uh has been sort of figuring out how to make things computational and one of the things that you know is my kind of long-term project is to make a 
computational language, a language that can represent things in the world in a precise, formal, computational way, and that's what the thing we call Wolfram Language is. It started off as Mathematica and evolved into Wolfram Language over the last 35 years. The idea is to take things in the world, like, I don't know, two cities, where you're asking what the distance is between them, those kinds of things, and have a precise formal representation of them that is both writable by humans, readable by humans, readable by computers, and executable by computers. That's been my last 40 years, basically: I've spent it building up this kind of language to represent things computationally. In a sense, the language represents a lot of kinds of things that are very useful to talk about in the world; it doesn't happen to represent everyday chit-chat-type conversation, and ChatGPT has sort of added that as another element that we can now see fits together with it.

But you're going to say no, you're going to say no. I mean, people ask, for example, does ChatGPT understand what it's talking about? Well, it just has these rules that say how the next word goes in. That's probably how we work too, and you can ask whether we understand what we're talking about, so to speak. But it is, in a sense, doing a very shallow computation. The idea of computational language is that once you have something represented in computational language, you can go all the way: you can compute whatever you want with it, you can do irreducible computations, all sorts of things. The thing we did a dozen years ago with Wolfram Alpha was natural language understanding: you go from small fragments of human language to computational language. And once you can do that, that's a sense in which you have true understanding. You've got natural language, you turn it into computational language, and once it's computational language you can compute anything you want from it. That, in a sense, is true computational understanding, and it's a different thing from what a raw LLM deals with. And that, by the way, is what the plugin we just worked on with OpenAI, the Wolfram plugin for ChatGPT, achieves: it connects this LLM layer to what we might think of as the computational bedrock that one can compute from. There are all sorts of implications of that.
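(For a sense of what "small fragment of natural language in, computed answer out" looks like, here is a Python sketch against Wolfram|Alpha's public Short Answers API. The endpoint and parameter names follow Wolfram's public documentation, but treat this as an illustrative call rather than the plugin's internal code; WOLFRAM_APPID is a key you would supply yourself.)

# Sketch of going from a small natural-language fragment to a computed answer
# via the Wolfram|Alpha Short Answers API (illustrative, not the plugin code).
import os
import urllib.parse
import urllib.request

def ask_wolfram_alpha(query: str) -> str:
    params = urllib.parse.urlencode({
        "appid": os.environ["WOLFRAM_APPID"],   # your own app id
        "i": query,
    })
    url = f"https://api.wolframalpha.com/v1/result?{params}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

print(ask_wolfram_alpha("distance from Los Angeles to London"))
# Returns a short, precise, computed answer (a number of miles), rather than
# a statistically plausible continuation of the question.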
Did you know that today is a major holiday in the tech world? That's right, it's World Backup Day, March 31st. So we have a few reminders and tips from the folks at Clumio. First, make sure your data is protected. Remember, just because your data is in the cloud doesn't mean it's automatically backed up, and your data is your responsibility; think about that, it's not your cloud provider's, it's yours. Second, don't let backups and compliance requirements distract you; your engineers should be focused on product innovation, not compliance audits. Third, take control of your cloud costs; you'd be surprised how much of your storage comes from backup snapshots and replication with your expensive cloud provider. Clumio will help you with all three of these points with a turnkey data protection service that's air-gapped, immutable, and cost-optimized. They have saved customers, listen to this, over 30 percent on backup costs, putting security and compliance on autopilot, so you're going to save money and you're going to save time. What's better than that? I love these kinds of services: whenever I'm looking at a business to invest in, I ask, does it save people time, does it save them money, and does it make them laugh or entertain them? I'm not sure Clumio is going to entertain you, but it is going to save you time and money, two of the three great value propositions for consumers. All you've got to do right now is go to clumio.com/twist to start a free backup or sign up for a demo. That's clumio.com/twist, write it down.

Now, if you were asking, "Hey, what's the distance between these two cities?" or "What are the similarities between this elephant and that rhinoceros?", oh, both mammals, both gray skin color, both formidable, whatever the words are that come up, ChatGPT actually seems to get things wrong; if you ask it for numbers or equations, it doesn't seem to get them right very often. So is the idea here that ChatGPT can start discussing, and maybe summarizing, the difference between these two things, or the distance between these two cities, but then Wolfram can actually give the correct answer, the computation?

You know, we didn't know how well this would work, but it actually works rather well. Very conveniently, in the technicalities of the plugin, it has two different endpoints inside it. One of them goes to Wolfram Alpha, and Wolfram Alpha takes natural language input, small fragments of natural language; the other goes to Wolfram Language, which is a precise computational language. Sometimes what ChatGPT is doing is taking this big lump of text that somebody might have given as a prompt, or the thing it's trying to write, and it does surprisingly well at crispening that up to the point where it's either a small fragment of natural language that can be sent to Wolfram Alpha, or a piece of Wolfram Language code that can be sent to the Wolfram Language interpreter. A tricky thing that happens in both cases, particularly the Wolfram Language case, is that sometimes it gets it roughly right but not exactly right. But then we actually run the code and we can see what happens, and then we tell ChatGPT, "Well, that didn't quite work, why don't you rewrite it?" And it does. And so it goes through several rounds.
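(Here is a rough Python sketch of that two-endpoint, run-and-retry workflow. Every function in it, call_llm, ask_wolfram_alpha, and eval_wolfram_language, is a hypothetical stand-in, stubbed out so the loop runs; the real plugin's interfaces are not shown in the conversation and are not reproduced here.)

# Sketch of the workflow described above: the LLM boils the request down to
# either a natural-language fragment (Wolfram|Alpha endpoint) or Wolfram
# Language code (code endpoint); if evaluation fails, the error goes back to
# the LLM so it can rewrite its attempt. All names below are stand-ins.
from dataclasses import dataclass

@dataclass
class Attempt:
    kind: str        # "natural_language" or "wolfram_language"
    text: str

def call_llm(prompt: str) -> Attempt:                  # stand-in for the LLM
    return Attempt("natural_language", "distance from Los Angeles to London")

def ask_wolfram_alpha(query: str) -> str:              # stand-in for endpoint 1
    return "about 5,400 miles (illustrative value)"

def eval_wolfram_language(code: str) -> str:           # stand-in for endpoint 2
    raise RuntimeError("syntax error")

def compute(user_request: str, max_attempts: int = 3) -> str:
    attempt = call_llm(f"Turn this into a Wolfram query or code: {user_request}")
    for _ in range(max_attempts):
        try:
            if attempt.kind == "natural_language":
                return ask_wolfram_alpha(attempt.text)
            return eval_wolfram_language(attempt.text)
        except RuntimeError as err:
            # Tell the model what went wrong and ask it to rewrite its attempt.
            attempt = call_llm(f"That didn't quite work ({err}); please rewrite: {attempt.text}")
    raise RuntimeError("could not get a working computation")

print(compute("I'm wondering how far it is from LA to London"))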
Fascinating. So just a simple thing like "what's the distance": if you ask Wolfram Alpha, and everybody listening probably knows this, you ask the distance between Los Angeles and London, it's going to give you a really tight, broken-down answer. But if somebody were asking that in a less precise way, the plugin could then cut through the wordiness?

Yeah, if you wrote a very poetic description of what you wanted. Wolfram Alpha was built for people who have a question to ask; they write it in a natural-language way, but they're direct, they just ask the question. They don't say, "I'm having a whole thought about going from here to there on an elephant, and I'm wondering how many steps the elephant has to take," and this and that and the other. What ChatGPT does pretty well is boil that down into something which turns into the distance between this and that, divided by the stride length of an elephant.

Wow, that type of thing.

I haven't tried that particular thing, I'm not sure about that particular case with elephants, but the other part, definitely. And there are other things that can be done once you're computing: you can have ChatGPT call the Wolfram plugin to generate graphics. We have lots of real-time feeds of data, so do a histogram, do a chart, whatever it happens to be, the weather in some particular place, the current stock prices, or whatever else. So it has a precise computational way to figure out what to say about the world, so to speak. What the LLM does very well is take this complicated mass of natural language, boil it down into something precise, and then take back the results; sometimes it'll just show a picture that comes straight from us, but sometimes it'll knit the results back into the essay that it's writing. There's another workflow that's really quite interesting right now, which, you know, we only learned this workflow in the last two weeks, so it's very fresh.

It's crazy right now, right? It's amazing how, when everybody in the world becomes enamored with something and says, "Let me try to break it, let me try to fix it, let me try to stress-test it," it's really incredible what the hive mind of consumers and scientists and developers and everybody in between does, trying to break this thing or jailbreak it.

I think the big thing right now is to understand workflows, understand use cases, and understand how to think about what to do. So, for example, here's something people have discovered: you can have it give some result and then say, "Do you think that answer is right?", and it turns out it's better at answering that question than it is at generating the answer in the first place. Nobody knew that was going to be the case. The other thing that's just totally bizarre is the whole business of prompt engineering. If you look at the prompt for the Wolfram plugin, which we've been steadily adapting, it's full of pleas we put into the sentences, and that makes a difference; we put in "don't do this, do that, here are examples of what you should do." The fact that any of this works is really remarkable, and I think we're pretty far away from being able to go from a theoretical description of how the neural net works to saying, "This is how you should put commas into your prompt," or whatever. That's a big distance at this point. Right now, prompt engineering is a bit like animal wrangling: you don't really know; there's this animal that's flapping around, and it turns out if you pull on its ear it will do this, and we don't really know.

You're trying to get this mustang and you're trying to tame it.

Yeah, be careful.

But yeah: walking up to it quietly, taking steps, trying to corral the mustang and get a saddle on it. Maybe it's going to work, maybe it's not. It's
pretty amazing I always think and then I'd love to get your thoughts since you've you know basically helped create this category here on the impact on society humans uh and and how quickly that happens because this feels qualitatively different the pace that this is happening then I don't know automation of software there was this concept oh my God you know TV comes out everybody's going to get a PhD because you can just turn your TV on you have all this free time you just turn it on you're going to learn or oh Wikipedia came out uh the internet's out and oh mit's putting every course online Coursera you know this one start X everything okay everybody's gonna be able to go to MIT or Harvard it turns out well human motivation is such that maybe everybody doesn't want to take the time to take all these courses that are freely available on YouTube today which is just mind-blowing for a gen xer who thought wow whatever they're teaching at MIT in Harvard that's like locked up with this Ivory Tower now it's literally available for free and it has 300 views on YouTube right now instead of three billion so what happens now in society uh realistically when a whole swath of things that people are getting paid for copywriting is one example journalism is another example certain aspects of Journalism research and then design I was reading a Reddit thread recently somebody said they went from spending three or four weeks to make a character uh in a video game and now it takes them two or three days but they kind of feel bad about it because it's not as artistic but they're going to be able to make characters in games you know already ten percent of the work effort which means like you're not going to need as many designers logos whatever it happens to be is this does this concern you or are you in the camp that humans always find more work to do because this seems to be moving at a faster Pace that anything we've ever seen you know I I actually just sort of was curious so I kind of studied what happened to jobs in the US over the last 150 years and you know things happen that are fairly dramatic and you know in technology but actually it takes a generation before it fully Works its way through the system and you fully see the effects but I think here the things that uh happening is there's there's a lot of kind of cases where there's sort of semi-boilerplate text that people generate or the selling semi or there's text there's so semi-boilerplate that people have to understand there's a bunch of people who do that and this is a you know this is a really good way to do it so how will it actually work I mean so let's say you are filing some you know you're writing some proposal you're filing some you know compliance type thing whatever else you have certain points that you know you have to make but dressing that in a whole giant essay is something that you used to have to do it used to take a lot of human effort to do that yeah you just say here are the main points I want to make go make an essay out of this using sort of background foundational facts and it'll make an essay and then goes over to the other you know whoever's going to read it and well they might actually read it as a human or they might feed it to their llm which will sort of grind it down and they will have given their llama prompt that says look for these kinds of things and so they will extract the information that they want from that which again might turn into you know three bullet points and so it's kind of like so what this is is 
it's it's like an interface it's like we had you know graphical user interfaces I I just started calling these Louise linguistic user interfaces yeah I like it Luis it's Works um um because I mean it's kind of like the you know it is a convenient transport medium you know an essay is a convenient transport medium for information particularly when the two sides aren't really quite aligned I mean if it's like fill out this form check this box check that box then then you can easily sort of transfer it from one side to the other but when when you know each side doesn't really quite know what the other side is looking for this is a convenient way to transfer information now that that means there are there are people and professions that have been that are quite kind of knowledge worker type professions that people have assumed are like oh nobody's going to automate these knowledge worker type professions yeah it's not possible right right but that's too much human judgment yeah right turns out turns out that that's not true and it turns out and so you know and one of those areas is well for example one area is programming where you know I have to say if people had paid attention to stuff we've been doing for the last 40 years they wouldn't be in this particular pickle because you know the whole idea of the computational language that we've been building is that all of that boilerplate stuff that exists in lower level programming languages we already automated that you know when you say you know Geo distance or something between two that is we've already automated all that stuff about you know pulling that long from databases and figuring out you know the the you know spherical geometry of blah blah blah blah blah um all that stuff which you know you write it in I don't know Java python whatever else it's a big slab of code or maybe you pull it out of some Library here that doesn't you know interface with some Library there this is this has been kind of the the low level kind of manual labor type programming that now you know it's not to say that there aren't millions of people who use our computational language so I that this is um and none of this applies to them because they already know how to kind of do things at this higher level but there's an awful lot of programming that has been done using programming languages and you know one thing to to make clear is that you know what is a programming language it's a way of kind of letting a human telecomputer in the computer's terms what the computer should do you know the computer has a memory you can make a a you can have variables you can do this but those are things that are sort of in the computer's terms kind of the the whole idea that that um you know I've perceived for the last 40 years or so is to have a language which is kind of a a bridge between how we humans think about things and what can be done computationally so we're kind of representing things at a human level rather than at the level that happens to be convenient for the computer there's a lot more work for the people who build the language to do that but that's what I just spent a lot so doing when we look at it is this going to be a slow change like I remember when I got my first Loft in New York and it was the 90s and they had manual elevator operators they would take you to your floor in this old building and I remember the you know 10 years later they yeah they got rid of them and they put in automated elevators elevator operators as a concept took 50 years to kind of 
deprecate over time I think there's like a couple left in America the Hotel Del Coronado in San Diego famously kept their old elevator and their elevator operator because it's Charming or whatever ice Cutters I think I've been in that hotel yeah maybe I even know it's like an old guy who's sitting in there it's the one from Some Like It Hot the famous film and it's um it's quite Charming but ice Cutters refrigerators switchboard operators you know operators generally lamplighters all this stuff has gone away but it took time so when we look at this does this feel like programmers are going to become 10 times better and yeah we'll just get more accomplished in the world why does it feel like this is going to wipe out swaths of jobs really fast and then what do you think that does societally I think some things will go fairly quickly in this particular case not only because the technology exists to do it but also because the sort of societal attitudes and and sort of oh this is going away so we'll make it go away even quicker because we can kind of already see the future um my guess is that some things will happen reasonably quickly but you know it's always the case that things I don't know in my life I've I've had the good or bad fortune or something to invent a bunch of things that end up being many many decades ahead of of the current time so to speak and so then it's maddeningly slow how how quickly you know modeling is slowly things actually get absorbed I think this one because of the kind of momentum that exists right now I think I think some of it will go quite quickly now you know what does that mean you look at the pattern of what's happened in all previous cases let's say telephone switchboard operators you know the fact that telephone switchboard operators existed was a consequence of the fact that telephones existed which was a technological advance but then automated switching came in and you didn't need a manual telephone switchboard operator but what did automated switching do well it enabled basically the Telecommunications industry and that has generated just an immense range of jobs I think one of the things you see seems to be the case is that you look at America you know in around even 1900 was still uh well 1850 it was it was more than half agricultural work yeah and you know the pie chart of what people did was very you know it was the big wedge of Agriculture and then a few other wedges and they were all quite big if you look at you know today it's much more sliced up that you know the pie is in much smaller pieces and I think that's a thing that one can expect to see as sort of more automation happens more things become possible there are more niches that people can fill so to speak and I kind of think that what tends to happen is when one of these sort of steps of automation happens it enables things that and it then enables more diversity in what people can do it isn't because people aren't all just pushing you know pushing the plow or whatever for for agriculture it's like okay now we've we've got that done so now let's look at what's possible and I think the thing to realize about fascinating the the you know kind of the interplay between you know AI automation humans you know you've got a raw AI it does its neural net thing or whatever else but if you say to the AI you know what is your goal in in existence so to speak it you know it has no intrinsic answer to that question we humans think we have an intrinsic answer where does that answer come from it comes 
It comes from the whole web of history, it comes from our biology, et cetera, et cetera. But we are pretty convinced that we have definite goals: we want to do this, we want to do that. Those goals tend to be things that intrinsically come from humans; how the goals get achieved is where the AIs and automation come in. What I think you see happening is that when there's a big enablement of things, what becomes important is what you can do with that enablement. We were talking before about use cases for LLMs: okay, now we have LLMs, now we've got to figure out which use cases we care about, and that's an intrinsically human activity. An LLM could just go spinning random words out, and, in some weird anthropomorphizing of the thing, it might have a very happy time just spinning random words out; humans look at it and say, "what the heck is that? We don't care about that."

Yeah, because you need a jockey; you're going to need a pilot.

Right. You need to define what the direction, what the objective is. So what you see over and over again is that something gets automated and that enables a lot of other opportunities; that's been the pattern. Now there's the question of whether that will come to an end, whether everything that could be invented will eventually have been invented. We actually know, from theoretical science considerations related to computational irreducibility, that in a formal sense it will never be the case that there's no more to be invented. There will always be unexpected things that you can figure out, that you can invent. So in principle there's no limit to what could be invented. The question is whether we humans will say, "hey, we're done now, everything we care about has been invented, we're good from here on out." Actually that wouldn't work, because the natural world will continually throw up unexpected things that we'll have to respond to, so we won't be able to settle into that "oh, we're done now" state. But in a situation where we could say we're done, then yes, it could be the case that everything we care about has been automated by AIs and other forms of automation, and then there would be nothing for the humans to do anymore. For both theoretical science reasons and practical reasons, though, I don't think that's what's going to happen.
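Computational irreducibility isn't unpacked in the conversation; the standard illustration from Wolfram's own work is the Rule 30 cellular automaton, where, as far as anyone knows, the only way to find what row n looks like is to actually run all n steps. A minimal Python sketch (the width and step count are arbitrary choices):

```python
# Rule 30: each new cell is left XOR (center OR right). Rows are produced only
# by explicitly running the rule; no known closed-form shortcut jumps to row n.
def rule30_rows(width=63, steps=30):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell in the middle
    for _ in range(steps):
        yield cells
        cells = [
            cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
            for i in range(width)
        ]

for row in rule30_rows():
    print("".join("#" if c else "." for c in row))
```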
When we look at this paradigm shift, we had agriculture, factories, knowledge work, and now knowledge work seems like it's going to be automated. We put robots into factories, we put robots and automation into the fields, so we don't need as many humans involved in those things, and in knowledge work we probably won't need as many humans involved either. So if this is a true paradigm shift, what is the post-knowledge-work era going to be? Is it going to be prompt engineering? What do we call this new era, where anybody can talk to a chat interface and create a product or service in the world that maybe solves some really important or pressing problem?

Look, I think it really reflects on what we humans do and what we're special at doing, and that might be thinking. One of the things about knowledge work, it turns out, and education sort of directs people this way, is that there are procedures for doing lots of kinds of knowledge work. Yes, it requires analytical steps, but "big picture, think about stuff" is not what the typical knowledge worker is trained to do. I think that's a great intrinsic human thing, to just globally think about stuff, and the value of that is going to go way up. The value of super-specialized, siloed knowledge is going to go down, because you can drill pretty deep into a silo using automation; if you know the overall way to think, getting deep into that silo is much easier than it once was. So I tend to think the things that are in a sense more creative, more arbitrary, more human-chosen, like deciding we could go in this direction rather than that direction, we could come up with this cool routine or whatever that entertains people, those are much more open-ended. They're not things where we say, "here's the end point, now go fill in that end point in the best way." Quite a bit of knowledge work has ended up being something where we know more or less what the end result is, we know where we're going, now just fill in the details.

Like a journalism job or a legal job, it's kind of rote. Okay: who, what, when, where, why; talk to a couple of people, we got one side of the story, see if you can get the other side, hit publish. Okay, lawyer: what do you want this agreement to say, what do you want to happen if people break the agreement, okay, we're done. And what you're proposing is that maybe this next era would be the creative era of humanity, or maybe it's judgment-based. I'm trying to come up with the right word, but it seems like an era where human judgment and creativity is the driving force, not rote knowledge work.

I think that's a good possibility. I suppose I tend to be generically an optimist, and I look at the pie chart getting more and more fragmented, and I think about all sorts of different people who have all sorts of different skills. In my own case, I've spent my time doing science and computation and some technology, and companies and things like that, and if I'd lived at a different time in history, the things I've really had a good time doing just wouldn't have been available to do. Computation, and science around computation, wasn't part of the pie chart of things you could do back in 1850. So in my optimistic view of things, there are more pieces of the pie, there are more different things that can be done,
and for different people who have different interests and skills, there's more that can be explored. And by the way, there will be lots and lots of new job categories; we just got "prompt engineer."

Right, a podcaster, having conversations professionally in a vertical you're passionate about, and the fact that I get to do that for a living. I mean, there was Charlie Rose, there was Oprah, but the idea that there are now probably a hundred thousand people making a living just doing podcasts, and hundreds of millions of people listening to them, is mind-blowing. It's a little slice of the pie that nobody ever considered.

That's right. We just got prompt engineers; we're going to have AI wranglers, we're going to have AI psychologists. You're going to have a whole bunch of new categories, and I think that is incredibly typical of what you see happening with innovations, particularly automations.

Because we're watching this all happen in a chat interface. Not scary at all. But I guess people just wrote and signed a petition: hey, maybe we should pause this. I don't know if you saw it, this Future of Life petition. I think it was largely ceremonial, because I don't see anybody stopping their work for six months.

I don't see anybody stopping. The cynic would say it's a list of people and places that feel like they're getting left behind and want everybody else to stop for a while while they catch up.

That would be the cynical take. Or just: "I know this isn't going to happen, but I want it on record that I said this might have been a good time to be more thoughtful." But let's talk about being more thoughtful. Do you think we're getting to a point where unintended consequences are a possibility? Again, the pace, you and I haven't seen anything like it.

There are always unintended consequences, of almost anything. Who thought that doing research on virology would lead to this or that or the other thing, good or bad?

Pandemics, for example. Don't say it, or this podcast is going to get flagged, if we actually speculate that a human created COVID. Seems probable, right?

Well, maybe it was an AI. No, I don't know, maybe it wasn't. At that point the AI wasn't quite ready to do that; now it would probably be an AI.

Let's talk about that for a second, open-mindedly. If you were to put some prompts in, and you put in the sequence of COVID, which isn't really a difficult thing to sequence, and said, "come up with something more deadly," or "come up with something that instead of affecting old people affects young people, with a longer incubation period so it's harder to recognize or stop," could AI do that today?

It's a little complicated, because one of the things that's totally bizarre is that large language models turn out to be actually useful for understanding the structure of proteins, something that has nothing to do with human language. What happens with proteins is this: proteins are long strings of amino acids, which are these collections of atoms, and every protein is specified by some piece of our DNA, our genome. A protein is a string of thousands to millions of amino acids (actually, no, they don't usually get as far as a million amino acids), but it's a long string of these things, and then they fold up in certain complicated ways. It's been a long-standing problem to figure out, given the sequence, how the protein folds up. It matters a lot how the protein folds up, because the way proteins have significance for biology has to do with their shape: is there a particular hole in the protein where some other molecule can fit, does the protein knit itself together to make a muscle, all these kinds of things. So it matters what the shape is, and the question is, given the sequence, can you predict the shape? A lot of progress was made on that using, initially, not quite large language models, but now large language models. Really what's happening there is that you take the sequence whose shape you want to figure out, and you say, well, this piece matches a protein we've already studied, this piece matches another protein, this piece matches yet another protein; now let's figure out how to knit those pieces together. Knitting those pieces together is a little bit like the puzzle-piece thing I mentioned for human language, and it seems to be something LLMs are quite good at. And then you can do the more extreme thing that people have started to do, which is to use generative AI to say, given a bunch of words, make a protein that does such and such.
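As a toy illustration of the "this piece matches a protein we've already studied" step, here is a Python sketch that does literal fragment (k-mer) matching against a tiny invented database. Real structure predictors and protein language models work on learned representations rather than exact string matching, and the sequences below are made up, not real proteins; this only shows the piece-matching intuition.

```python
# Toy piece-matching: find fixed-length fragments (k-mers) that a query
# amino-acid sequence shares with a tiny, invented database of "known" ones.
KNOWN = {
    "known_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "known_B": "GSHMLEDPVDAFYTTGAVRQIFGDYKTTICGKG",
}

def matching_fragments(query, k=6):
    """Yield (position_in_query, known_name, position_in_known) for shared k-mers."""
    index = {}
    for name, seq in KNOWN.items():
        for j in range(len(seq) - k + 1):
            index.setdefault(seq[j:j + k], []).append((name, j))
    for i in range(len(query) - k + 1):
        for name, j in index.get(query[i:i + k], []):
            yield i, name, j

# A query stitched together from pieces "borrowed" from both known sequences.
query = "AKQRQISFVKSHFSRQ" + "VDAFYTTGAVRQ"
for i, name, j in matching_fragments(query):
    print(f"query[{i}:{i+6}] matches {name}[{j}:{j+6}]")
```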
So yes, for the thing you're describing, there are lots of issues, and there are lots of computational irreducibility questions, actually, but in broad outline, yes, it will be possible for sure to say, take this and, with just a linguistic-type prompt, find something that works a little bit differently, pulling in perhaps something from some other genome database or whatever else. So yeah, I'm sure that will be a thing, perhaps fortunately in some cases and unfortunately in others.

And that depends on the prompt engineer and what their goal is.

Right, but that's so typical of progress of all kinds. You can use it to cure a terrible disease, you can use it to make a terrible disease; you can make a nuclear reactor or make a nuclear bomb.

Right, but in those examples people didn't have as much access to the tool, and with this tool it feels like everybody is going to have access. Put a couple billion people on this thing, and that is qualitatively different from the number of people who know how to operate and do nuclear science.

Well, right, and the materials you need to make nuclear stuff are not in such easy supply; there's a long supply chain to produce them. So this is certainly much more accessible.

I think we just talked ourselves into signing the six-month ban. So many scary possibilities here; it's almost like by talking about them, I don't want to accept them into the world.
Well, I think the thing to understand is, when one thinks about what the AIs are going to do, the first question is what we want the AIs to do. If we were going to define a system of ethics, let's say, for the AIs, what would we want that to be? One thing people would say is, let's have the AI just imitate what humans do. Most people would say that's a bad idea; humans do all kinds of things that we don't think humans should be doing.

Yeah, beat each other up.

Yeah, which in most cases people think is a bad thing, but sometimes people don't think it's a bad thing, and it's complicated. I think what it ends up being is: let's make the AIs be the way that humans aspire to be. But that's a much more fuzzy, complicated thing, because it's like, whose aspirations? You pick some sacred book, you pick some...

Be careful there.

Right. But then in the end it's like, well, maybe some group of people would agree: this is how we want the AIs to generally behave. We could invent a sort of AI constitution that defines how we generally want the AIs to behave, and that's probably a sensible thing to do. It also happens to be comparatively hard, I think, to come up with what you want to say there. One of the things we get to define, perhaps better in computational language than in prompts, is a definition of what we want the AIs to do, and then we have to figure out how we do that. Are we going to have one worldwide "this is what we want the AIs to do"? Probably not a very good idea. If you have a monocultured AI world, it becomes rather brittle. If there's something wrong with the code, so to speak, something wrong with the legal code, effectively, for the AIs, oops, we just made the whole world follow this legal code, and it's going to blow everything up.

I'm curious how you feel about the fact that this started, at least with OpenAI, as an open-source nonprofit that somehow flipped into a for-profit, and then flipped from "everybody should have access to this code" to Sam and the team suddenly saying, you know what, this code is a little too dangerous for everybody to see, so now nobody can see it except us. Do you think it should be open sourced and people should see this stuff and it should be more out in the open, or do you think it's fine for it to be built where only a small number of people have access to the source code?

I don't think it matters, because the whole idea of LLMs is now out there; the genie is out of the bottle. One can make LLMs. OpenAI did a great engineering job, and they seem to have achieved a bunch of things other people haven't yet achieved. From a business point of view that has lots of significance; there's lots of timing, lots of what will ramp up how quickly, and so on. But in the big picture I don't think that's an important thing, honestly.
Innovation is hard, and you have to have a certain... I mean, I know in our own case, I have a fairly small company that I've been running for 36 years now, 800 people or something, fairly small by many standards, and the fact that we are able to innovate and go on innovating is a consequence of the fact that we have a viable business model for the things that we do. If we didn't have that, if we just said, oh, we're going to give everything away, then okay, how do we feed the 800 people, so to speak?

Yeah, exactly.

We have to have some business model, and I personally prefer business models that have a directness to them, where the people getting value are the people paying for the thing, rather than more indirect models with advertising. It gives a better alignment between what one's building and what customers actually want.

Seeing what's happened here, are you just going to build your own language model and compete?

It's something where, obviously, a company like ours is capable of doing stuff like that easily.

Yeah, so that would be a no-brainer for you.

It's one of these things where I don't think there will be many of these things. The point I was making is that if you say, let's pull everybody down so that nobody has either the war chest or the motivation to be a leader, that's not really very good for the world. If you want innovation to happen, you have to have a situation where, for example, some organization can decide for itself what it's going to do, up to a point, because if the whole world is going to vote on what we should do next, it's very implausible that creative innovation is going to happen in that situation. So again, in the big picture I don't think these particular details matter much, but having the ability, the runway, and the motivation to independently innovate is important. And I think that's borne out by the fact that a year ago we didn't have ChatGPT, and it was particular people, in a situation where they could do and were motivated to do the kind of innovation that was needed, who created ChatGPT.

Yeah, and it's a small number of people, a couple hundred, I guess, who got them here. It's not like it took Google to do it, and obviously Google has their own, but it didn't take a Facebook- or Google-sized effort; it took a relatively modest-sized group of people to achieve this, and they should get all the credit in the world. As we wrap here, I have two questions at the end. How close are we, if that's even a possible question to answer, to AGI? If you had to put a year on it, or set a betting line, over/under. And then, how will you know, in your mind, when we have something that is an AGI?

You know, for the last 50 years I've been paying attention to what
happens with computers and all this kind of thing, and to people saying, "when we can do X, then we'll know we have true artificial intelligence." I've personally built a few of those X's that people named, and then when you actually have it, people say, oh, it's just a piece of engineering, that's not true intelligence, and so on. The point at which you'll know you have true human intelligence is when you basically have a copy of a human, because you can always say, oh, well, it doesn't have this attribute: because it isn't mortal it can't think this way, or because it doesn't have five fingers it can't do this. The only way you'll have something which is just like a human is to have something really just like a human. So I think it's an incremental thing. ChatGPT was a big shock; people didn't expect this level of humanity, so to speak, in an automated system. In terms of "what else do you want," there was the Turing test that Alan Turing made up in 1950, and I think pretty firmly, as of 2023, we can do that; that one is done. Nobody knew when that was going to happen, and it was one of the last of the standard "this is a test for whether you have true artificial intelligence" tests. But we can ask questions like: for this set of people, when will we be able to automate the main thing that they do? It's worth understanding what we mean when we talk about automating things. Back in the day, people would handwrite this or that, and then printing came along and there was a standardized font for A and B and C. Some people would say it's much more efficient, much more automated, and some people would say, well, you've lost the human touch of the calligraphic work. The same will happen here. When I read a ChatGPT-written essay, it's very perfect, very anodyne in some ways; it's like the rug that was made by a machine rather than by a person, without those little errors in it. And people will continue to say, oh, well, if the rug doesn't have the little errors, then it isn't really, you know, an AGR, an automated general rug, or something.

Yeah. The other thing I was thinking, when you talked about how this surprised everybody, is that it reminds me of when Boston Dynamics made that first robot that could run and do flips, and you're like, I wasn't expecting that. When do these two things combine, the Boston Dynamics parkour robot and ChatGPT, and what is that going to look like?

I think some of the things about being able to create geometrical kinds of things in a large-language-model-ish way are very much coming; some of them are already here. That's important if you're making 3D objects, doing animation, those kinds of things; that's very close, I would say. The question of using machine learning to figure out how you grasp, how you pick up a cell phone or something, has proved comparatively difficult. My guess is that it will be cracked, but it has proved comparatively difficult. As for the whole question of how robotics advances, one of the things that's surprising about robotics is that it's fairly non-general-purpose. With computers, the big advance that really made computers possible was the idea of universal computation: you could have a fixed piece of hardware, put different programs into it, and it would do different computations. That was an idea originally from the 1920s and 1930s, it became real in the 1950s and so on, and that's what made software possible; that's what basically made computers useful.

A word processor, a video game, or an Excel spreadsheet could all be done on the same computer.

Right, exactly.
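A minimal sketch of that universality idea: one fixed interpreter (the "hardware") whose behavior depends only on the program fed to it. The tiny instruction set below is invented for illustration and is far from a full universal machine, but the same code runs different programs and so performs different computations.

```python
# One fixed "machine", many programs: a tiny stack interpreter.
def run(program, x):
    """Execute a list of (opcode, args...) tuples on input x; return the result."""
    stack = [x]
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif op == "DUP":
            stack.append(stack[-1])
    return stack[-1]

double_plus_one = [("PUSH", 2), ("MUL",), ("PUSH", 1), ("ADD",)]
square = [("DUP",), ("MUL",)]
print(run(double_plus_one, 10))  # 21
print(run(square, 10))           # 100
```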
For robots, that hasn't really been the case. You can't yet have a general-purpose robotics system. I've even thought about how to do that, and I think it's conceivable, but it's tricky, because the physical world is nasty to deal with relative to the informational world, so to speak. If that happened, and I think it eventually will. And by the way, at a molecular scale biology has solved that problem. With these proteins we were talking about earlier, you just have a sequence of amino acids and it curls itself up, and sometimes it can be muscle, sometimes it can be a brain cell, sometimes it can be the critical pieces of those different kinds of biological devices, so to speak. So biology at a molecular scale has sort of solved the universal robotics problem, but at a large scale we haven't solved it yet. Probably we will, and when that happens, in terms of "oh my gosh, what jobs are going to be automated," another set of chunks of the pie, the things being done now, will sort of zero out, and there'll be a new collection of things that become possible around manipulating the physical world, which hasn't happened yet. The main thing that will happen is that manipulating the physical world will become a problem of software, so to speak, rather than a problem of how you put different pieces on the hand of the robot.

That's going to be wild, when you can say to this sort of general robot, "take Jason's bags to his room," and it goes: okay, bags, I know what those are; there's Jason, I know who he is, he's a guest; now I need to know what room he's in, let me go query what room he's in, and I'm going to carry them upstairs. The ChatGPT interface, or the language model, would actually be able to figure out what you meant, and then you just need a physical machine that could actually pick up bags and do it, right?

The thing to understand about that, in a little more generality, is that the chat interface can take your whole speech about what you want the robot to do, and then the question is how you're actually sure the robot is going to do what you thought it was going to do, because all you had was this language thing. This is where one of the things I've been excited about very recently comes in: this is where our computational language initiative is really important. Once you have that, you've got the thing you say in natural language, and if you can generate from that a piece of computational language, that's something intended for humans to read. A few million people know how to read it now, and probably a lot more will learn, and they'll say, oh yeah, that's what I wanted; there's two lines of computational language, I can read them, and yep, that's what I wanted, go do it. Without that it can be a bit challenging; you can watch the robot and say, no, no, don't do that, don't pick up the...

Which doesn't mean his spouse or the kids; that's the baggage we're talking about.

Right. And obviously you can prompt it and so on, but there are plenty of situations, particularly when you're building a bigger system and you want to do a whole collection of things, where having this intermediate layer of precise computational notation is really important.
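A minimal sketch of that intermediate-layer workflow, with everything hypothetical: the CarryTask structure, the llm_to_task stub, and the room number are invented, and a real system would put an actual model call and a robot execution layer behind them. The point is the shape of it: free-form request in, a small precise human-readable command out, a person confirms, and only then does anything execute.

```python
# Natural language -> precise, human-checkable command -> confirmation -> execute.
from dataclasses import dataclass

@dataclass
class CarryTask:
    items: str        # what to pick up
    owner: str        # whose items they are
    destination: str  # where to take them

def llm_to_task(utterance: str) -> CarryTask:
    # Stand-in for "the LLM translates the request into computational language".
    # A real system would call a model here; this stub returns a fixed parse.
    return CarryTask(items="bags", owner="Jason", destination="room 304")

task = llm_to_task("Take Jason's bags up to his room")
print(f"About to: carry {task.owner}'s {task.items} to {task.destination}. OK? [y/n]")
if input().strip().lower() == "y":
    print("Handing the verified task to the robot's execution layer (not implemented here).")
```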
But yeah, one of the things that's also funky about something like robotics is that the built environment we have was built for humans: we have doors we can open with hands at a certain height, et cetera, et cetera. So there's a certain pressure to have humanoid-like robots, just because we built an environment that is suitable for humanoid robots. Now, there are plenty of environments in the world, thrown up by the natural world, that are quite unsuitable for humans, where humans don't tend to hang out, and there something quite different will be appropriate. That's the analog of the LLM situation: you've got an LLM, and it's learning actual human language; it could learn all kinds of other things, but it learned human language to fit into the human linguistic world, as opposed to the human built world, so to speak.

It is crazy how fast this is moving, and it's just great to have people like you working on it. Everybody can check out Wolfram Alpha, check out the plugin, start playing with it, and share whatever you're building on Twitter. I really appreciate you taking the time. Are you Dr. Wolfram? Should I be calling you that? I feel like I should.

You know, I've noticed, here's a basic rule: if one's doing business, somebody calling me Professor is really deadly, Doctor is sort of okay, but for business it's Mister.

I really appreciate you taking the time. I know you're very busy, especially at this moment when everybody's really excited about the work you've done and continue to do. Thank you so much, and we'll see you all next time. Bye-bye.
Info
Channel: This Week in Startups
Views: 156,922
Keywords: startups, twist, jason calacanis, public markets, private markets, venture capital, investing, learn, tech, technology, tech reviews, product reviews, gadgets, big tech, monopoly, fraud, finance, angel investing
Id: F5tXWmCJ_wo
Length: 87min 6sec (5226 seconds)
Published: Fri Mar 31 2023