DarkHorse Podcast with Daniel Schmachtenberger & Bret Weinstein

Reddit Comments

Submission statement: we all know who Bret Weinstein is; this was my first exposure to Daniel Schmachtenberger. In this conversation they intelligently tackle many of the roots of what they see as our civilization's possibly terminal issues. I think anybody who can follow along with them will find this enlightening and stimulating. I also found it to be completely non-political.

6 points · u/evoltap · Feb 11 2021

Listening now, Daniel is just as amazing. This is a little scary though.

5 points · u/bethhanke1 · Feb 12 2021

Oh hell yeah. I'll have to watch this. Daniel Schmachtenberger is the man.

I cannot recommend this video on sense-making enough. It fits the theme of what this subreddit seems to be aiming for exactly.

5 points · u/Zendayas_Stillsuit · Feb 11 2021

Bret, at 2h 37 min of the video:

To me, as a theoretician ... there are multiple hypotheses. One is, the virus escaped from a lab unmodified; another is that it was enhanced with gain-of-function research and then it escaped; another is that it was weaponized and then released. Each of them is a hypothesis, each of them makes predictions, and they are all testable.

Isn't it bullshit? Isn't the whole problem with convincingly establishing the origin of the virus precisely that these three, or more, hypotheses do not make testable falsifiable predictions?

I remember Bret drawing his diagram about the possible origins of the virus and assigning percentage points to each of the blocks. The whole thing looked like an exercise in gut feelings. Nowhere did Bret the theoretician (as opposed to, say, Einstein the theoretician who predicted gravitational redshift of light or gravitational waves) make any testable predictions about the origins of the virus.

2 points · u/azangru · Feb 13 2021
Captions
hey folks welcome to the dark horse podcast i am very pleased today to be sitting with my good friend daniel schmachtenberger who is the founder of the consilience project there's a lot more that i could say about you daniel but i think we will leave it at that for now people can look up your bio if they want to do so i should probably start by telling people that when i say that you are my good friend i really mean that you and i are good friends although to be honest we haven't spent all that much time together this is one of those cases where you meet somebody somebody who has started from a very different place and you discover that you have all kinds of thought processes that have reached similar conclusions and uh every discussion is fascinating the more i learn about what you think the more i realize i've got a lot to learn from you and that there is uh essentially infinite ground to be covered so welcome daniel thank you bret it's really good to be here i feel the same way that um we were introduced by our friend jordan hall and uh we've never had a conversation where i didn't learn something and where i didn't appreciate the good faith way that you showed up when we had a disagreement to talk through which is always fun it is always fun and and i must say i did a little bit of poking around seeing recent interviews you'd done but i deliberately did not overly study for this my sense is that the audience will get a great deal out of uh hearing you and me go back and forth and finding out what we agree on where we disagree and maybe most tellingly there's a phenomenon in which anybody who has learned to think more or less independently tends to have their own language for things their own set of examples that they use to explain things that recur over and over again and so in order to have high quality conversations uh there is this this period in which you're effectively teaching the other person how you phrase things and seeing those things line up is great and in the rare case where they don't line up it's even better because you know there's something to be learned one way or the other so i'm hoping that will emerge here i'm looking forward to it all right good so let's start here when i say that you and i come from different starting points i i mean to imply something in particular you as i understand it were home schooled and as you have described it that was actually closer to what most people would probably refer to as unschooling somehow this did not mess up your motivational structures your parents were alert about what they were doing and so you ended up pursuing what you wanted to pursue and it did not you did not get in your own way and lo and behold you end up a wickedly smart independent first principles thinker now that's not my story at all my story is i went to school and it didn't work i had something that most people would call a learning disability and it got in the way of school functioning for me and more often than not i got dumb tracked and basically that complete failure of school to reach me accidentally worked like some kind of unschooling i would say and so there are maybe many paths i don't know but i'd be curious for people who have traveled some road to the land of high quality independent thought what can they expect the experience to be like when they arrive there the experience of high quality independent thought yes if you imagine that well it would be lovely to think independently and to do so well and the world is going to be a paradise if
you start doing that because of course that's a very desirable thing to do and people will appreciate it you're going to be surprised that's not necessarily what happens when you arrive there and certain experiences show up all over the place and without telling you what my experiences might have been i'm curious as to what you might have encountered and whether or not those things will be similar ah yeah it's an interesting question uh i think you will experience most people will experience a higher degree of uncertainty than people that are part of any camp that has figured most things out and they can cognitively offload whoever the authorities or the general group consensus is um and certainty is certainly comfortable in a particular way and if you're really thinking well about what is my basis for what i believe what is the epistemic grounding what's the basis of my confidence margin and you really think about your confidence basis clearly as you try to find more information to deepen the topic the known unknowns expand faster like at least at a second order to what the uh known knowns do and so you keep being aware of more stuff that you don't know that you know is relevant and pertains to the situation so there's a complexification of thought there's an increase in uncertainty hopefully there's an emotional development and kind of a psychological development where there's an increased comfort with uncertainty so that you can actually be honest and not uh defect on your own sense making into premature certainty that creates confirmation bias and there's also a and i think that's why one of the reasons used term independent um there is a certain aloneness of not having a whole camp of people that think very similarly um i don't find that to be a painful thing right but it's a thing no it's actually in its own way uh it's freeing because the fact is if you follow logic to natural conclusions you'll end up saying a lot of things that uh are alarming or discordant with the conventional wisdom and the world neatly divides into people who will be so enraged or thrown by what you're saying that they uh disappear or maybe they become antagonists at a distance and the people who have a similar experience and therefore aren't thrown by the fact that you're saying things that are out of phase and so editing the world down to those who are comfortable with what they don't know who are interested in following things where they lead irrespective of who that uh elevates and and who it uh um hobbles those people are interesting people to hang out with and so yes the uh the alienation may be a blessing in disguise in some ways there's two other thoughts that came up as you're just talking one is i i wouldn't call myself an independent thinker i'm being particular about the semantic of the word independent i wouldn't call anyone an independent thinker because i think in words invented by other people i think in concepts invented and discovered by other people i don't necessarily have a specific school of orthodoxy from which i take an entire world view but almost every idea that i believe in i did not discover and so uh i think that's a very important concept because i think the the ideas we're going to discuss today regarding democracy and open society have to do with the relationship between an individual and a collective and i think the i there the idea of an individual is fundamentally a misnomer without everybody else i wouldn't be who i am and i wouldn't think the way i think i wouldn't think in 
the language i do i wouldn't have access to you know the knowledge that came from the hubble telescope and the large hadron collider and so many things like that um so i can say that there is a certain like ultimate authority that of what i choose to feel that i believe in and trust that has an internal locus of control but the information coming from without in my own internal processing of it are part of the same system so this is a perfect example of what i was suggesting up front where two people who do whatever we're going to call this will will have their own separate glossaries so if i can translate what you've just said i daniel schmucktenberger am not an independent thinker because such a thing is inconceivable in human form right and i totally agree with that the fact is not only are you interwoven with all sorts of humans who are responsible for conveying to you in one way or another conclusions that you couldn't possibly check you know these are thoughts that would be familiar to descartes for example um but you are also building from the entirety of cumulative human culture right all of the tools with which you can think are uh almost all of them are too ancient to even know where their rudiments originated so anyway i don't disagree with any of that so to me i would say there is such a thing as an independent thinker and in your schema it has to do with whether or not they are thinking a la carte that is to say using that set of tools that is most effective irrespective of the fact that those tools don't all come from one tradition um and you would say there is no independent thought because a la carte is the most you can do or something like that i might say that i like a term like interdependent better because it doesn't mean that there isn't an individual involved but it means that the individual without everyone else is also not a thing and so the recognition that sovereignty and individuality is a thing and it is conditioned by affected by and affecting other groups of people are both necessary in the hegelian synthesis to understand what is the nature of a human are they fundamentally individual and then groups emerge are they fundamentally tribal and they're formed by the tribe and it's very much both in a kind of recursive process between them 100 in fact i once wrote something i called the declaration of interdependence it was a sort of proto-game b attempt to define what the rules of a functional society would be like and i also frequently say that the individual is more an illusion than it is real and what i describe is that an individual is a level of understanding that evolution has focused us on because historically it has been sort of the limit of that where we might have some useful control right evolution might ultimately care about whether or not you are successful your genes are still around a hundred generations from now but your focus on a hundred generations from now is unlikely to have any useful impact whatsoever whereas your focus on your life and your children is likely to be useful so we have we have been delivered a kind of temporal myopia in order to keep us focused on that which could be productive for an ancestor but of course we are now living in an era in which we can have dramatic impacts on the distant future in fact you and i are both quite focused on the strong possibility that our foolishness in the present will result in the end of our lineage and that that is something that evolution were it capable of um upgrading would certainly have 
us concerned about because the hazard is very real and our tendency not to think about it is a big part of the danger i think temporal myopia and the collective action collective coordination problem is a good way to describe all of the problems we face or one of the generator functions of all the problems we face that you have a bunch of game theoretic situations where each agent within their own agency pursuing the choice that makes most rational sense to them pursues local optimums where the collective body of that drives local global minimums but if anyone tries to orient towards the longer term global maximum they just lose in the short term that's an arms race that's a tragedy of the commons and so how do we reorient the game theory outside of those multipolar traps i would say is one of our underlying questions that when the biggest harm we could cause was mediated by stone tools or bronze tools or iron tools or even industrial tools we didn't have to cause it immediately because the extent of harm was limited in scope when it is mediated by fully globalized exponential tech running up against planetary boundaries with many different kinds of catastrophe weapons held by many different agents we actually have to solve the problem yeah i of course agree completely with this as well that effectively maybe it really is every single important problem is a collective action problem of one kind or another we've got races to the bottom we've got tragedies of the commons we've got these things uh intermingled but once you start to see that on the one hand it is you could take from it a kind of reason for despair because these are not easy problems to solve on the other hand the discovery that effectively it's not a thousand distinct problems it's a thousand variations on one theme and that that theme is solvable in fact we have for example uh eleanor ostrom's work which points to the fact that evolution itself has solved these problems many times that that is hopeful so i don't know where you are in terms of how hopeful you find yourself about humanity's future but i'm quite certain that you and i will uh align on the idea that yes if we could focus on the problem as it was it's more tractable than many people think it is yeah i mean you mentioned hopefulness you mentioned a bunch of good things there that rather than a bunch of separate problems you have a few problems with lots of expressions this was a big chunk of the kind of work i engaged in and with a number of people uh you were part of uh looking at when we inventory across all of the catastrophic and existential risks ones involving ai and problems with biotech and other kind of exponential tech and environmental mediated issues and things that escalate to large-scale war is there anything in terms of the patterns of human behavior that they have in common and so this kind of race to the bottom collective coordination thing is one way of looking at that but there's a there's a few ways of looking at what we'd call the generator functions of catastrophic risk and it really is simplifying if you can say are there categorical solutions to those underlying generator functions they're hard right they're they're hard um now when you talk about hopefulness uh i notice that the way that i relate to the optimism pessimism thing is there's an optimism which is almost like a choice to say i'm going to have optimism that's that there is a solution space worth investigating even if i don't know what it is and if i'm wrong that's it's the right 
side to be wrong on as opposed to there was a solution i didn't look for it um and then i'm going to have pessimism about my own solution so i'm going to try to red team my solution so that i can find out how they're going to fail before finding out how they fail the hard way in the world but then not be devastated by the fact and that might that solution wasn't it and keep trying and i think that's kind of how the um committedly inventive innovative principle works so uh yeah again we've we could uh do almost a one-to-one mapping of your schema onto mine i do this in terms of prototyping rather than red teaming and discovering it's wrong it amounts to the same thing when you say actually it's hard you and i would have to define two different kinds of hard probably there is hard to make function to stabilize and there's hard to figure out what the solution is and those are distinct we might find elements of both of them here but let me just give a maybe it's um the canonical example of a solution to a game theory problem that uh everyone will recognize i divide you choose right it's the perfect solution to an obvious problem of choice and selfishness right if there is cake and i know that you're going to get your choice and that you are incentivized to pick the larger piece then i am incentivized to get the cut as even as possible and the point is it neutralizes the concern so we are looking for solutions of that nature now i don't think they are all that hard to understand in broad terms in general it may there may be a lot of work on the discovery end but when you see them they end up being surprisingly simple my biggest fear is that the there is it is very rare for people to understand how much danger we're in and why and therefore what solution we are inherently looking for and um how urgently we should be seeking it in other words as long as things function pretty well in the present and people get fed and housed it is very easy for them to ignore the evidence that we are in grave danger even if we are uh fat and happy and uh enjoying a you know a period of good luck yeah when you like one of the interesting things in the study of previous civilizations is that none of the great civilizations of the past still exist they all failed even if they had been around for hundreds or thousands of years and so to to understand that civilization's failing is the only thing that's ever happened and then recognize that since world war ii we have for the first time a fully globalized civilization where none of the countries can make their own stuff that the supply chains that are necessary to make the industrial and productive base are globalized and that we're running up against the failure points of a globalized civilization which it that's an important thing and what's so interesting is that all the previous civilizations that failed had so much hubris before their fall because there had been so many generations where they had succeeded that they had forgotten that failing was a thing it was just some ancient myth it didn't feel real so we don't have an intuition for things not working or for catastrophe because we haven't experienced it and our parents didn't experience it and it's only myth and as a result we just make bad choices and i mean this is where studying history and studying civilizational collapse is really helpful and you can see that even as the system starts to fail in partial ways um you know to me it seems very clear that when we look at the george floyd protest turning into 
riots over the summer that happened they were following the covid shutdowns and specifically all the unemployment from it when and whenever the unemployment goes up whenever the homelessness goes up when people can't when the society makes it to where people who are trying can't meet their basic needs then it gets a lot easier to recognize there's something wrong with the system as a whole and go against it but we also never had a point in human history where it was like no matter how outraged i am all i have to do is start scrolling for a second and i've forgotten everything not to mention the fact that i'm probably on opioids and benzos and so that that makes it to where the frog can keep boiling in hot water longer yeah so i i often say that people are too comforted by the idea that people are always predicting the end of the world and it hasn't happened yet because in fact it happens all the time right the ends of these civilizations but it's even worse than the analysis that you and i appear to agree on here because many of those civilizations that have ended in fact most of them the civilization the organizational structure ended but the people didn't right so the romans continued on as other things the maya are still with us right they are not with us as the maya and the point is actually in this case the jeopardy that we are creating is to our very capacity to continue as a species not just to our ability to continue with the structures that we have built so not only are we all in it together this time but we're all in it in a way that we never have been before or at least very rarely have been before and that really ought to have people's attention but you're right the capacity to distract ourselves from it has never been better either i think something that i find particularly important when thinking about catastrophic risk now relative to previous examples of civilization collapse is that until world war ii we couldn't destroy everything like we just didn't have enough technological power for catastrophe weapons um and so you could fight the biggest bloodiest war violate all of the rules of war and it would still be a local phenomena and with the invention of the bomb we had now the new technological power to actually destroy habitability of the planet kind of writ large or at least enough of it that it was a major catastrophic risk and on the time scales that you think about as an evolutionary biologist of how long humans have been here and the you know proto-humans since world war ii is no time at all to have really adapted to understanding what the significance of that is and the only reason we made it through was because we created an entire global order to make sure we never used that new piece of tech and in all of history we always use the arms that we developed and so we made this whole bretton woods world and mutually assured destruction that said okay well let's have so much economic growth so rapidly that nobody has to fight each other and they can all have more because the pie keeps getting bigger but that starts running up against planetary boundaries and interconnecting the world so much it gets so fragile that you know a virus and wuhan shuts the whole world down because of supply chain you know interconnected supply chain issues so that thing can't run forever and the mutually assured destruction was one catastrophe weapon and two superpowers so mutually assured destruction works the game theory of it works well as soon as you start to add to that the bio weapons 
and the chemical weapons the the fact that bioweapons can be made very very cheaply now with crispr gene drives and things like that grown weapons we have dozens of catastrophe weapons held by many dozens of actors including non-state actors and that just keeps getting easier mutually assured destruction can't be put on that situation it doesn't actually have a stable equilibrium so now we have to say how do we deal with many many actors having many types of catastrophe weapons that can't have a forced game theoretic solution with a history where we always used our power destructively at a certain point how do we deal with that it's novel right like we have no precedent for that yeah it's absolutely novel i mean when i became cognizant you know let's say 1975 is where i first started having you know coherent thoughts about the world that was only 25 years after the end of world war ii and it seemed like world war ii was a very long time ago but of course we've covered that distance twice since then so the ability for you know that the tools with which for us to self-destruct as a result of aggression are brand new and you're absolutely right the thing that caused us from using them or prevented us from using them that force disappeared it no longer exists there's no stable equilibrium here so what's protecting us is not well understood at best and then add to that all of the various industrial technologies that we are now using at a scale where they imperil us and i don't know about you but i keep having the experience of a catastrophe happens and that's the point that i get alerted to some process that is very dangerous to humanity that i didn't know about until the catastrophe right this has happened with the financial collapse of 2008 it happened with the triple meltdown at fukushima it happened at aliso canyon i believe it has now happened with covid-19 and gain-of-function research and the point is it paints a very clear picture we do things because we can't see why we shouldn't or this is also a game theory problem those who can see why we shouldn't don't and certain number of people don't see why we shouldn't and they do and we all suffer the consequence of their myopia and so on multiple fronts we are playing you know we are rolling the dice year after year and the people who can think independently looking at that picture looking at the series of accidents looking at the hazard of something like a large-scale nuclear exchange without an equilibrium to prevent it those people wake up but the problem is the mechanism to actually uh begin to steer the ship in a reasonable direction in light of these things doesn't seem to exist for reasons i've i've heard you explore many places so what does it mean as far as you can tell there's one thing that you said that i think is worth us addressing first is that some of the things that caused the catastrophe either were unknown or those who knew them were game theoretically less advantaged than those who were oriented on the opportunity rather than the risk because those who orient towards opportunity usually end up amassing more resource that equals more ability to keep moving stuff through there is an article and a conversation in the lesswrong community about um regarding catastrophic risk mistake theory versus conflict theory what percentage of the of the issues come from like known stuff that we knew would cause a problem or at least could cause a problem and game theoretically we went ahead with it anyways versus stuff where we
just couldn't have anticipated or really didn't anticipate and i think it's fair to say these are both issues right there's there's true mistake theory stuff like we just couldn't calculate and then there's true conflict theory stuff we knew that escalating this military capacity would drive an arms race where the other people would that if we calculated it there's an exponentiation on all arms races that takes us to a very bad long-term global situation one of the insights that i think is really interesting is that the the fact that the mistake theory is a thing and everyone acknowledges it ends up being a cover a source of plausible deniability for what's really conflictary so we we know there is an issue we pretend not to know we do a job of due diligence and risk analysis and then afterwards say it was a failure of imagination and we couldn't have possibly known i have actually been asked by companies and organizations to do risk analyses for them where they did not want me to actually show them the real risk they wanted me to check a box so they could say they did risk analysis so they could pursue the opportunity and when i started to show them the real risk they're like we don't want to know about that and so when it comes to the could we have possibly factored that so i mean a classic example um i like to give because it's so obvious in retrospect is could we have known in the development of the internal combustion engine that making street cars which seemed like a very useful upgrade of having the horse internalized to the carriage would end up causing climate change and the petrodollar and wars over oil and oil spills and mass and ocean oil spills whatever it seems like that would have been hard to know a hundred years in the future that it would do all that stuff um and this is a classic example of also where we solve a problem and end up causing a worse problem down the road in the nature of how we do it which you can't keep doing forever the story is oh we cause a worse problem then that's the new market opportunity to solve that problem in the ongoing story of human innovation but when you start running up against it the problems are actually hitting catastrophic points you don't get to keep doing that ongoingly you don't get to regulate after the fact the way that we always have once a problem hits the world things that are catastrophic um could we have known well yeah in london before that one there were already electric motors and two people were already getting sick of burning coal from uh lung disease from the burning of the hydrocarbons if we had tried to do good risk analysis could we have yeah but there's so much more incentive on who pursues the opportunity first and then there's this multipolar trap of well let's say we don't the other guy is going to so it's going to get there anyway so we might as well be the ones to do it first and that thing ends up getting us all the time which is why collective action again comes in well it's really interesting how much of this is again parallel heather and i use the example of somebody you know driving the first internal combustion engine and somebody chasing them down the street saying don't do that you'll screw up the atmosphere right how crazy is that person running down the street saying that because you know you would have to scale it up to such a degree before that's even a concern that that person seems like a nervous nelly but of course they would also have been prophetic but the other thing i want to ask you about is 
you say that we have these two categories where um sometimes we could have known and we went we we knew in fact and we went ahead anyway and then in other cases um [Music] we didn't know and something snuck up on us and i want to be i want to clarify what you just said because my understanding here is that if you dig deep enough somebody always knew right in general there's some mechanism whereby the person who correctly predicted what was going to happen has been silenced often they lose their jobs they disappear from polite society at the point they turn out to be right their reputations are never resurrected as far as i can tell so am i wrong that even in the cases where people who made the decisions may plausibly have not known that the reason they didn't know is because there's some sort of active process that when there's a process a profit to be made shuts down anything that could uh be the basis of a logical argument that we shouldn't do it i don't know that i'll say always i'll i'll certainly say most of the time um and let's say there was a case where we like really nobody knew it's usual my guess is we probably could have had we tried harder and then let's say there's going to be some unpredictable stuff like we we know in complex situations it's going to be unpredictable stuff so you do the best job you can to forecast the risks and harms ahead of time but then you also have to be ongoingly monitoring well what would the early indicators that there's a problem be and how do we take it when we find that there's something we hadn't anticipated how do we factor that into a change of design well once once the profit stream is going and the change of design up the profit stream how does the recognition of that there's a problem actually get implemented when those who have the choice implementation power are not the people who are paying attention to those indices so yes i would say and it's easy to just say hey yeah there was some there was some whack-a-doodle who was saying that there was going to be some rest but there's always some wacky doodles thing there's going to be risk about every new tech and if we really listen to all them we'd have no progress that's the story right it's a story could we now let's there's but there's a collective coordination issue because it's it is fair to say so like let's take um ai weapons right now specifically uh automated drone weapons there is an arms race happening on automated drone weapons and i think every general and military strategist knows that all of our chants of dying from ai weapons goes up their kids everybody's as we perc as we progress in that technology it's a bad technology it shouldn't exist we should create an international moratorium that says nobody builds ai drone weapons that we don't want automated weapons with high intelligence out there but we can't make that moratorium because if one country doesn't agree if one non-country some non-state actor doesn't agree that has the technology or let's say everybody agrees how do we know they're not lying and developing it as an underground black project so either we don't even make the agreement or we make the agreement knowing we're gonna lie defect in a black project spy on their black project and try to lie to their spies who are spying on us and so it's like how do you get around that thing where if anyone does the bad thing in the near term it confers so much game theoretic advantage that anyone who doesn't do it loses in the long term why it was that the peaceful cultures 
all got slaughtered by the warring cultures and so what ends up making it through is those who end up being effective enough at war that's an underlying thing we have to shift because that has as its eventual attractor self-destruction in a finite space yeah i totally agree and the i think fascinating thing when you interact with the incarnate aspect of the process you just described is that the people who are telling the lies that explain why we're doing something that we know is reckless often don't know that that's what they're doing right they actually believe their own press and instead of saying well yes this is terrible but we don't really have a choice or somehow indicating that they know that what you encounter is a true believer who thinks that this is safe and that's very frightening because it means that the mechanism at the point something begins to go awry to do anything about it doesn't exist right or at least it's not connected to the part that you can talk to and um so again not not too surprised to find um overlap in our map i would say the process that you describe of by the time you discover what the hazard is that there's a profit that has uh accelerated the process i call this the senescence of civilization because it's actually exactly a mirror for the process that causes a body to senesce the evolution of senescence involves processes that have two aspects one which benefits you when you're young and another which harms you when you're old and because many individuals don't live long enough to experience the harms in old age they get away with it from evolution's point of view and evolution favors the trait in spite of the late life harm so those late life harms accumulate and that's the reason that we grow feeble with age and die and that's an exact mirror for the way we've set up our economic and political system where any process that is profitable in the short term at the consequence of having some dire implication for civilization later on those processes are so ingrained by the time we discover what the harm looks like in its full form there's nothing we can do to stop them okay so let's use two really important current examples so let's take facebook and social media and the way they've affected the information commons and the epistemic commons at large so we know that the nature of the algorithms optimizing for time on site while being able to factor what i pay attention to the whole tristan harris story very few people wake up and say i'd like to spend six hours on facebook and so i'm gonna spend more time on facebook if i kind of leave executive function rational brain and get into some kind of limbically hijacked state where i forget that i don't want to spend my whole day on facebook and so time on site maximization appeals to existing bias and appeals to limbic hijacks so if i piss off and scare and elicit sexual desire and whatever of the whole population while doubling down on their bias and creating stronger in-group identities associated with out-group identities the algorithms optimize well it is an ai of the power that beat kasparov at chess beating us at the nature of the control of our attention so we can see that the right got more right the left got more left the conspiracy theories got wackier the anti-conspiracy theory people became more uh upset at the idea that a conspiracy could ever exist basically everybody's bias doubles down and they all move apart from each other faster
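A minimal sketch in Python of the feedback loop described just above, with entirely made-up numbers and an assumed engagement model (an illustration of the dynamic, not a description of any real platform's ranking code): a feed that ranks content purely by predicted engagement, where engagement is assumed to rise with agreement and with emotional charge, nudges every simulated user further toward whichever pole they started nearest.

```python
# Toy model of engagement-ranked feeds amplifying prior bias.
# All numbers and functional forms are illustrative assumptions.
import random
import statistics

random.seed(0)

N_USERS, N_ITEMS, ROUNDS, FEED_SIZE = 200, 500, 30, 5
LEARNING_RATE = 0.1  # how far a user's view moves toward what they consume

users = [random.uniform(-1, 1) for _ in range(N_USERS)]  # opinions in [-1, 1]
items = [random.uniform(-1, 1) for _ in range(N_ITEMS)]  # item stances in [-1, 1]

def predicted_engagement(opinion, stance):
    """Assumed engagement model: content scores higher when it (a) agrees
    with the user's existing view and (b) is emotionally charged, proxied
    here by how extreme the stance is."""
    agreement = 1 - abs(stance - opinion) / 2   # 1 = perfect agreement
    arousal = abs(stance)                       # extremity as a proxy for outrage
    return agreement * arousal

def spread(opinions):
    return statistics.pstdev(opinions)

print(f"round  0: opinion spread = {spread(users):.3f}")
for r in range(1, ROUNDS + 1):
    new_users = []
    for opinion in users:
        feed = sorted(items,
                      key=lambda s: predicted_engagement(opinion, s),
                      reverse=True)[:FEED_SIZE]
        consumed = sum(feed) / len(feed)
        new_users.append(max(-1, min(1, opinion + LEARNING_RATE * (consumed - opinion))))
    users = new_users
    if r % 10 == 0:
        print(f"round {r:2d}: opinion spread = {spread(users):.3f}")
```

Running it prints the spread of opinions widening round over round, which is the "everybody's bias doubles down and they all move apart from each other faster" effect in miniature.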
well society doesn't get to keep working that is a democracy killer right that's an open society killer there's a reason china controlled its internet is if you don't want your society to die you have to be able to have some shared basis of thought so we can say and the story is oh we didn't know that was going to happen well you go back and look and guys like jaron lanier were at the very beginning of facebook and google whatever saying hey guys this ad model is going to everything up like you can't do the ad model thing you got to have paid for subscription or you know some other kind of thing and they were like shut up dude um and or just don't even engage in the conversation and then get to say afterwards failure of imagination but now how do you regulate it when those corporations are more powerful than countries because some of the regulation is going to happen in a court where the lobbyists have to be paid for by somebody right so who are the lobbyists paid for by and it has to be supported by popular attention and those who can control everybody's attention can also affect what is in popular attention so this is a very real example where we know the harms were known and and it actually got large enough that it killed the regulatory apparatus's capacity absolutely um in fact again this is going to be another alignment of our maps so what i've been playing with is the idea that we are incorrect in imagining that people necessarily want their uh their expectations flattered that people actually may like to be challenged but that it's inconsistent with the well-being of advertisers that the very fact is because advertising is only a tiny fraction informative and is mostly manipulative you have to be in your unconscious autopilot phase in order for it to cause you to buy a car you wouldn't have otherwise bought or buy different deodorant than you would otherwise buy and so the point is in order for us to be the thing gets paid for by advertising in order to be useful to advertisers we have to be unconscious and the only way to keep us unconscious is not to challenge us basically to tell us what we think we already know rather than what we need to know and so they're lulling us into this even though we would still be interested in the platforms if we weren't being advertised to but we would be interested in having more important conversations there which is really in some sense what the the growth of heterodox podcast space is about oh my goodness okay there's two directions i want to go at the same time i'll just pick one there's a reward circuit on exercise and there's a reward circuit on junk food right and they both have a dopaminergic element and reward circuit but of a very different kind and the reward circuit on exercise is that it actually feels like at first and it's hard but your baseline uh of happiness measured in whatever dopamine opioid access whatever gets better over time and then you start actually feeling better over time but not quickly this is another place where um temporal myopia ends up mattering because there's a delayed causation on the healthy one and no delayed causation on the unhealthy one so i start getting the reward circuit on exercise when i start seeing results and then i want to push hard and then i'm willing to actually go against entropy and put energy into the system so the energy grows whereas the chocolate cake i get a reward instantly and i don't have to apply any energy but as i do it my baseline gets worse and this is the like addiction versus health reward circuit direction
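A small worked comparison, again with invented numbers, of the two reward circuits being contrasted here: an option that pays off immediately but erodes the baseline versus one that costs effort now and raises the baseline later. An agent that discounts the future steeply picks the immediate option every time, even though the delayed option wins on undiscounted, lived-through totals; that is the "delayed causation on the healthy one" problem in arithmetic form.

```python
# Toy comparison (all numbers are illustrative assumptions) of instant payoff
# that erodes the baseline versus delayed payoff that raises it, as seen by
# agents who discount the future at different rates.

HORIZON = 30  # days the agent looks ahead / days we simulate

# (immediate_reward, daily_change_to_baseline)
JUNK    = (1.0, -0.05)   # feels good now, baseline well-being drifts down
HEALTHY = (-0.3, +0.08)  # costs effort now, baseline well-being drifts up

def discounted_value(action, discount):
    """Value of repeating `action` every day, as judged from day 0:
    each day's experienced reward = immediate reward + accumulated baseline shift."""
    immediate, drift = action
    total, baseline = 0.0, 0.0
    for day in range(HORIZON):
        total += (discount ** day) * (immediate + baseline)
        baseline += drift
    return total

for name, discount in [("myopic agent (discount 0.60/day) ", 0.60),
                       ("patient agent (discount 0.97/day)", 0.97)]:
    junk = discounted_value(JUNK, discount)
    healthy = discounted_value(HEALTHY, discount)
    choice = "JUNK" if junk > healthy else "HEALTHY"
    print(f"{name}: junk={junk:6.2f}  healthy={healthy:6.2f}  -> picks {choice}")

# Undiscounted totals over the month, i.e. what is actually lived through:
print("experienced total, junk   :", round(discounted_value(JUNK, 1.0), 2))
print("experienced total, healthy:", round(discounted_value(HEALTHY, 1.0), 2))
```

With these assumed numbers the myopic agent chooses the junk option day after day while the patient agent chooses the healthy one, and only the healthy policy comes out ahead on the experienced, undiscounted total.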
and the same is true for scrolling facebook compared to reading an educational book at the end of a month of reading the educational books my life feels better i feel more proud of myself at the end of a month of scrolling facebook i'm like what the am i doing with my life and and yet that one will keep winning for the same reason that 70 percent of our population is overweight and over a third of them obese and so my only hope is that not everyone who has access to too many calories is obese right like there are some people who figured out hey that's a reward circuit i don't want to do and i'm going to exercise and i'm going to not eat all of the fat sugar salt that evolution programmed me to have a dopamine hit for because it's a shitty direction now we need to get that number of people who actually have taken some sovereignty over their fitness and well-being in the presence of the cheaper reward circuit we need to get that number up to everybody because right now obviously being overweight is one of the main causes of death in the developed world but we have to then apply that to the even more pernicious hypernormal stimuli because salt fat sugar are hypernormal stimuli in the gustatory system right we have to apply that to the sensory system that's coming in through things like social media and that means less social media less entertainment more study and it doesn't have as fast a reward circuit it just doesn't but it has a much better longer term reward circuit where your baseline goes up and this is where enough mindfulness and enough discipline have to come in because otherwise the orientation of the system is that it's more profitable for corporations for me to be addicted because you maximize lifetime value of a customer through addiction and it's an asymmetric war because they're a billion or trillion dollar company and i'm me so how do i win in that asymmetric war where it's in their profit incentive whether it's mcdonald's or facebook or fox for me to be maximally addicted i have to recognize holy right like i actually have no sovereignty even if i claim to live in a democracy against those who want to control and manipulate my behavior in a way that is net negative for me holistically while having the plausible deniability that i'm choosing it because they're coercing my choice so i have to get afraid of that enough that i mount a rebellion right a revolutionary war in myself against those who want to drive my hyper normal stimulus reward circuit so the whole how can everybody become more immune to the shitty reward circuits and notice them and become immune to them and how can they become more oriented to the healthy reward circuits that's another way of talking about what we have to do writ large yeah that's beautiful i completely agree um in fact it dovetails with another thought that uh first time i thought it i thought it was original and then having said it i discovered lots and lots of people had said it before me um that there's a very close relationship between wisdom and delayed gratification that it's the ability to bypass the short-term reward circuit in order to get to something uh deeper and better um that is you know that is what wisdom is about but you didn't include on your list what i consider to be maybe one of the most important instances of the failure that you're talking about which is sex there's a very direct comparison at least for either males who are wired uh in a normal fashion for a straight guy or women who are toying
with that same programming which i believe there are many but the comparison between casual sex which is certainly we are as males wired to find that a very appealing notion because it's such a bargain if you can produce a baby where you're not expected to contribute to its raising that's a that's a huge evolutionary win um and then you have to compare that to the rewards of a deep romantic lasting relationship with commitment and the problem is that the deep lasting relationship stuff has a hard time winning out over the instant gratification thing if the instant gratification thing is at all common and uh so that's really screwing up people's circuitry with respect to interpersonal relationships and bonding and i have a sense that it is also in a way that's much harder to demonstrate contributing to the derangement of civilization that people many fewer people have a relationship you know it's not like uh marriage is easy right it's not it's super complex but having somebody who you can fully trust somebody who you've effectively you know fused your identity with to the level that they share your interests and uh you know they may be the only person who'll tell you what you need to know at some points and the fact that many people are missing that i think is uh deeply unhealthy yeah so i would say that market type dynamics benefit from exploiting the shitty reward circuits across every evolved reward circuit axis and so from an evolutionary point of view survive and mate are the things that make your genes get through primarily so we mentioned the survive the calorie one early earlier right so in an evolutionary environment i could get plenty of green leafy things in many environments um it was very hard to get enough fat enough sugar and enough salt those were evolutionarily rare things so more dopaminergic hits on those so fast food ended up figuring out how to just combine fat salt sugar with no other nutrients with maximized uh ease of palatability and textures and there's like a scientific optimization of all of the dopamine hit with none of the nutrients so you can actually be obese and dying of starvation right and what that is to nutrition where you would should have a natural dopaminergic kid on something that has nutrients built in for you know adaptive purposes is what porn and online dating is to intimate relationship is what facebook and instagram is to tribal bonding is how do we take the hyper normal stimuli part of it out separate it from the nutrients and make a maximally fast addictive hit that actually has none of what requires energetic process yeah i've called this the junkification of everything and it is directly an allusion to junk food where we can most easily see this but the idea is you will be given a distilled so if i can rephrase what you said in terms that are more native to me when you are wired to seek you know the umami taste that tends to be very tightly correlated with meat you will tend to get a lot of nutrients along with it in in the ancestral context in the modern context we can figure out how to just trip the circuit that tells you to be rewarded and it's no longer a proxy for the things that it was supposed to lead you to and as you just said you can now look at that across every domain where you have these dopamine rewards and understand why people are you know living in the world that pinker correctly identifies we are living in where we have just a huge abundance and yet are so darn unhealthy certainly unsatisfied right it explains that that 
paradox of being better off in many ways than any ancestor could have hoped to be and yet being effectively ill across every domain yeah i will say something about this that's important i mean briefly the fact that life expectancy started going down in the last five years in the us and certain parts of the developed world is really important to pay attention to but the deeper point i want to make is the hubsyan view on the past i think is one of those like mistake theory information theory things um i mean mistake theory conflict theory i think the dialectic of progress is such a compelling idea and we're oriented to the opportunity and not the risk in the same way we don't want to look at the risk moving forward that would have us avoid an opportunity we don't want to look at good things in the past and we don't want to look at good things of cultures that we want to obliterate so we want to call the native american savages so that we can of course emancipate them uh historically and we want this hobbesian view that people had brutish nasty mean short-lived lives in the past so that we don't have to face the fact that advanced civilizations failed and that is what our own future most likely portends i think um i think that is a convenient wrong belief system in a similar way well i hope you don't hear me doing that i certainly don't you don't i just i had to say it you have to say it so it's clear to to our our listeners well i appreciate you doing that um i did want to go back to a couple things you said and you know of course this happens every time you and i talk where every every thread you know takes on multiple possible directions we could go and there's no way to cover them all but in any case you pointed to survival and mating being the primary mechanisms to get your genes into the future and i want to point out that this is one of these places where our wiring which is biased in the direction of those places where our ancestors had agency that was meaningful up upends us and in fact this is something i think you and i are struggling against as we try to compel people of the kind of danger we're in and the necessity to upgrade our system you know before we run into a catastrophe too big to come back from and so in any case within your population survival and mating makes sense as an obsession but probably the biggest factor in whether or not your genes are around 100 generations from now is whether the population that you were a part of persists and so you know my field has done a terrible job with this we have gotten pretty good at thinking about individual level adaptation and fitness and you know when i say lineage people still don't know what i'm talking about and they're i'm confused about why i'm focused on it and my sense is it's like two components to an equation and you know you're either aware of the lineage thing but you misunderstand it as group selection or you're not aware of the lineage thing and you think group selection is a fiction and it's all about individuals and you know both of those are ways to misunderstand um the point i'm so happy to hear you saying this because uh i'm sure this is a conversation i would love to go deeper and understand the the distinctions between lineage and group selection the way that you see them but if i just even take the concept of group selection as opposed to just individual selection and take a species like sapiens and say there was no such thing as an individual that got selected for that was not an effective part of a group 
of people and the tribe the band was the thing that was being selected for so there was a fundamentally kind of pro-social behavior that was requisite um but then we get bigger than the dunbar number only like yesterday evolutionarily and that whole the whole evolutionary dynamics break because that pro-social behavior only worked up to that scale when everybody could see everybody and knew everything right like there's when we start looking at how do we solve collective action problems you start realizing well how do if we make some agreement field as to how nobody does the near-term game theoretically advantageous relative to each other long-term bad thing there has to be transparency mechanisms to notice it so the beginning of defecting on the whole defecting on the law the agreement field the morals is the ability to hide stuff and get away with it well you can't hide stuff in a tiny tribe very well even even if you can do it once twice 10 times sooner or later if hiding is your instinct you'll be revealed and the cost will exceed what you've built up by uh by pulling it off however many times you've done it and so there's a forced transparency in that smallness of evolutionary scale and when you start to get up to a large scale and now there have to be systems where everybody isn't seeing everyone and i'm smart enough i can figure out how to play it and the whole while pretending that i'm not hiding the accounting of it and getting ahead that's the evolutionary niche for corruption for parasitic behavior so one way i would describe and as you've described down here before if there's a niche with some energy in it it's going to get exploited right um we have to rigorously close the evolutionary niches for human parasitic behavior humans parasitizing other humans and the first part of that is a kind of forced transparency that if someone were to engage in that it has to be known and now the question is that all the versions of that we've explored at scale look like dreadful surveillance states so how do you make something that doesn't look like a dreadful surveillance state that also doesn't leave evolutionary niches for parasitic behavior that ends up rewarding and incenting sociopathy absolutely so a bunch of different threads one the eleanor ostrom work is important because it does point to the fact that you can scale these mechanisms up in fact selection has scaled up these enforcement mechanisms beyond a tiny number of people who know each other intimately now it hasn't scaled them way up but it's proof of concept in terms of the ability to get there and it's a model of what these systems might look like the other thing though your your focus on corruption i think is absolutely right and one way to just detect how stark the difference is is the recognition of how many times in an average day you encounter right in other words how many advertisements do you encounter in an average pre-covid day let's say right these are all cases where somebody you don't know or almost all of them are cases where somebody you don't know is attempting to manipulate you into spending resources differently than you would otherwise spend them so this is an overwhelmingly dishonest interaction with your world and there would have been some dishonesty for an ancient ancestor you know obviously there are uh creatures that attempt to look like what they are not but in general one could see the world as it was and the deception was the exception not the rule
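A toy simulation of the niche being described above: cheating pays exactly to the extent that it can go unseen, so the "forced transparency" of a small band closes the niche that a large anonymous society opens. The payoffs and detection probabilities below are assumptions chosen only to make the comparison visible, not measurements of anything.

```python
# Toy model (assumed payoffs) of the point above: defection is profitable
# only to the degree that it can be hidden, so a high detection probability
# ("forced transparency") closes the niche for parasitic strategies.
import random

random.seed(1)

ROUNDS = 200
GAIN_FROM_CHEATING = 3.0   # extra payoff a defector skims per undetected round
COOPERATION_PAYOFF = 2.0   # what everyone earns per round when interactions are honest
PUNISHMENT = 5.0           # cost of being caught (exclusion, lost reputation)

def defector_advantage(p_detect):
    """Average per-round payoff of one habitual cheater minus that of an honest
    agent, when cheating is noticed with probability p_detect each round."""
    cheater, honest = 0.0, 0.0
    for _ in range(ROUNDS):
        honest += COOPERATION_PAYOFF
        if random.random() < p_detect:
            cheater += COOPERATION_PAYOFF - PUNISHMENT   # caught this round
        else:
            cheater += COOPERATION_PAYOFF + GAIN_FROM_CHEATING
    return (cheater - honest) / ROUNDS

for scale, p_detect in [("small band, everyone sees everything", 0.9),
                        ("large anonymous society", 0.1)]:
    print(f"{scale:40s} p_detect={p_detect:.1f}  "
          f"cheater's edge per round = {defector_advantage(p_detect):+.2f}")
```

Under these assumptions the cheater loses badly where detection is near-certain and comes out ahead where it is rare, which is the "evolutionary niche for corruption" opening up as scale grows and visibility shrinks.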
and in some sense uh we live in a sea of right and we're so used to it that we don't even recognize that that's abnormal that it is the um the result of a gigantic niche that has opened up as a simple matter of scale as you point out uh and that um restoring a state where you can actually trust your senses you can by and large trust the people who you're interacting with to inform you rather than lie to you um would be a huge step towards uh reasonability oh i really hope that we follow all the threads here because this is getting so close to the heart of what we have to do as scale increases the potential for asymmetry increases and as the asymmetry increases the asymmetric warfare gets more difficult to deal with so let's think about this in terms of market theory let's think about an early hypothetical idealized market like literally people just took their shoes and their cows and their sheep and their service offerings to a market and they looked at exchanging them and then because trading cows for chickens is hard we have some kind of currency to mediate a barter of goods and services but we're talking about real goods and services maybe there's two people maybe there's five people that sell shoes there's not five thousand of them and i can go touch the shoes myself i can talk with them i can see what the price is and there is no hyper normal stimuli of advertising it's like somebody yelling from his thing so there's a symmetry between the supply and the demand side right the supply side is a guy or a few guys selling something and the demand side is a person or a family trying to buy something and they can kind of tell each other to some degree of symmetry buyer beware becomes an important idea but now when this becomes nike and i'm still one person there's still a symmetry between supply and demand in aggregate meaning the total amount of money flowing into supply equals the total amount flowing out of demand but this side is coordinated and this side isn't you don't have something like a labor union on all purchasers where it's like all facebook users are part of some union that puts money together to counter facebook and lobbying and regulation you have facebook as like up close to a trillion dollar organization against me as a person and i'm still the same size person that i was in those early market examples but there wasn't like a trillion dollar organization and now when that happens manufactured demand kills market theory and classical market theory which is the idea of why a market is like evolution right it's like some evolutionary process is that the demand is based on real people wanting things that will actually enhance the quality of their life and so that creates an evolutionary niche for people to provide supply and then the rational actor will purchase the good or service at the best price and of the best value but of course as soon as we get to a situation where and you look at dan ariely and all the behavioral economics saying the the homo economicus the rational actor doesn't exist we end up making choices based on status that's conferred with a brand based on uh the compellingness of the marketing based on all kinds of things that are not the best product or service of the best price but you also get that i want stuff that will not increase the quality of my life i desperately want it because the demand was manufactured into me so it's not an emergent authentic demand that represents collective intelligence it's a supply side saying i want to get them to want more of my and i actually have the
power to do that using applied psychology like actual and as soon as you get to split testing and the ability to ai split test a gazillion things we're talking about radically scientifically optimized psychological manipulation for the supply side to create artificial demand and then be able to fulfill it and most of that ends up being of the type that is actually bad for the quality of the life of the people but you have the plausible deniability they're choosing it hey i don't want to be patriarchal and control what they're doing the people are choosing it i'm just offering the source of supply that they're wanting that's like offering crack to kids and then when they come back for more of it like saying hey so this is that was one of the threads i wanted to address well i love it back in must have been 2013 when game b was actually a group of people who met in a room and talked about things one of the points that i was making in that context was this inherent asymmetry around unionization and that the problem is unions have gotten a bad rap because of the tight association cognitively that we have with labor unions right we think of unions and labor unions as synonymous but union is actually a category it's potentially a very large category and effectively management always has the benefit of it the question is will workers have a symmetrical entity right that's the labor case but you can make the same case with respect to you know banking credit unions don't work that way they're very bank-like but if they were structured in such a way to actually you know unionize people who utilize the bank they could be highly effective they could be a complete replacement for the insurance industry which doesn't even make sense in a market context but as a risk pool you could do a very effective job so anyway yes the question is how do you scale up the collective force and especially how do you do it in light of the fact that the entities that are already effectively unionized see it coming and they disrupt it with all of their very powerful tools and so well anyway go ahead i want to say the beginning of an answer to that because i think it brings us to what you've been largely exploring in the show of late the breakdown of democracy and open society and what do we do about that and how that relates to breakdown in culture and breakdown in market we can look at the relationship between those three types of entities so a way of thinking about what the architectural idea of a liberal democracy is and why say the founders of this country set it up not as a pure laissez-faire market but as a state that had regulatory power and the market together the idea was that a market will provision lots of goods and services better than a centralized government will so let's leave the market to do the kind of provisioning of resource and innovation that it does well but the market will also do a couple really bad things it will lead to increasing asymmetries of wealth inexorably this is what piketty's data showed but it's just obvious having more money increases your capacity to have access to financial services and you know you make interest on debt and compounding interest on wealth and so you end up getting a power law distribution of wealth
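To make the compounding point concrete, here is a minimal sketch in Python (all numbers are invented for illustration and are not Piketty's data or anything cited in the conversation): agents start with identical wealth, returns compound multiplicatively, and anyone above the mean gets a small assumed edge standing in for better access to financial services; the result drifts toward a heavy-tailed distribution in which a small fraction holds most of the total.

import random

random.seed(0)
N_AGENTS, YEARS = 10_000, 50
wealth = [1.0] * N_AGENTS  # everyone starts equal (illustrative assumption)

for _ in range(YEARS):
    mean_wealth = sum(wealth) / N_AGENTS
    for i in range(N_AGENTS):
        # hypothetical edge: being above the mean buys slightly better average returns
        edge = 0.01 if wealth[i] > mean_wealth else 0.0
        wealth[i] *= 1.0 + random.gauss(0.03 + edge, 0.15)
        wealth[i] = max(wealth[i], 0.0)  # ruin is absorbing in this toy model

wealth.sort(reverse=True)
top_1_percent_share = sum(wealth[: N_AGENTS // 100]) / sum(wealth)
print(f"share of total wealth held by the top 1%: {top_1_percent_share:.1%}")

Even with the edge set to zero, multiplicative compounding alone concentrates wealth far beyond the starting equality; the assumed edge only accelerates it.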
so then a few people in just the market dynamic would be able to have way outsized control over everyone else against everyone else's interests and the market creates opportunities for things that are really bad we all know that like we want there to be a thing called crime where even though there's a market incentive for child sex trafficking and whatever else we say no we're going to create some rule of law that binds that thing and not just have the market drive it so the idea is that we create a state that we actually give a monopoly of violence to so it has even more power at the bottom of the stack of what power is than the top of the economic power law distribution so the wealthiest people and the wealthiest corporations will still be bound by this rule of law and the rule of law is an encoding of the collective ethics of the people right the ethics are the basis of jurisprudence and there is some kind of democratic process of getting to say what is it that we consider the good life and important that we want enshrined in rule of law we give that a monopoly of violence and really then the goal of the state is to bind the predatory aspects of market incentive while leaving the market to do the things that it does well and pretty much every law is where someone has an incentive to do something which is a market type dynamic that is bad for the whole enough that we make a law to bind it okay so the purpose of a state is to bind the predatory aspects of a market that only works as long as the people bind the state and the people can bind the state if you have a government of for and by the people of an educated populace who had a quality of education that made them capable of understanding all the issues upon which we are governing and making law and a fourth estate where the news that they are getting is of adequate quality and unbiased enough that they're informed about what's currently happening if you think about that that's what a republic would require and you realize that both public education and the fourth estate have eroded so badly for so long it's not that we're close to losing our democracy it's dead we don't have a republic we have a permanent political class and a permanent economic lobbying class and a people who aren't really actively engaged in government in any way at all beyond maybe jury duty now and again if they can't get out of it and if the people to be engaged in government in any meaningful way had to tell the doe what they think should be done about grid security and energy policy or tell the dod what should be done about nuclear first strike policy or tell the fed and treasury what they think about interest rates they have no idea how to have a governance of for and by the people they don't have that education they don't have the media basis so if the culture if the people can't check the state then the state will end up getting captured by the market and so you'll end up having the head of the fda be someone who ran a big drug or a big ag company and the head of the dod being somebody who ran lockheed or some military industrial complex manufacturer you'll have just straightforward lobbying which gets paid for by those who have the money to pay for lots of lobbyists and so then you end up getting a crony capitalist structure which is worse than just an evil market because now it has the regulatory apparatus of rule of law and monopoly of violence backing up the market type dynamics so then we say okay well what do we do here and we see that civilizations fail towards either oppression or chaos right those are the two failed states they fail towards oppression if trying to create some coherence happens
through a top-down forcing function they fail towards chaos if not having enough top-down forcing function everybody kind of believes whatever they want but they have no unifying basis for belief and so then they will balkanize they'll tribalize and then the tribal groups will fight against each other and so either we keep failing towards chaos which we can see is happening in the west and in the us in particular right now and then china which is happy to do the oppression thing and oppression beats chaos at war right because it has more ability to execute effectively which is why china has built high-speed trains all around the world when we haven't built a single one in our country so either we lose to china in the 21st century and oppression runs the 21st century or we beat china at being china meaning beat it at oppression and it's like those are both failure modes what is there other than oppression or chaos is order that is emergent not imposed which requires a culture of people who can all make sense of the world on their own and communicate effectively to have shared sense making as a basis for shared choice making the idea of an open society is that some huge number of people can all make choices together a huge number of people who see the world differently and are anonymous to each other not a tribe that was an enlightenment era idea right born out of the idea that we could all make sense of the world together born out of the philosophy of science and the hegelian dialectic that we could make sense of base reality and that we could make sense of each other's perspective dialectically find a synthesis and then be able to have that be the basis of governance so what i think is this is not an adequate long-term structure because we can talk about why tech has made nation-state democracies obsolete and it's just not obvious yet but it has but as an intermediate structure the reboot of the thing that was intended has to start at the level of the people at culture and that collective sense making and collective good faith dialogue because without that you can't bind the state and without that you can't bind the market incentives okay i love this riff of yours okay i think there's a tremendous amount that's really important and the synthesis is super tight i know people will have a little bit of trouble following it but i actually would advise them to maybe go back through it and listen to it again because it's right on the money as far as i'm concerned there's one place where i wonder if it doesn't have two things inverted so you talk about the two characteristics that are necessary in order for what did you call it liberal democracy or whatever it was that you used as a moniker to function one of them had to do with the idea that the state was big enough to bind the most powerful and well-resourced actors and the second was that the people have to be capable of binding the state now i understood you to say that what failed first was the people's ability to bind the state is that correct i'm saying that's at the foundation of the stack where we have to address the failure recursively so as i see it what happened was the fact that there is always corruption it's impossible to drive it out completely the corruption self-enlarges the loopholes and becomes subtle enough that it's hard to see directly the most powerful actors suddenly got an infusion of power and we could trace down the cause of it but
let's just say somewhere in recent history the most powerful actors became more powerful than the state and what they did with that power was they unhooked the ability of the state to regulate the market i believe the reason for this was that each individual industry had an interest in having its regulations removed in order to create a bigger slice of the pie for it and so effectively what you had was each industry agreeing to unregulate every other industry like you can unregulate if i have a pharmaceutical company right and you're an oil company and you want to make money but you have to be able to up the atmosphere to do it and i want to make money giving people drugs that they shouldn't have and you know corrupting the fda then we'll partner and so what you got was many industries partnering to unhook the ability of the state to bind the market but one of the things that they had to do in order to make that work was they had to eliminate the ability of the people to veto right and so this is where we get this incredibly toxic duopoly that pretends to do our bidding and pretends to be you know fiercely opposed the two sides of it but in fact the thing they're united about is not allowing something else to compete with them for power so it's you know the wolf in sheep's clothing is in charge of the thing that is supposed to be protecting us from wolves in any case we don't have to go too deep there but this is actually super important go for it this is related to the thing we said about as the market as a whole gets bigger the individual consumer stays an individual consumer but the supply side the company gets much larger as that happens the asymmetry of the war between them of the game theory between them gets larger and so manufactured demand becomes a more intense thing well the same thing is true in terms of the market's capacity to influence the government and the market government complex's capacity to keep the population from getting in the way of the extraction and so there's a heap of mechanisms that happen and there's not like five guys at the top who are coordinating all this it's a shared attractor or incentive landscape that orients it yeah largely emergent yeah and where there are people conspiring it's because of their shared incentive and capacity to do so and so the conspiracy is itself an emergent property of the incentive dynamics which then in turn doubles down on the types of incentive dynamics that make things like that succeed so okay let's take a couple examples if people haven't read it they should all read at least the wikipedia page on public choice theory a school of libertarian thought that critiques why representative democracy will always break down that the founders of the us basically said this which is all right we'll come back to symmetry for a moment at the time that we were creating this structure of liberal democracy the size of choices and the speed of them was smaller and slower such that the town hall was a real thing and when the town hall is a real thing the coupling between the representative and the people is way higher right because the people are actually picking representatives in real time that are really representing their interests and they get to have a say in it there was a statement by one of the founders of the country that voting is the death of democracy because the idea is we should just be able to have a conversation that is good enough that we come up with a solution and everyone's like that's a
good idea if we can't then we vote but that means that some big percentage close to half the population feels unhappy with the thing that happened and so it's a sublimated type of warfare it's a sublimation of violence but that leads to a polarization of the population and so the goal is not voting voting is the last step for when we couldn't just succeed at a better conversation and speccing out what is the problem what are the adjacent problems what are the design constraints of a good solution can we come up with a solution that meets everybody's design constraints as best as possible okay so i disagree with this at one level as i'm sure you will as well i'm not sure but i suspect but i love something about the formulation that voting is itself a kind of failure mode right that ideally speaking if you had a well-oiled machine if you had a you know a military is the wrong analogy here but let's say you had a ship of people fighting impossible odds to make it back to safe harbor right the point is you really shouldn't want a system in which you're voting between two different approaches to the problem you should want a discussion in which everybody by the end is on board and if you try to do that in civilization we'd never accomplish anything right you effectively have to give the majority the ability to exert a kind of tyranny over the minority in order to accomplish the most basic stuff but that's because the system is incapable of doing what a better system would do which is to say this is the compelling answer and you're going to know why by the time we decide to do it wait there's a symmetry here between the conversation that we had about the market incenting people who focus on the opportunity and not the risk such that it actually suppresses those who look at the risk once you say hey there's always going to be somebody talking about a risk that isn't going to happen we'll innovate our way out and that becomes the story now you have plausible deniability to always do that once you say there's no way to get everybody on the same page we can't do that it'd be too slow now i don't even have any basis to try right and so i don't ever even try to say what is it that everyone cares about relative to this so i'd even know what a good solution would look like to craft a proposal no we're going to vote on the proposition having never done any sense making about what a good proposition would be and that's just mind-blowingly stupid right and so then who's going to craft the proposition a lawyer and the lawyer is paid for by who some special interest group and so most of the time what happens is you have some situation where one thing that matters to some people has this proposition put forward that benefits it simply in the short term but it externalizes a harm to something that matters to other people but ultimately all of it matters to everybody just differentially weighted and how do we put all those things together so okay we're going to do something that's going to benefit the economy but harm the environment well everybody cares about the economy and everybody cares about the environment but if i put forward a proposition that says in order to solve climate change we have to agree to these carbon emission controls that china won't agree to and therefore china will run the world in the 21st century and we all have to learn mandarin or be like the uyghurs or something okay well now i have a bunch of people who because they hate the solution space because it harms something
else they care about don't believe in climate change it has nothing to do with not believing in climate change not caring about the environment it's that they care about that other risk so much as well but if i said okay well let's look at it's a negotiation tactic is what you're saying that at the point that you want x prioritized over y you'll potentially you'll descend into a state in which you'll make any argument that results in that happening including why doesn't exist exactly because i'm so motivated by this other thing and the solution has a has a theory of trade-offs built in that is not necessary sometimes the theory of trade-off is necessary but oftentimes a synergistic satisfier could be found but we didn't try in the same way that a way to move forward with the opportunity without the risk could have happened we could have found a better way to do the tech that internalized that externality we just need to try a little bit more but there isn't the incentive to do it so let's say we said no we don't care about climate change by itself we care about the climate and we care about the economy and we care about energy independence and we care about geopolitics and we're going to look at the adjacent things we're making a choice in one of the areas necessarily affects the other area and we're going to bring those design constraints together and we say what is the best choice that affects these things together then we could start to think about a proposition intelligently we don't do this in medicine either we make a medicine to solve a very narrow definition of one molecular target of a disease that externalizes side effects in other areas without addressing upstream what was actually causing the disease and then the side effects of that med end up being another med and then old people die on 20 meds of iatrogenic disease so in complex systems you can't separate the problems that way you have to think about the whole complex thing better and so so the first part of fixing one part of fixing democracy that we have to think about is we have to define the problem spaces better more complex and we have to be able to actually have a process for coming up with propositions that are not stupid and intrinsically polarizing because almost no proposition ever voted on gets 90 of the vote it gets 51 percent of the vote which means half of the people think it's terrible and so what that means is you care about the environment i care about the economy on proposition a well you petition to get the thing to go through because you care about the owls there but i think that you're making my kids poor you're my enemy now and i'll fight against you now all the energy goes into internal friction fighting against each other and any other country that's willing to be autocratic and force all their people onto one side will just win and we will increasingly polarize against each other over something where we could have found a more unifying solution now this is fascinating for one thing you blazed by it there but i think so there's a place where jim rutt tells me that some place that you and you and he overwhelmingly agree also but there's a place in which you and he have hung up where he says that you believe that a properly architected system can do away with the trade-offs right right i think i just heard you give the answer that he must have understood to be that but wasn't it am i right that the answer there are lots of times when you don't see a trade-off because you have two characteristics both 
of which are sub-optimal and you could improve them simultaneously and so it looks like there's no trade-off between them if you push it far enough you'll eventually reach the efficient frontier where you do have to choose but if you're not near the efficient frontier there's no reason to treat it as a trade-off is that yes i'm not saying that we get out of having constraints i'm saying we can do design by constraints much better than we currently do and so i'm saying that there's a lot of things that we take as inexorable trade-offs that aren't well so you and i will have to chase this down at some point my argument will be any two desirable characteristics have an inherent trade-off between them even if you never see it right there are reasons you wouldn't see it but that if you push these things far enough you'll find that there are no desirable things that can be components of the same mechanism that will not exhibit a trade-off relationship right uh initially i don't agree with that at all but i'm sure you've thought about it a lot so i'm curious why you say it well let me give you let me give you the example i used to battle my friend scott pakur uh over with this which is he said why can't you make a car that's the fastest and the bluest right and it you know the first time i heard that i was like well okay maybe blue is trivial enough but it's not in fact if you wanted to make a car that was the fastest and by fastest let's say fastest accelerating well you're going to have to decide how to paint it if you also decide that there's some color of blue that is bluest and you want the car to be that color well then it has done a lot of the choosing of what paint you're going to put on it at the point you decide to paint it that color that paint will have components that will weigh something right the chances that the bluest whatever you define that to be is also the lightest and has the best laminar flow characteristics are essentially zero right because they're an infinite diversity of colors they will be made out of a wide variety of materials and the chances that the blue just happens to be the one that is lightest and has the best you know slipperiness relative to the wind are going to be vanishingly small and that means that if you want to make truly the fastest car it's its color will be chosen by whatever paint has the best characteristics and if you want to make it the bluest as well you'll make some tiny compromise that uh will you know probably not matter to you but it's there so the trade-off is there even if we don't see it but here's the thing daniel i discovered many years after my argument with scott was long since put to bed that i was right about this and the way i found out was that there is a a case where the navy wanted to set the time to climb record for an aircraft and they took an f-15 and they souped it up a little bit and in order to set the basically the vertical climb rate of this aircraft they stripped the paint off it and so if you look at pictures of this aircraft uh in its you know uh it's record setting run it isn't any particular color it's many different colors because effectively you've got the bare metal underneath with the paint stripped off it to save however many pounds of paint they were able to remove okay there are three points that come up to um address my initial thoughts on this here so one is with this particular case of a car the difference between the blue and the optimal color might be at the boundary of measurement itself yep and so 
while it's true that there it might not be a perfect optimum of both at the level of like a a nano scale optimization it is irrelevant to the scale of choice making for the most part and when we look at something like 100 and when we look at something like tesla cars they became faster off the line than ferraris and safer than volvo's and greener than prius's at the same time you could see that ground up design just doing a better job of ground up design was able to optimize for many things simultaneously so much better now had they made it less comfortable could it be faster still sure of course um so it's optimizing for a product of a bunch of things together but still in a whole different league than things had been previously now first of all this this is beautiful okay because this is exactly what i was hoping for okay um this is a question of us tripping over each other's language jim misunderstood what you were saying right and he asked me about it and i said uh yeah daniel can't be right about that if he's saying what you think he's saying but of course it wouldn't make sense that you would think that you could so your point about this being trivial you're in complete agreement with me and i suspect it would take nothing to get jim to agree to that formulation as well wait there's a difference there's one more thing i have to say here okay of course i'm not pretending that thermodynamics don't exist right and once you get down to the the quantum scale arrangement of the thing that orientation in one direction doesn't have effects on other things of duh yep there's a difference also between the blue and fast are two different preferences that are arbitrary that both want to be associated with a car that don't have some um intrinsic unifying function and we can say blue is a thing that's reasonable to be preferential about color whereas i would say that there are some characteristics that have a synergistic effect that increasing one increases the other one because of the way they are part of a overall increase in system integrity and so synergy is the key concept i'm trying to bring about here which is behavioral systems more than the sum of and unpredicted by the parts taken separately so when i say i'm looking for synergistic satisfiers the idea that i have x amount of input and that input has to be divided between these various types of output and it's linear is nonsense i can have i can have x amount of input and have something where the total amount of output has increased synergy based on the intelligence of the design the question of how do we design in a way that is optimizing synergy between all the things that matter becomes the central question yes which is of course the central question that selection must be dealing with in generating complex life and you know i don't again i don't think we have a hair's breadth of difference on what we turn out to believe about this trade-off space but what i would say is and i don't want to drag the audience too far down this road it's probably not worth it for what we need to do here but the benefit of being able to say so let's take your example of there are certain characteristics that will co-maximize not really because of the following thing let's say that we figure out what color is best for making the fastest car and then we say well i want to maximize grade 37 and speed now i can do it i can maximize gray 37 and speed because it just so happens that gray 37 is the color that has the best characteristics for speed right but 
then the point is you can't separate these two things whatever characteristic it is you're actually maximizing you've just found two aspects of it so your point about synergy is that with perfectly aligned characteristics we could describe that joint that fusion of those two things as one thing and we could maximize it right but then if we take the next one over right the next characteristic that we want to add to the list of things then again we're back in trade-off space so my only point here is that there is a value in order to be able to get the maximum power out of a trade-off theory what we want to do is make it minimally complex and the ability to say every two desirable characteristics have a trade-off between them the real question is the slope or the shape of the curve right and that many of these slopes and shapes mean we will see no meaningful variation on it because one side is a bargain and we will always see that manifest right that's the reason we don't see trade-offs everywhere is that in some cases a trade-off is so dumb that we don't see anybody exercising variation everything has made the same decision yes and i think for all practical purposes we agree that being able to make a tesla that is safer than a volvo and faster than a ferrari and greener than a prius is a possibility and that if we apply that to all of the problems in the world we could do a fuckton better job yeah i think we also agree and i love the last point that you made that to the degree that two things can be simultaneously optimized they can be thought of as facets of a deeper integrated thing yep okay so now to answer the way that i actually think about it though this is irrelevant if people disagree it doesn't matter at all to the earlier point i have to wax mystical a moment when einstein said it's an optical delusion of consciousness to believe there are separate things there's in reality one thing we call universe and everything is a facet of it if i look at the real things that we have a theory of trade-offs between in the social sphere and the associated biosphere that we're a part of so let's say like we talked about in the very beginning of our conversation what would optimize my individual well-being and what would optimize the well-being of all humans i only find that those are differently optimized if i again take a very short-term focus if i take a long-term focus i find that they are one thing because the idea that i'm an individual and the idea that humanity is a separate thing is actually a wrong idea they're facets of an integrated reality and if i factor all of the things that are in the unknown unknown set over a long enough period of time they're simultaneously optimized and this is the essence of dialectical thinking looking for the thesis and the antithesis and not voting between thesis and antithesis but seeking synthesis that's at a higher order of integration and complexity totally agree and you know i don't know how many people will be tracking it but effectively saying on an indefinitely long time scale these things converge is an acknowledgement that we are not talking about design space when we make this recognition right it's more like trajectory and that is perfectly consistent and frankly i think if everybody understood at some level the kind of picture we're painting people would be really comfortable with the degree to which it doesn't do exactly the thing they most hope it will
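One way to see the efficient-frontier point in miniature is a toy sketch with randomly generated "designs" and made-up axes, not a claim about actual cars: score each candidate on two desirable objectives, then separate the designs that can still be improved on both axes at once from the ones where any further gain on one axis has to cost the other.

import random

random.seed(1)
# each hypothetical design gets two desirable scores, call them "speed" and "safety"
designs = [(random.random(), random.random()) for _ in range(500)]

def dominated(d, pool):
    # d is dominated if some other design is at least as good on both
    # objectives (and, with distinct random floats, strictly better on one)
    return any(o != d and o[0] >= d[0] and o[1] >= d[1] for o in pool)

frontier = [d for d in designs if not dominated(d, designs)]
interior = [d for d in designs if dominated(d, designs)]

print(f"{len(interior)} of {len(designs)} designs can still be improved on both axes at once")
print(f"{len(frontier)} designs sit on the efficient frontier, where further gains trade off")

The interior is where the "synergistic satisfiers" live; the frontier is where the slope and shape of the trade-off curve start to matter.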
right in other words the level of compromise is small which is why the compromise in a healthy democracy even was tolerable even though that was nowhere near as optimal a system as systems we could develop okay there's a point a number of minutes back that i want to return to and i want to drop an idea on you it's actually a place where something you said caused me to complete a thought that i've been working on for some time so the thought as it existed is that markets are excellent at figuring out how to do things and they are atrocious at telling us what to do in other words they will find every defect in human character and figure out how to exploit it if you allow them to do that but when you have a problem that you really want solved right how can we make a phone that doesn't require me to be plugged into the wall allows me to get a message across a distance to report an emergency whatever markets do a better job than we could otherwise do of figuring out what the best solution is and so in some sense the question is how can we structure the incentives around the market so that markets only solve problems that we want them to solve but they can be free to solve them well and what i think i realized in this conversation here is that in some sense the role of the citizenry in a democracy is to discuss the values that we want government to deploy incentives around in other words the people by deciding what their priorities are what their concerns are which problems are top of the list to be solved and which ones could take a back seat that that's the proper thing that we are to be discussing and the role of government freed from corruption would be to figure out what incentives will result in the best return on our investment structuring the incentives of the market and then the market can be freed to solve the narrowest problems on that list and i think we fail at every level here but from the point of view of what we're actually shooting for i would say it's somewhere in that neighborhood that division of labor between the citizens the apparatus of governance and the market i'm suffering a little bit here because there's like 10 simultaneous threads that i really could address that are important and i know we're going to open up more as we keep going it would be really fun to go through the transcript of this and come back to the most important threads might be worth doing actually so first i want to say something heterodox against market theory which is i don't think the market is the best system for innovation of a known what and i think world war ii and the manhattan project is a very clear example and so is the apollo project and our failure at fusion i would say that the point you're about to make to me fusion would be our top priority because it's the only plug-and-play solution to a large piece of our problem and the fact that we decade after decade are awaiting a proper fusion solution says you know despite the fact that the market could potentially solve it the problem is the investments are too large on the front end and the reward is too delayed for the market to actually even recognize the problem correctly venture capital is not going to put up the amount of money that a nation state can for the amount of time that's necessary and when you look at the very largest jumps in innovative capacity a lot of them happened by nation state funding not market funding and then a market emerging in association with kind of government contracting and so if we
like if we look at why the nazis were so technologically farther ahead than everyone else going into world war ii with the enigma machine in the beginning of computing with the v2 rocket it was not a market dynamic it was a state dynamic where they invested in science and technology development for a long time which is why this tiny little country with limited industrial supply capacity had more technological advancement than the soviets or the us and it was it was our ability to steal their and rip it off and then be bigger than them that was a big part of how we were able to succeed in the war effort and so that's a clear example that like computers were developed by a state not the market right hold on a second i want to be careful because i don't want to falsify something that isn't false i again think this is a place where our mappings uh or at least the language surrounding them is going to upend us because this sounds like a place where a government is capable of generating a massive incentive to cause a problem to be solved that the market won't even find on its own right so that does not strike me as inconsistent with what i was just saying the state recognizes there's a problem creates an incentive big enough to find the solution and that incentive can be uh big enough to cause people to get different degrees than they would otherwise seek and uh but in these cases it wasn't like so let's take the manhattan project it wasn't private contractors that solved it because the government had made the incentive it was actually government that solved it it was government employees and so this is a this important distinction nasa was not a private space contracting thing that did the apollo project it was a government project so i would say the largest jumps we ever made in tech did not happen in the market for the most part well so then i guess the test of your falsification here is the following question if the manhattan project had consisted of a state yanking people out of their beds and standing over them with rifles would it have worked i mean maybe you know the russian version is closer to that um but i i think the point is you still have a you have a system of incentives correctly solving a problem that the market would not have found on its own and no entity in the market would have been big enough to solve so i still see it as consistent but you might you might convince me otherwise especially if it turns out that a negative incentive would be just as effective at creating the solution there's a there's a story that people don't innovate well under duress the innovation requires executive function and prefrontal function and if they're too limbically oriented they won't innovate well which is one of the reasons why we need an open society and i think there's probably some truth to this but less truth than we would hope i believe it was called the sharashka system which was a russian uh basically prisoner of war type camps that had scientists that were doing real innovation up to you know early sputnik-like work so we know that people under rifled arrest can innovate we know that people conscripted by draft into an army can actually innovate on behalf of the military now i think that it's true that something more like a market will explore more edge cases that are not known once and come up with interesting things whereas the centralized thing can do a better job sometimes of existing watts that require very high coordination because if you look at the manhattan project the scale 
of the budget and the scale of coordination no company has that and a bunch of companies competing for intellectual property and whatever it wouldn't have worked right right one of the reasons i bring this up is because there's a whole bunch you mentioned fusion whether it's fusion or whether it's um thorium or whether it's closer to room temperature superconduction or any of the things it could possibly generate whether it's 65 percent efficient photovoltaic through nanotech there's a bunch of things where we're like we kind of know the science that could lead to the breakthrough but the level of investment just isn't there um and i think there's a heap of examples like this where the percentage of the budget of the national budget that used to go to r d has went down a lot and it shouldn't and the apollo project was kind of the last thing of its type and then the government starting to shift to government contractors started to be a source of massive bloat where the government contractors had an incentive to just charge whatever the they wanted which is why then elon could beat lockheed and boeing at rockets so much cost-wise because then in that situation he didn't have to do the fundamental innovation on rocketry he could just out compete them with market incentive and then that could create enough money for iterative innovation i think fundamental innovation of certain scales does require larger coordination than markets make easy okay so then i want to modify what i said because you've convinced me i didn't have it right in the initial one so the the point then is you have to extend the governmental structure so that it can deal with two types of market failure one surrounding the natural system of incentives which will cause you to innovate things that do net harm for example and the other is a failure where the scale of the market is not sufficient to solve certain problems that are in our collective interest to solve yes and we don't want to give the government that much power because we don't trust that kind of authority but that's because the people aren't checking the government which comes back to the thing that we talked about earlier and now this becomes one of the central questions of the time is what is the basis of legitimate authority and how do we know and what is the basis of warranted trust because we all know what it means to have trust that isn't warranted we everyone who disagrees with us we think that their trust isn't warranted right like if if we're on the left we think people who believe in who trust trump it's unwarranted and they think that the people who trust the fda or vaccine scientists or the cdc have trust it's unwarranted we also know that legitimate authority the idea of legitimate authority is so powerful to be able to be the arbiters of what is true and what is real that anyone who is playing the game of power has a maximum incentive however successful they are to be able to capture and influence that for their good we also know that it's possible to mislead with with exclusively true facts that are cherry picked or framed so i can i can cherry pick facts on one side or the other side of a gaussian distribution and tell any story i want that will make it through a fact checker so fact checking is valuable but not even close to sufficient um so i can lie through something like the atlantic as well as i can lie through something like breitbart through different mechanisms for different populations yeah this this is a super excellent point as well that 
a fact checker errs in one direction and if you can build a falsehood out of true objects that have been edited then the fact checker won't spot it so love that point and so i can do a safety analysis on a drug and i'm not looking at every metric that matters i'm looking at some subset of the metrics and it might be that it's safe on those metrics but all-cause mortality increases life expectancy decreases but i only did the safety study for two years so i wouldn't notice that so i can say no methodologically this was perfect and sound and it also just doesn't matter because i wasn't measuring the right things right and so basically what you have just said means that the replication crisis can be understood as a mechanism for generating data which can be cherry picked to reach any conclusion you want about the effects of this intervention or that intervention right because effectively what you have is the ability to choose between experiments where sampling error will result in both outcomes being evident somewhere this is another one of those is it conflict theory or mistake theory things i can intentionally manipulate an outcome that looks methodologically sound and then say oh we just didn't know those factors right i'm not saying whether or not that's happening but it certainly can happen
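The point about sampling error producing both outcomes somewhere is easy to demonstrate with a toy simulation (invented numbers, not a model of any real trial): run many small studies of an effect that is actually zero, and a handful will clear a significance threshold in each direction, so an advocate for either conclusion has real, methodologically clean studies to cite.

import random

random.seed(2)

def small_study(true_effect=0.0, n=20):
    # one underpowered study: the noisy sample mean of a zero effect
    return sum(random.gauss(true_effect, 1.0) for _ in range(n)) / n

results = [small_study() for _ in range(200)]
cutoff = 1.96 / 20 ** 0.5  # rough two-sided 95% threshold for a mean of 20 unit-variance draws

benefit = sum(1 for r in results if r > cutoff)
harm = sum(1 for r in results if r < -cutoff)
print(f"studies 'showing' benefit: {benefit}")
print(f"studies 'showing' harm:    {harm}")
# roughly 5% false positives, split across both tails, out of a true effect of zero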
okay so now we get back to the question of how do you have a legitimate authority that has the power of being the arbiter of what is true and real and all the power that's associated and have it not get captured by power interests that is a very very important question how in the name of the bible and christendom and jesus saying let he who has no sins cast the first stone did we do the inquisition right like weird mental gymnastics by which the authority of that thing was able to be used for the power purposes of the time and so now when you start to have increasing polarization between the left and the right and historically more academics being left leaning and the social sciences being so complex that you can cherry pick whatever the you want and do methodologically sound and yet still misrepresentative stuff then you say is that actually a trustworthy source and then we say okay well do we want a bunch of wacky theories going out over facebook and twitter and whatever do we want to censor it well if we censor it who is the arbiter of truth that we trust if we don't censor it we're appealing to the worst aspects of everyone and making them all worse in all directions like those both suck so bad and that's the oppression or chaos right and the only answer out of the oppression or chaos is the comprehensive education of everyone in the capacity to understand at least three things they have to increase their first person second person and third person epistemics their third person epistemics is the easiest philosophy of science formal logic their ability to actually make sense of base reality through appropriate methodology and find appropriate confidence margins second person is my ability to make sense of your perspective can i steelman where you're coming from can i inhabit your position well and if i'm not oriented to do that then i'm not going to find the synthesis of a dialectic i'm going to be arguing for one side of a partiality for something that will actually harm the thing i care about in the long run and then first person can i notice my own biases and my own susceptibilities and my own group identity issues and whatever well enough that those aren't the things that run me when i look at kind of the ancient greek enlightenment the first person was the stoic tradition the second person was the socratic tradition the third person was the aristotelian tradition there's a mirror of all those in modernity we need a new cultural enlightenment now where everyone values good sense making about themselves about others about base reality and good quality dialogue with other people that are also sense making to emerge into a collective consciousness and collective intelligence that is more than our individual intelligence so that we have some basis of something that isn't chaos but that also isn't oppression because it's emergent more than imposed so it's like it's cultural enlightenment or bust as far as i'm concerned all right so i don't disagree with you fundamentally i believe this is a place where when i say my version of this which is much less sophisticated in some ways and focused elsewhere but when i say my version of it i lose people because my version of it is something like what we need to do is doable we can see the trajectory from here you can't see the objective but you can see the direction to head and it will take three generations to get there right i agree what you're describing you couldn't just simply take that curriculum and infuse it into any system we've got and have any hope of people learning it or giving a about it or whatever it wouldn't work so you have to build the scaffolding that would allow a population to be enlightened in this way such that the governance structure you're imagining might arise out of it and could flourish but let's put it this way it's at least three generations out before you would have gotten there even if you started doing things right now and so what i try to say to people in order that they don't completely lose interest in the possibility of a solution because it's too far out is things can start getting better right away we are not going to live to be in that world that is the objective and even if we did we would never be native there right our developmental trajectory will have been completed in a world that doesn't function like that and so you know you can be happy as an expat but we would be expats in the world we're trying to create and that's fine you know if our grandchildren or our great grandchildren were native there and we could be expats there that would be a perfectly acceptable solution but i think in general people have the sense that a solution sounds like something that we could have in the next few years and i just don't see the possibility of it no and anything that can be implemented quickly you want to red team and say either how does it fail or where does it externalize harm and also what arms race does it drive with whoever doesn't like it and if you factor
function of the failure modes on both sides that's why it's important to look at oppression and chaos and say these both create failure modes so what is it that doesn't orient in either of those directions it's not more power to authorities it's not more pure libertarianism it's something that's outside of that axis or it is going to involve the equivalent of negative feedback right in other words a thermostat works by virtue of not embracing it being hot or cold but by pushing it in the right direction as it diverges one way or the other so i very much like your point about synthesis here just to make it clearer synthesis is two things even linguistically speaking we can talk of a synthesis right which is an object you could write it in a book a synthesis between several different concepts could exist in a book incidentally that's sort of what i see myself doing in biology is synthesis but your point is the most important aspect of synthesis is that it is a process right and so that process is the thing that takes these competing failure modes and rescues from them something that suffers neither consequence and heads towards optimality so i agree we have to get yeah so synthesis is an ongoing process and let's say i have some bits of true information in a thesis and some bits in the antithesis so the synthesis will have more bits than either of them higher order complexity but it will still have radically fewer bits of information than all of reality about that thing the model is never the thing right it's just the best we can do epistemically at that moment so now i want to go back to the earlier topic around the theory of trade-offs that you said because i let it go but as soon as you mention optimization i have to bring it back because it comes back exactly here and it also brings back this question you had that markets can do a good job with the how but not the what which is the is-ought distinction that comes up in science right yes it is science can do a good job of what is but not what ought which means applied science i.e. technology i.e. markets can do a good job with changing what is but not in the direction of ought and so that is ethics which is to be the basis of jurisprudence and law that's exactly why you bring those things together and it's because is is measurable third person measurable and verifiable repeatable it's objective it's objective right whereas ought is not measurable you can do something like sam harris does in the moral landscape and say it relates to measurable things but it doesn't relate to a finite number of measurable things there's a gödel proof that whatever finite number there are some other things that we end up finding later that are also relevant to the thing that weren't part of the model that we were looking at and so the thing that is worth optimizing for you talked about the blue and the fast would be part of the same thing the thing that is worth optimizing for is not measurable it includes measurables but it is not limited to a finite set of measurables that you can run optimization theory on and have an ai optimize everything for us yeah i agree you will have a long list of characteristics that you can measure and as you go from the most important to the least important you'll eventually drop below some threshold of noise where you're not noticing things that contribute so yes you've got a potentially infinite set of things that matter less and less and you will inherently concentrate on the biggest most
important contributors up top and that's natural it's an issue of precision at some level but one where we shouldn't convince ourselves that we're solving the puzzle completely at a mathematical level an engineering solution is not a complete mathematical solution right okay so now i come back to the waxing mystical thing and i don't think it has to be thought of that way i think the way einstein was doing it he says spinoza's god is my god i'm happy to do it that way so the first verse of the tao te ching is the dao that is speakable is not the eternal dao right the optimization function that is optimizable with a narrow ai is not the thing to optimize for is a corollary statement and the jewish commandment about no false idols is that the model of reality is never reality so take the model as this is useful it's not an absolute truth the moment i take it as an absolute truth i become some weird fundamentalist who stops learning who stops being open to new input and in optimizing the model where the model is different than reality i can harm reality and then defend the model so i always want to hold the model with this is the best we currently have and in the future we'll see that it's wrong and we want to see that it's wrong we don't want to defend it against its own evolution and so what we're optimizing for can't be fully explicated and that's what wisdom is wisdom is the difference between the optimization function and the right choice oh i love this this is great obviously it dovetails with the basic sense of what metaphorical truth is and the recognition that actually metaphorical truth isn't something that applies only to religious style beliefs it's actually the way we do science also you know we have approximations and things get ugly when people forget that that's what they're dealing with right and they start really treating it as the object itself a very important example in my field is the instantiation of the term fitness right which in most cases has so much to do with reproductive success that we actually just synonymize them most of the time and we speak as if they're interchangeable which is great except for all those cases where they go in opposite directions which we are perennially confused by and so anyway sooner or later i will deliver some work that will take the cases that we can't sort out because we've misdefined fitness and forgotten that it was a model in the first place and show how you would solve it differently if you defined fitness in a tighter way but that's a story for another day all right so where should we go you were on a roll so you'll see conversations from really smart people like nick bostrom and max tegmark and whatever saying that because of the collective action problem and the multi-polar trap race to the bottom and because of the complexity of the issues that we face which are beyond what the smartest person could manage by a lot the only answer is to build a benevolent ai overlord that can run a one world government because it can process the information to make good choices so as you can guess my answer is vigorously no yep not just because i think the optimization function that it would run no matter how many variables would end up becoming a paperclip maximizer but i think its own existential risks are bound up in that process these guys know this but it's easy to pick solutions like that compared to the other ones that seem maybe even more likely to go terrible
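The paperclip-maximizer worry is, in miniature, the problem of optimizing a proxy: a narrow objective built only from measured qualities will happily spend the unmeasured part of what actually matters. A toy hill-climb, with the coupling between measured and unmeasured qualities invented purely for illustration:

import random

random.seed(3)

def true_value(x):
    return x[0] + x[1] + x[2]       # what actually matters has three parts

def proxy_score(x):
    return x[0] + x[1]              # the optimizer's model only measures two

x = [1.0, 1.0, 1.0]
print(f"before: proxy {proxy_score(x):.1f}, true value {true_value(x):.1f}")

for _ in range(2000):
    cand = list(x)
    cand[random.randrange(2)] += random.uniform(-0.05, 0.05)
    # assumed coupling: each unit gained on the measured qualities
    # drains two units from the unmeasured one (a shared, finite budget)
    gain = (cand[0] + cand[1]) - (x[0] + x[1])
    cand[2] -= 2.0 * max(0.0, gain)
    if proxy_score(cand) > proxy_score(x):
        x = cand                    # the narrow optimizer accepts every such move

print(f"after:  proxy {proxy_score(x):.1f}, true value {true_value(x):.1f}")

The proxy climbs while the true value falls, which is the gap the conversation is calling wisdom.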
so then we say okay we don't want a one world government run by any of the people we currently have and we also don't want separate nations where any of them that defect lead everybody into a race to the bottom so that means that they have to have rule of law over each other because they affect common spaces so how do you have rule of law over each other without it being one world government and without capture oppression or chaos at various scales and the only answer is the comprehensive education and enlightenment of the people that can check those systems now obviously the founding of this country was fraught with all the problems that we know of now in particular and it was still a step forward in terms of a movement towards the possibility of some freedoms from the feudalism it came from and so i find the study of the foundation of it the theoretical foundation of it meaningful to what we're doing right now and famously there's this quote from george washington where he says something to the effect i'm going to paraphrase it the comprehensive education of every single citizen in the science of government should be the main aim of the federal government and i think it is fascinating so science of government was his term of art and science of government meant everything that you would need to have a government of for and by the people which is the history the social philosophy the game theory and political science and economics as well as the science to understand the infrastructural tech stack and whatever right the hegelian dialectic the enlightenment ideas of the time but the number one goal of the federal government is not rule of law and it's not currency creation and it's not protection of its borders because if it's any of those things it will become an oppressive tyranny soon it has to be the comprehensive education of the people if it is to be a government of for and by the people now this is the interesting thing now i remember where i want to go comprehensive education of the people is something that makes more symmetry of power possible increasing people's information access and processing is a symmetry increasing function so everyone who has a vested interest in increasing asymmetries has an interest in decreasing people's comprehensive education in the science of government and so now let's look at the education changes that happened following world war ii in the us there is a story that i buy that the u.s started focusing on stem education science technology engineering math super heavily partly because it was an existential risk because look what happened with the stem that the germans did and now we know that a lot of the german scientists that we didn't get in operation paperclip the russians got and sputnik and so it's an existential risk to not dominate the tech space so we need to really double down on stem and we need all the smartest guys we need to find every von neumann and turing there is so the smarter you are the more we want to push you into stem so you can be an effective part of the system that's part of the story but also the thing that washington said the education in the science of government we started cutting civics radically and i think it was because social philosophers at the time like marx were actually problematic to the dominant system and i'm not saying that marx got the right ideas i'm saying the idea of okay we have a system where let's have the only
people who really think about social philosophy be the children of elites who go to private schools who learn the classics and otherwise let's have people not the system up as a whole but be very useful to the system by becoming good at stem i think this is a way of being able to simultaneously advance education and the kind of education that would be necessary to have a self-governing system that's fascinating that's fascinating because of course if you have the elites effectively in charge of governance they can do exactly what you would imagine the elites would hope for which is to govern well enough that the system continues on no matter what but to continue to look out for the distribution of wealth and power and make sure nothing upends it right they'll do it they won't even realize necessarily that that's what they're doing i also love the fact you know george washington is one of these characters who it's very easy to misunderstand how good he was because you know he wasn't the most articulate founder or in you know classical terms the the smartest founder by far on the other hand an awful lot of wisdom buried in in george washington and uh this idea of you know ultimately he was looking very deeply into the future potentially to understand why the education of the populace would be effectively synonymous with the job of government and it's not because the purpose is the education but it's because that's the only hope that a democratic system will spit out the kind of solution that you want it to generate which is uh i don't know it's a very it's a very interesting analysis so it it raises something else here which is on my list of notes arising which is i noticed this pattern all over the place there's a state which is awesome very powerful in terms of what it can do but it's fragile and so it falls apart right in other words we will never have a better system as far as i can tell than science for figuring out what's true and what is possible so it's the most capable state there are measures by which it is the strongest state but it is also terrifically susceptible to market forces in fact it can't be in the same room with them right so we could look for many examples of this where something marvelous requires a very careful arrangement of conditions in order for it to survive and i'm wondering what you make of that in light of this discussion i guess it's not hard to make an argument for why that those two things go together capacity and fragility but what are we to do about it going forward because surely we're trying to build these states but do so in a robust form they go together because of synergy which is you have properties that none of your cells on their own have you as a whole there's a synergy of those cells coming together that creates emergent properties at the level of you as a whole thing but if i run all the combinatorial possibilities of a way of putting those 50 to 100 trillion cells together very few of them produce the synergy of you there's most of them are just piles of goo yeah right and so it's a it's a very narrow set of things that actually has the very high synergies and it's lots of things that are pretty entropic um and entropy is also obviously easier i can i can take this house down in five minutes with a wrecker ball but it took a year to build yup and i can kill an adult in a second but it takes 20 years to grow one so this is why the first ethic of hippocrates and of so many ethnic systems is first do no harm then try to make better but first do 
no harm if you can succeed at the maintenance function then you can actually maintain your progress functions and come back to where you were going with that well so here's what i'm after i agree with your basic entropic analysis that it is easier to destroy than to build the number of states that work is vastly exceeded by the organization of the same pieces that don't but what i'm wondering about is in effect one has to be able to build a system that is resistant to that in other words and life does this right living creatures manage to fend off entropy beautifully and in fact we need a governmental structure that has that same trick and we haven't seen it yet and the question is unfortunately i fear that it is almost a prerequisite that if you build the capable structure and you haven't built the thing that protects it first then it will be captured before the wisdom develops to preserve it against that force and now i remember where i used the analogy of the body what i'm going to say here is wrong so let's just take it as a loose metaphor let's say in the body that the closest thing to top down organization is the neuroendocrine system but there's a bunch of bottom up that is at the level of genetics and epigenetics and cellular dynamics and whatever and there is a relationship between the bottom-up and top-down dynamics well obviously i can take a cell out of a body and put it in a dish it has its own internal homeodynamic processes it's dealing with entropy on its own they don't need a top-down neuroendocrine signal for how they do that so let's say we tried to make a perfect top-down neuroendocrine system and the cells had no cellular immune systems or redox signaling homeodynamics or anything else you would die so quickly right there is no way to have a healthy body at the level of the organization of all the cells if the cells are all unhealthy and that's the comprehensive education of the individual thing we're talking about can you make a healthy system of government as a system can you just get the cybernetics right where that is separate from that which develops all of the individuals and the relationships between them and the answer is definitely not okay agreed but then here's the problem that i'm trying to articulate okay so we agree that the cells have to be coherent in and of themselves that there has to be a fractal aspect to this organization of things across many scales from the individual up to the body politic but if it is true that the key to making that work is that individuals which are analogous to cells here have to be educated in the nature of governance the theory of governance in order for this to work how would they end up that way well they would end up that way because governance will have created the conditions that would cause that education so are we not now saying that what is necessary in order for the system to function is that the system is already functional in order that it can generate the conditions necessary no there's no hole in the bucket situation there is a recursive situation between bottom-up and top-down dynamics and so let's take the classic dialectic that relates to right and left it's not the only one the one of individual and collective for a moment and say okay fundamentally the right is more libertarian individual pull yourself up by your bootstraps we want to have advantage conferred to those that are actually doing it they're conferring their own advantage and doing well
and then the left model the more socialist model is yeah but people who are born into wealthy areas statistically do better than people who are born into shitty areas in terms of crime and education and access to early health care and nutrition and all those things and you can't libertarianly hold yourself up by your bootstraps as a infant or a fetus and so let's make a system that tends to that well but then the right would say but we don't want something like a welfare state that makes shitty people that just meets their needs for them and orients them to lay on the couch all day and do tv and crack okay i think it's i think it's mind-bogglingly silly that we take these as if they are in a fundamental theory of trade-offs as opposed to a recursive relationship that can be on a virtuous cycle what we want to optimize for is the virtuous cycle between the individuals and the society so that do we want to create social systems that take care of individuals but make shittier people no do we want to create social systems that condition people that have more uh effectiveness and sovereignty and autonomy yes and do we want to condition ones that in turn add to the quality of society yes so if we don't want to make dumb social systems right so a social system that is more welfare-like is much dumber than a social system that provides much better health care and education and orientation towards opportunity for advancement rather than towards opportunity towards addiction cul-de-sacs and so we already have some people all the listeners of your show i think we already have some people who are trying to educate themselves independent of not having a government that is doing that that and this is why i say it has to start at culture before state or market it has to boot in that direction so those people can start to work together to say how do we influence the state and to start to then influence better education for more people better media and news for more people and how do we influence it to affect market dynamics where the market dynamics are more bound to the society well-being as a whole rather than extractive i like this because we actually do see this dynamic we see people actually seeking out nuance even though we're told that they won't do it and so the other thing we're seeing is for various reasons including covid the absurdity of the educational system that we have is being revealed in a way that it never has been before so many more people are recognizing that school will flat out waste your time if you give it that opportunity and therefore they have more license than ever to seek out uh high quality insight and exercises or whatever and to discount the value that we are assured comes along with a standard degree etc so yeah i'm i'm favorable to this idea also that you just said that's interesting is okay so george washington's quote comprehensive education of every citizen science of government well how can we afford that when most of them are going to be laborers because them having a strong background in in history and in political science and social science and the infrastructural tech stack does that help them be better farmers not really it helps them be better citizens and government but not better farmers and so can how do we afford to pay for all that additional education and how do they maintain that knowledge when they're just engaged in a labor-type dynamic and so this is why the children of the elite who are actually going to become lobbyists and senators and whatever go 
to that private school and get that education well now we have this ai and robotic technological unemployment issue coming up and it's definitely coming up right well the things that it will be obsoleting first are the things that take the least unique human capabilities because those are the easiest to automate so labor type things so either this is an apocalypse that just increases wealth inequality and everybody's homeless and or on the absolute minimum amount of basic income so the elites can keep running the robots as serfs rather than the people of serfs and just hook the people up to oculus with a basic income so they don't get in the way or this actually makes possible a much higher education of everyone so they can be engaged in higher level types of activities um yeah yeah now i agree with that completely and i also agree you know we should make sure people understand i mean i think it was very clear the way you said it but we are headed for a circumstance in which a shift in the way the market functions and what it requires is going to cause an awful lot of people to be surplus to it all at the same time and that can only play out in a few ways none of them are good if we don't see it coming and plan for it it's coming it's not the fault of the people who will be obsoleted um and so in any case yes uh this makes sense you mentioned you look at covet and you look at how many small businesses shut down and how much unemployment happened and then how much the market rallied because six companies made all of the money of the market and if you take those companies out the entire stock market is down but it's cap weighted and you basically have network dynamics metcalf law dynamics creating winner take all economies where you have one winner per vertical the wealth consolidation the wealth inequality has progressed so rapidly that all the that the measurements of gdp and market success and the measurements of quality of life are totally decoupled they're moving in opposite directions in really important ways when you combine how intense that is and that of course the forces with the most money are the hardest to regulate because they have the best lawyers and the ability for offshore accounts and for lobbying and whatever else so how do you do anything about this combined with the fact that the debt to gdp ratio is unfixable you realize that a reset of our systems will happen because this system cannot continue and we can either do a proactive one or we get the reactive one and the reactive one worse the reactive one is going to inherently be uh arbitrary and therefore much more violent in every sense of that term and so yes you are programming some kind of a uh unfortunately none of the terms that one would like are still available to us because great reset is obviously been branded in in somebody's interest but um yes we need some sort of a reboot uh that takes um heed of this dynamic and sets us on a path where it doesn't turn into a catastrophe or it doesn't turn into a spectacular win at everybody else's expense for some party or other and unfortunately of course if we circle back to an early part of this discussion convincing people of the hazard of this the essentially the certainty that something of this sort uh will happen if we do nothing that we must do something that that something must be coordinated that you can't pass it through your inherited lens of is this left leaning is this right leaning is this for my team is this against my team convincing people of that is 
extremely difficult in this environment because for one thing everything we would do to convince passes through these these platforms that if they haven't flexed their muscle yet as soon as we start talking about what would need to be done to save civilization in ways that they can recognize it they will find ways to oppose it and you've had this conversation on here before that let's say we can we look at a particular group and we can predict how they're going to respond to something we're going to say with quite high accuracy so we can take a particular woke sjw group and if we have a conversation of a certain type we can predict that they'll say oh that thing you're calling dialectic is giving platform to racists when you should be canceling them therefore you're you know uh racist by association or whatever you can take a q anon group and predict that they are going to say that because we talked to someone that was four steps away from epstein in a network that we are probably part of the deep state cabal of pedophiles or whatever it is and um to the degree that people have responses that can be predicted better than a gpt-3 algorithm they can't really be considered a general intelligence they are just a medic propagator they are taking in memes rejecting the ones that don't fit with the meme complex taking in the ones that do fit and then propagating them and i think people should i think if people think about that they should feel badly about not being someone who's actually thinking on their own and being a highly predictable memetic propagator and be like i would like to have thoughts that are not more predictable than a gpt3 algorithm i would like to know what my own thoughts about this are and in order to know what my own thoughts about it are do i can i even understand and inhabit how other people think all the things that they think that so that's that's one thing because it's not only going through the filters like facebook it's going through the filters of the fact that people have these memetic complexes that keep them from thinking and so the cultural value of trying to understand other people so that we can compromise because politics is a way to sublimate warfare right and if you don't understand each other and compromise you get war and the people who are saying yes let's bring on the war they're just dumb they just don't understand what war is actually like they haven't been in it right um well i think you have brought us to the perfect last topic here now of course i'd like this conversation to go on and we should pick it up at another date but the point you make about if we can demonstrate that we know what you're going to say then it isn't a thought worthy of a human right if we can predict you and it's not by virtue of us having modeled some beautiful thought process of yours it's because your thought process looks like that of you know an indefinitely large number of other people who are totally predictable and that's nothing you should be comfortable with i think we this goes back to the question i asked you at first which is when you engage in what i would call independent first principles thinking you immediately run into challenges that somebody who's not deeply involved in such a thing doesn't intuit right and so i'm imagining a person somebody uh who is decent who has compassion has all of the basic capacities you would hope they would have who has fallen into one of these automatic thought patterns and i'm imagining you manage to sit down with them and show 
them that their thought pattern is automatic and totally predictable and therefore nothing that they should be comfortable with and let's say that they walk out of the room and they start behaving differently and they start thinking for themselves they stay awake right well they're going to run into some stuff because they are of course going to end up landing on some formulations that as soon as they say them out loud are going to get them punished right that is inevitable now those of us who live out here learn how to say things in ways that sometimes the punishments don't stick we learn where they are best stated we learn what we shouldn't say yet but all of this speaks to what i think is it's not we don't live in an authoritarian uh state but we live in a state in which thought is policed as if we did right not perfectly but enough that one who wishes to escape from the accepted the sanitized narrative has to be ready for what happens next and that's something that is it's very hard to generate that in other words it's a developmental process that causes you to learn how to navigate that space so somebody who just simply recognizes i don't want to be an automaton and i'm going to start thinking for myself if their next move is to start thinking for themselves and speaking openly about it what comes back next is something for which we don't have a good response earlier you said when you were defining near the beginning of our conversation what you meant by independent thinker is someone who wants to go where ever the facts and information that are well verifiable actually leave them i would say that there there's something like the spirit of science which is a reverence and respect for reality where i want to know what is real and be with what's real more than i want to hold a particular belief no matter how cherished or whatever in group it i'm a part of in the the uncomfort of not belonging with the in-group if i want to belong with anything i actually want to have a belonging with reality first and a belonging with my own integrity and then with those who also share that and that the other belongings that i give up i don't stop caring about those people i care about them still but i don't necessarily care about their opinion of me enough that i'm willing to distort my own relationship with reality all right so here's the question i want to ask about this and i'm basically trying to surface some part of my own process in order to figure out what it is can it be improved can i teach it to others to the extent that it works there are so i was on bill maher with heather last friday and i said something that got an awful lot of pushback online which i knew was coming i said he asked if i thought the probability that coveted 19 was the result of a lab leak was at least 50 percent and i said something quite honest and shouldn't have been new to anybody who'd been paying attention to my channel which was that i had said back in june that i thought the chances were at least 90 percent now i can imagine that that number would be shocking to many people but i also know that where i in their shoes i would process it this way i would say all right this person seems intelligent i don't know of a conflict of interest that number is way off of what i would calculate therefore i need to file this as a flag do i not know something maybe the person has a conflict of interest and that explains it but if it's not that how have they arrived at a number that is so far off of what i would calculate and what 
does it tell me in other words i would become agnostic at that moment rather than go on the attack people don't give enough benefit of the doubt to people who think differently and they give too much trust to those who think the same right but then here's the place that the thought goes so is it true that if somebody intelligent says something that is completely inconsistent with my model of the universe that i will inherently give it enough credence to look at it it's a tough question because if i try some test cases if you told me that you believed that there was a strong chance that the earth was flat okay that would throw a huge error for me right because i know for one that i've checked right in fact i have years ago and several times said what are the chances there's anything to these flat earthers that they're not just a joke and then it's a trivial matter to find out what you need to know from your own experience that is inconsistent with that possibility and so the answer is okay i'm not going to spend too much time checking on it right then we get to is the moon landing fake right this one is tougher right it's tougher because when you look at the actual evidence that people are motivated to hypothesize the moon landing is fake there are some things in it that are hard to know i don't always know what the explanation for them is so anyway my point is there are some ideas i wouldn't be shocked at all to find that you believe there's some ideas i would be so shocked that i would imagine you're kidding or you've lost your mind or i don't know what and so we all draw that line somewhere and i guess my point is i think almost everybody even very very smart people who don't happen to be experienced in first principles independent thinking draw that line somewhere that creates a fatal error when independence is experimented with right that the number of things that you know it is the matrix in some sense once you start experimenting with what would i conclude if i was independent of all incentives and i just went based on the evidence and i gave everybody a chance to articulate their position what comes back is so jarring that most people are driven back into conventional automatic thinking because the frightening aspects of what they get in response are enough to drive them off the instinct yes okay god there's so much in here that's really good the thing about the flat earth is that the hypothesis is formally falsifiable even by an individual yes and the alternative hypothesis is formally verifiable with the best methods that we have with the highest confidence we can have and now one thing i would still say is interesting is i know many people who refer to flat earthers as the moniker of maximum stupidity who cannot do the copernican proof so they take as an article of faith that the earth is round but they actually don't know how to derive it and have never tried and so then they also move to taking as an article of faith similar things that don't have the same basis so does someone even understand what falsifiable and verifiable mean does someone have a basis for calibrating their confidence margin because if i start to talk about the moon landing or then i go a little bit further and talk about long-term autoimmune effects or epigenetic drift or whatever that might come from a vaccine schedule of 72 vaccines together is the standard narrative falsifiable or verifiable is the alternate narrative falsifiable
in the way flat earth is no so the fact that we put flat earth and anti-vax in the same category is an intellectually dishonest bad thing to do but the fact is that most people don't even know how to verify or falsify and so like with the lab hypothesis when you come to 90 percent i'm guessing you have a process for that what i would say is i haven't studied it enough to put a percentage because i don't have enough bayesian priors to actually come up with a mathematical number what i would say is i consider the idea of it coming from a lab and some kind of dual-use gain-of-function research to be very plausible and i have seen nothing that falsifies that and the few attempts that i saw early to falsify it were theoretically invalid to me now to be able to go from plausible to a probability number i would need to apply different epistemic tools than i have already applied well wait a second i'm not sure that that's the case because to me as a theoretician there are multiple hypotheses one is the virus escaped from a lab unmodified another is that it was enhanced with gain of function research and then it escaped another is that it was weaponized and deliberately released each of them is a hypothesis each of them makes predictions and they are all testable now i am not required to have any guess as to which one will turn out to be correct nor an assessment of how probable it is it is natural to have a guess but the two things function independently right as a scientist i am obligated to treat a hypothesis by the formal rules of science i know what they are i know how they work and therefore i know at what point any one of them is going to be falsified and what would be necessary for one of them to become a theory that is to say for all of its challengers to fall now i can also say look if i had to bet here's where i'd put my money but i happen to be a scientist who would be placing a bet and my bet is not a scientific bet yeah so we're aligned with that clarification agreed yeah okay good so that is my hunch i didn't come to that number through an actual bayesian or other kind of mathematical process but if i was actually trying to formally give my percentage basis i would go through some epistemic process and then if i had to make a consequential choice based on it the more consequential the choice is the more process i would want to go through to calibrate my confidence in it because the more problematic it would be for me to be wrong right okay so that all makes sense but the ultimate question here is given that we can see we want people not to behave in an automatic way in a way that is beneath human cognition's capacity to think and to react but we also know that when people experiment with that under the current regime it is not that they will produce conclusions that are different than they would otherwise produce say them to their friends and their friends will say oh that's interesting i didn't realize you think that instead their friends will say oh my god i can't believe you're one of them right and that thing is so powerful that it is artificially depressing the degree of independent thought because anybody who has experimented with it is likely to have effectively touched some third rail and retreated as a response so we don't know there's a failure mode on both sides there's a failure mode of creating artificial constraints where we don't explore the search space widely enough which is the one you're mentioning
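As a rough illustration of the kind of epistemic process Daniel alludes to above, going from "plausible" to a calibrated number, here is a minimal Bayesian sketch in Python. The hypotheses, priors, and likelihoods are placeholder values chosen only to show the mechanics of updating; they are not estimates of the actual question discussed in the conversation.

```python
# Minimal sketch of Bayesian updating over competing hypotheses.
# All numbers below are placeholders for illustration only; they are not
# estimates of the real-world question discussed in the conversation.

def normalize(weights):
    """Rescale a dict of non-negative weights so they sum to 1."""
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Prior degrees of belief over mutually exclusive hypotheses (placeholders).
priors = {
    "hypothesis_A": 0.4,
    "hypothesis_B": 0.4,
    "hypothesis_C": 0.2,
}

# Likelihood of observing each piece of evidence under each hypothesis
# (placeholders). Each new observation gets its own likelihood table.
likelihoods = {
    "evidence_1": {"hypothesis_A": 0.7, "hypothesis_B": 0.3, "hypothesis_C": 0.1},
    "evidence_2": {"hypothesis_A": 0.5, "hypothesis_B": 0.6, "hypothesis_C": 0.2},
}

def update(posterior, evidence):
    """Apply Bayes' rule: posterior is proportional to prior times likelihood."""
    table = likelihoods[evidence]
    return normalize({h: posterior[h] * table[h] for h in posterior})

posterior = dict(priors)
for e in ["evidence_1", "evidence_2"]:
    posterior = update(posterior, e)

print(posterior)  # calibrated weights after conditioning on both observations
```

The mechanics are separate from the inputs: weak priors or poorly vetted likelihoods still produce a confident-looking number, which is one reason a hunch and a calibrated estimate are not the same thing.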
there's another one of exploring the search space without appropriate vetting and jumping from hypothesis to theory too fast yes and those two are reacting against each other right there are people who say because it's plausible it is true they jump from hypothesis straight to theory without proof and then they believe wacky-ass shit yes and they insist that it's true and then people over here are like wow that's really dangerous and dreadful and anything that looks like that i'm going to reject offhand and similarly people over here believe standard models that end up getting either captured or at least limited and people over here react against that so this is another place that i would say the poles are driving each other both to nonsensical positions well yes and the way that works in practice is there is a team that in principle knows that it is in favor of doing the analysis but it does not believe itself capable of doing the analysis so effectively it signs up for the authority of those who claim to have done the analysis and in principle have the right degrees or whatever but then we run into this thing which goes back to something you've said in several places in this discussion which has to do with the bias amongst those involved in certain behavior in other words if you're an epidemiologist at the moment or a virologist there's a very strong chance that you believe the lab leak hypothesis stands a very low chance of being true but you also very likely have a conflict of interest you may be directly involved in the research program that would have generated covid-19 or you may simply be involved in social circles in which there is a desire not to have virologists responsible for this pandemic and therefore there's a circling of the wagons that has nothing to do with analysis but either way the tendency to converge on a consensus is completely unnatural and those who are earnestly trying to follow science end up following consensus delivered by people who claim the mantle of science while not doing the method and that is a terrible hazard yeah i agree and there's one step worse which is the thing that we mentioned earlier which is you can do the method have all of the data coming out of the method be right and still have the answer be misrepresentative of the whole because you either studied the wrong thing or you studied something too partial and so this question of what is worth trusting comes up again and it is okay i don't want to defect on my own sense-making to just join the consensus so that i am not rejected at the same time if everyone is sure that i'm wrong and i'm sure that i'm right i should pay attention to that right because very possibly i have a blind spot and i'm a confused narcissist and every once in a while they are all in an echo chamber and i'm actually seeing something and both can be the case sometimes so you're like okay do i always stick to my guns or do i always take whatever the peer review says neither this is again the optimization function isn't it wisdom ends up being i don't know the answer to this trolley problem before i get there right so what i have to say is is the basis by which the other people all agreed that you were wrong deliberative and methodological and earnest and free of motivated reasoning does it have a group motivated reasoning that's associated with it are there clear blind spots in the
thing you're thinking so i don't think there's an answer to the what actually is right there there is no methodology it's the tao that's speakable is not the eternal tao the methodology that's formalizable is not the thing that reveals the tao right like ultimately you have to end up adding placebo at a certain point and then double blinding and then randomization the methods have to keep getting better because there's always something in the letter of the law that doesn't get the spirit of the law and then the letter of the methodology that doesn't get actual science right right and in fact so a couple things here one there's a part of the scientific method which is a black box there's a part that actually i believe literally cannot be taught right it is the part where you formulate a hypothesis right that is a personal process if i taught somebody to do it my way i don't think they would do it very well right so the point is that's something that you learn to do through some process that is mostly not conscious hard to teach and hard to discover but everybody who does it well does it in some different way and so at that level even just saying do the method is incomplete because not everybody can do the method see there was something else oh yeah there was a missing thing on your list i realize you weren't trying to be exhaustive but there was a missing thing on the list of possible reasons that you could come up against a consensus and still be right even if you're the only person who disagrees and it has to do with the non-independence of the data points on the other side based on let's say either a perverse incentive or a school of thought having won out and killed off all of the diversity of thought over some issue that turns out to matter and these things can very easily happen so i would say yes if you always think you're right and when everybody's against you they're wrong then yeah narcissism is a strongly likely reason on the other hand it is as you point out with tesla and their competitors sometimes you find that a field or an industry is easy to beat that there's something about them that is maybe economically very robust but with respect to their capacity has become feeble and this is true again and again in scientific fields that scientific fields go through a process where a school of thought delivers handsomely on some insight it wins the power to own the entire field that insight runs its course diminishing returns set in it stops delivering anything new it doesn't give up the reins and hand them over to somebody else because there's no mechanism to do that so the people who have the school of thought that's already burned out its value stick to their power and that means that the field is wide open to be beaten by an outsider who just simply isn't required to subscribe to whatever the assumptions of the school of thought are and that happens so frequently that it's artificially common that you have the experience if you think independently and you know what you're doing that you'll disagree with just about everybody and they'll actually turn out to be wrong because they're proceeding from a bad set of assumptions so i think this is actually one of the most interesting applications of blockchain or decentralized ledger technology is this idea of an open science platform so imagine every time someone did a measurement the fundamental measurement it had to be entered
into a blockchain and then the other places that independently did it were entered into a blockchain so it was incorruptible and then the axioms and the kind of logical propositions get entered in and then the logical processes of whether i'm using an inductive or a deductive or an abductive process get put in and then we get to kind of look at the progression of knowledge then if at any point we come to realize that a previous thing in there was wrong some data was misentered or a hypothesis has proved wrong now we can come back to that point and look at everything downstream from it and reanalyze it of course you still have the oracle problem of the entry in the first place so if i'm doing motivated science and i get some answers i don't like and i can hide them and not enter them then that'll happen so you still have to have the proper entry into the system but this addresses something with the integrity of science and also the integrity of government and government spending and the capture by market forces of the regulators rather than the regulators being able to regulate the market which is we only know when the messed-up thing happens if we can see it which means that everyone who wants to do something asymmetric or predatory has a maximum incentive for non-transparency so certain kinds of incorruptibility and transparency are very interesting in what they can do towards that interesting now this actually comes back to something i wanted to raise earlier but didn't get to it which is i started out very focused on sustainability i believe sustainability is something that you can't measure too finely if you measure too finely then sustainability becomes an absurd block to progress because you can't dig a hole in your own backyard because you couldn't dig a million such holes but if you relax the system so that you're measuring processes that actually potentially matter sustainability has to be a feature of the system long term right it doesn't have to be the feature of the system in any given time period but overall it has to net out to a sustainable mode i wouldn't say a system has to be sustainable i would say the meta system or increasing orderly complexity has to be sustainable but that might mean a system intentionally obsoleting itself for a new better system okay i accept that but what i've realized down this road is that the system actually or the set of systems or the meta system however you want to describe it needs a fail-safe which i call reversibility right so the point is if you set the goal of sustainability and you say well we have to measure things that matter sooner or later you're going to fail to measure something that matters and you're going to deal with it unsustainably and at the point that you figure it out it's going to be too late so my point would be and you know this is a tough one people don't like the implications of this if they understand it but any process that you set in motion has to be something you could undo if it turns out to be harmful in a way that you didn't see coming right so that is to say you can alter the atmosphere carbon dioxide is not poisonous right the changes in concentration that change the degree of heat trapping are not terribly meaningful to the well-being of living creatures but at the point you discover that the heat trapping is going to massively change the way the atmosphere functions and the oceans etc you have to be able to undo it now undo it means you could change the concentration back to what it was
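Returning to the open-science ledger Daniel describes a moment earlier: here is a minimal sketch, assuming nothing about any particular blockchain, of an append-only, hash-chained log where each entry can cite the earlier entries it depends on, so that flagging one entry as wrong identifies everything downstream that needs reanalysis. The class and field names are illustrative, not the API of any real platform.

```python
# Minimal sketch of an append-only, hash-chained ledger of scientific entries.
# Names and structure are illustrative only; this is not any real platform's API.
import hashlib
import json

class Ledger:
    def __init__(self):
        self.entries = []  # append-only list; each entry's hash covers the previous hash

    def append(self, payload, depends_on=()):
        """Add a measurement/axiom/inference; depends_on lists indices of earlier entries."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {
            "index": len(self.entries),
            "payload": payload,
            "depends_on": list(depends_on),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["index"]

    def verify(self):
        """Recompute the hash chain; editing any past entry breaks it."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("index", "payload", "depends_on", "prev_hash")}
            if e["prev_hash"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest():
                return False
            prev = e["hash"]
        return True

    def downstream(self, index):
        """Everything that transitively depends on a flagged entry and needs reanalysis."""
        hit, frontier = set(), {index}
        while frontier:
            frontier = {
                e["index"]
                for e in self.entries
                if set(e["depends_on"]) & frontier and e["index"] not in hit
            }
            hit |= frontier
        return sorted(hit)

# Usage: two independent measurements, an inference built on them, a conclusion on that.
ledger = Ledger()
m1 = ledger.append({"type": "measurement", "value": 3.2, "lab": "site_a"})
m2 = ledger.append({"type": "measurement", "value": 3.1, "lab": "site_b"})
inf = ledger.append({"type": "inference", "method": "inductive"}, depends_on=[m1, m2])
ledger.append({"type": "conclusion"}, depends_on=[inf])
print(ledger.verify())        # True until any past entry is altered
print(ledger.downstream(m1))  # entries to reanalyze if measurement m1 proves wrong
```

Tamper-evidence comes from the hash chain, and the downstream query is what makes reanalysis tractable; the oracle problem Daniel mentions, whether honest data gets entered at all, is untouched by a sketch like this.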
now what this would mean in practice is that you would have to slow the process of change down such that you scaled up the process that would reverse the change in proportion now if you imagine all of the disasters that we have faced all the ones i named up top and all of the other ones that look like it from fukushima to aliso canyon to the financial collapse of 2008 and you imagine that in proportion to the process that went awry we had scaled up the reversal process so that it was there if we needed it right we would have been in a very different situation because a the process would have run away much much slower and b the tools to undo it would have been present and ready oh before you respond to that i do want to say that the only way that that would work is if it was over the entire system in other words if one nation for example were to decide that it had to adhere to a standard of reversibility while other nations weren't restricted in the same way you'd get a tragedy of the commons where the atmosphere or whatever other resource would ultimately be destroyed by the nations that didn't participate in that system and the nation that was most responsible would pay the cost of building a reversibility system that wouldn't work in the end but other than that i think the principle makes sense what do you think so something like sustainability having a consideration like reversibility as one of the factors to inform choice making that is a valuable consideration and it doesn't matter at all if we don't have the collective coordination capacity to be able to make the right choices period so yes agreed now regarding reversibility i think reversibility is a valuable consideration that is impossible in important ways but it's still an important consideration so can i decrease the amount of co2 in the atmosphere if we realize we need to kind of yes but the co2 in the atmosphere went up along with a lot of mountaintop removal mining for coal and a lot of species extinction in the process and a lot of people who died over wars for oil can i reverse and get those dead people back and those extinct species back and those pristine ecosystems back nope they're gone and then also reversibility over what time frame will new old growth forests come back thousands of years from now sure does that time scale matter if i drive a species extinct is that reversible does every species matter what about killing an individual element within it you know so it's like i can only think about reversibility on very narrowly defined metrics but the thing that harms that one metric has lots of other effects simultaneously and so we have to understand that reversibility by itself is an oversimplification because we'll always be thinking upon metrics that are subsets of all that's affected yep i agree it is an oversimplification as is sustainability but my sense is that you have to instantiate it in some way in order for the system to be safe and i would say if it prevents you from removing mountaintops as long as it prevents everybody else from removing mountaintops it's the right idea in other words if we are allowed to degrade the earth a little bit at a time by removing mountaintops now and drying up rivers next time then eventually you have a world that isn't very worth living in and i do believe that we have a moral obligation not to degrade the planet right that our highest moral obligation has to be to deliver the
capacity to live a complete fulfilling human life to as many people as we can and that means not um liquidating the planet it means a renewal process which is the very definition of sustainability and it's inconsistent with um removing mountaintops now lots of species don't matter right there are lots of little offshoots of species and they can go extinct and they do go extinct and nobody is harmed by their doing so which it's not the same thing as losing orcas or you know elephants or eagles or whatever so obviously you need to have a a rational threshold in which you protect against um against degradation and allow degradation that doesn't have an important implication but the question is really is it so compromised by those considerations that it's not worth considering or is it rescuable if one figures out how to apply a threshold so we said that one of the dialectics that defines the left and right in its most abstract form generally has to do with a focus on the individual versus a focus on society or the collective or the tribe or some kind of group another one is an orientation towards conservation or conservativeness traditionalness an orientation on the other side towards progress or progressiveness and again these are confused all over the place and even what we call left and right have shifted in the last you know a few decades in a number of ways but it's interesting here because when you talk about reversibility and sustainability another synonym is conservation what is it that we want to be able to conserve and so the conservative principle is focused on what has made it through evolution that is valuable enough that we should conserve it not it up and yet so interestingly the people who are often called conservatives are not focused on critical aspects of conservation um but if you you're talking about biosphere conservation right now oftentimes they're talking about sociosphere conservation the conservation of social systems and you're saying that underneath it is the capacity for humans to thrive and have meaningful lives and relationships and we would say that that is a function of the biosphere the socio-sphere and the technosphere and the relationship between them and so and we can say very clearly it's the technosphere ruining the biosphere most of the time and yet if it ruins the biosphere enough the technosphere goes because the technosphere depends upon the biosphere so we have to learn how to make a technological built environment that is replenishing regenerative with the biosphere and the sociosphere is another really critical one i think you'll probably actually have something to add to this i haven't thought of when i think of what the fundamental intuition of a conservative is even if they don't articulate it like this and the traditionalist kind of impulse which is let's go back to the constitution let's go back to christianity or european ideas or the free market or whatever it is uh or rigorous monogamy whatever social structure lasted for a long time that there there's an intuition even if they don't formally think of it this way logically that almost everything didn't make it through evolution in terms of social systems and the few things that did weren't the things that people thought would so there's a lot of embedded wisdom that wasn't understood that was very hard earned and we want to preserve that and not break it because we think we understand it well enough and we might not and that fundamentally the progressive intuition is that we're dealing 
with fundamentally novel situations that evolution didn't already figure out and we need innovation and of course the synthesis of that dialectic is we need new innovation that is commensurate with that which should be conserved and not everything should be conserved because some things made it through because they won short-term battles while up the long-term hole and so what things are worth being conserved what things are not worth being conserved did we understand it well enough that we didn't say this isn't worth being conserved out of hubris and then what progress is commensurate with that i think is a good way of thinking about that dialectic yeah i like it and i think there's the flip side of it as well which is that captured inside uh biblical traditions are some bits of some basically responses to game theoretic hazards that are consistent with things we've talked about so for example the christian sense that not only is the world here for humans to make use of but that we are in effect obligated to do it that belief fits perfectly in a world where if your population doesn't capture a resource somebody else is going to so in other words that belief structure travels along with a tendency to capture the resources that are available and to the extent that what that does is it causes the exploitation of a resource the tools with which those resources could have been exploited in biblical times almost always left a system that would return itself to equilibrium given an opportunity which isn't true in the modern circumstance so what we have is a place where there's lots of stuff that is um conservative that there's a very good and often hidden reason that we should preserve and then there are some places where we'd actually have to upgrade the wisdom because it doesn't fit modern circumstances and the conservation of the natural world is i think a clear case just because you mentioned this case when when people realize that christendom spread largely by holy war not exclusively but largely you need a religion that makes a lot of people that are willing to die in holy war because of a good afterlife and who you can spare right a lot large population of people that can die and war and islam and christianity both had this they both had um be fruitful and multiply and proselytize because they both had war as a strategy for propagation of the memes so you needed numbers whereas judaism didn't have it right and quakerism and some other ones didn't judaism had to actually make it hard for people to join the religion because you're not going to lose a lot of people as soldiers you're going to embed yourself as a diaspora within dominant cultures and end up affecting the apparatus of those cultures so it's interesting to think about how those different memeplexes had different evolutionary adaptations but it's important for the reason you mentioned is that those traditions were influenced by politics and economics and war and philosophy and culture and a lot of things so you can't wholesale throw them out or keep them or like you have to actually understand what allowed those memes to propagate and what their memetic propagation dynamics were and so that conservative impulse it says the things that made it through made it through for a reason yes but some of the things that made it through for a reason won't keep making it through dinosaurs were around for a long time and then they weren't right so um and as we've mentioned evolution can be blind and run very effectively into cul-de-sacs and 
yet the other side is all too often we will criticize a tradition for being dumb when we don't understand what made it work well enough and we throw something out that was actually worth not throwing out so how do you do a deep enough historical understanding to be able to decide what should be conserved and not is also a really good question it's a really important question because it's chesterton's fence effectively right nobody knows what actually was functional and what had no function but traveled along with it because they were paired very closely in a biblical text and what functioned in ways that we don't want it to function now these things are all invisible because the whole thing is encoded in myth so it's not in there right so yeah that's a huge hazard and it's a tough one for those of us who want to build reasonably and recognize that there's an awful lot that we have to do that's novel because it hasn't been accomplished before we have to grapple with the fact that it's not like these traditions are simply backward some of them are very insightful and non-literal and we need to exercise great caution in approaching them okay so i want to come back to your three generations at least problem it's easy to look at the nature of the problems and just assume that we are doomed and usually to tie that to some conversation about human nature and to say okay well we were able to figure out technology that was extraordinarily powerful to speak mythopoetically the power of gods the nuke was clearly the power of gods right and then lots of tech since then we can genetically engineer new species gain-of-function whatever without the love and wisdom of gods that goes in a self-terminating direction is it within the capacity of our nature to move towards the love and wisdom of gods to bind that power or are we inexorably inadequate vessels for the amount of power we have so then i do a positive deviance analysis to look at what are the best stories of human nature to see if they converge in the right direction and then also where there are conditioning factors that we take for granted because they've become ubiquitous and think that they're nature so if we go back to the bible for a moment we look at the jews and we ask was there a population of people that were able to educate all of their people at a higher level than most other people around them for a pretty long time in lots of different circumstances yes you look at the buddhists were there a population of people that across millennia and different environments were able to make everybody peaceful enough to not hurt bugs yes across all the genetic variants and across all of the economic factors and whatever else do we have examples of a very high level of cognitive development and a very high level of ethical development of different populations based on cultures we do and then we say oh well but look at how the founding fathers' ideas failed here well the comprehensive education of everyone is not in the interests of the elite that have the most power as we mentioned and so making it seem like that's an impossible thing is actually really good to support the idea that there should be some kind of nobility or aristocracy or something like that to control things because they're more qualified i would say that we have not in modern times ever tried to educate our population in a way that could lead to self-governance because there was no incentive to do so or those
who had the most capacity had incentive to do something else even when they said they were doing that so do i think that it's possible do i think that we have examples historically of people who developed themselves cognitively and ethically enough that if we did those together right buddhists jews however we want to talk about it do i think that's possible within human nature and basically untried yes yeah i love that and i agree with you it's dependent on something which we might as well spell out here which is that the difference in capacity between human populations is overwhelmingly if not entirely at the software level which i firmly believe i'm speaking as a biologist i've looked at this i will have to defend it at length elsewhere but the degree to which it's software that distinguishes us and therefore we can innovate tools we can democratize tools all of that is at our disposal and i agree with you it hasn't been tried and it might be our only hope but at least we've got prototypes now i will say why i'm grateful for what happened at evergreen is that you wouldn't be here doing this otherwise and on bill maher and you and heather are both exceptional educators and so the fact that your tiny little niche for education got blown up so that you took this quality of education to all the people who were interested at this larger scale i'm really happy about because my friend this exact thing is the thing that has a chance a strange attractor of those who are called to a cultural enlightenment starting to come together in a way that can then lead them to coordinate to build systems that can then propagate those possibilities for other people well i really appreciate that and i must say i feel it as a calling as i'm certain you do and so yes and i also love the point that you made earlier about the fact that the audience for this really is people seeking a kind of enlightenment and community and so yes as much as you and i both focus on existential risks there is hope in that yeah okay daniel well this is i think we've gone more than three hours it's certainly been a great conversation and there are so many threads that are worth revisiting which we should do sooner rather than later this was super fun i really enjoyed it yeah it was so daniel schmachtenberger where can people find you well you mentioned in the beginning we have something called the consilience project that will be launching soon via a newsletter in march and then a website in a few months so tune back in on that and it is a project in the space a non-profit project that is seeking to do a better job of news with education built in we look at very complex issues that are polarized and we make the epistemics that we're applying explicit so we're actually teaching people how do you sense-make complex situations in situ and then if anyone ever thinks we missed any of the data or got something wrong they can let us know and we'll publicly correct it and credit them if that's right etcetera and the goal there is helping to catalyze cultural enlightenment of this type and recognizing that both education and the fourth estate are requisite structures for an open society and open society being rebooted has to be rebooted at the cultural level first right now you can find me on facebook or one of those platforms or i have a blog an old blog everything's out of date on it civilizationemerging.com
civilizationemerging.com and are you not on twitter and does that explain how you're so clear-headed i'm not on twitter and i'm on facebook because of metcalfe's law because everyone is so it ends up being a useful introduction and messaging tool but yeah i'm not part of the twitter crew more power to you all right daniel this has been a pleasure and i look forward to our next one be well and everybody else thanks for tuning in thanks
Info
Channel: Bret Weinstein
Views: 266,352
Rating: 4.9241943 out of 5
Id: YPJug0s2u4w
Length: 192min 27sec (11547 seconds)
Published: Wed Feb 10 2021