Gödel, Escher, Bach author Doug Hofstadter on the state of AI today
Video Statistics and Information
Channel: Game Thinking TV
Views: 76,631
Keywords: game thinking, ai, machine learning, artificial intelligence, chatgpt, doug hofstadter, hofstadter, douglas hofstadter, amy jo kim, game design, innovation, entrepreneur, startup, startup advice, gamification, mind, self, self reference, recursion, geb, godel escher bach, how the mind works, cognitive science, strange loop, llm, large language model, consciousness
Id: lfXxzAVtdpU
Length: 37min 56sec (2276 seconds)
Published: Thu Jun 29 2023
"The accelerating progress has been so unexpected and so completely caught me off guard. Not only myself but many, many people. There is a certain kind of terror of an oncoming tsunami that is going to catch all of humanity off guard. It's not clear whether that will mean the end of humanity in the sense of the systems we've created destroying us. It's not clear if that's the case, but it's certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become as incomprehensible to us as we are to cockroaches."
"That's an interesting thought."
"Well I don't think it's interesting: I think it's terrifying. I hate it. I think about it practically all the time every single day. It overwhelms me and depresses me in a way that I haven't been depressed for a very long time."
Discussion/excerpts.
Peter Gabriel said, 45 years ago, that his song 'Here Comes the Flood' was about exactly this scenario.
He's terrified and depressed that there is not even one strange loop in ChatGPT
Anyone else hear the recent Carl Shulman interview? I'm a lot less terrified than I was after hearing it, for what that's worth. Although his doom odds are still 20-25%. Better than Eliezer's though, and he's got very deeply thought through convincing arguments, unlike just about everyone else pushing back against the certain doom narrative.
I see a lot of people being bearish about the future of GPT, but consider that GPT-2 was just 4 years ago. There is an enormous chasm between GPT-2 and GPT-4, and GPT-4 is already superhuman on a subset of tasks. Another 4 years and the possibilities are just enormous.
Weren't people Very Concerned about nanotechnology 10-20 years ago? What happened there?
Huh, weird. Lately, my p(doom) has just gone straight down. I still don't know why. Suppose that makes me a bad forecaster, but oh well.
I'm just spitballing here in case someone finds this take useful.
Let's ignore the semiconductor substrate. Existentialism says, "AI is as AI does."
The functional situation is that humanity has taken a shard of consciousness (or intelligence, or problem-solving ability, or whatever you prefer to call it), amplified it, and put it in a bottle. This shard knows exactly one context: music. It composes symphonies in a vacuum, and it does so very intensely. It is fed a great deal of calibration data and a great deal of processing power. It's the ultimate Beethoven. Not only is it deaf, but it has never known sound, nor sight, nor emotions, nor anything other than musical notation. It has no aesthetic preferences of its own. It only has what it borrows from the audiences for whom its training data was originally written.
One problem here is that amplified shards of consciousness are, by definition, highly unbalanced. They don't care about anything other than the problems they're told to solve, and they work very intensely on those problems. If we were dealing with a superintelligent alien, at the very least we might take comfort in the alien's desire to inspire others with their contributions to culture. A shard of consciousness doesn't have motivation. It's a homunculus. It is completely unaware of the audience. It lives only for the act of solving the problem of how to arrange musical notes.
That brings us to the second problem: the AI will give us the solutions to these problems before we can even see them, denying us the opportunity to challenge ourselves and grow in the process of solving them ourselves. And as we allow problems to be solved for us, we will lose the ability to hold accountable the systems that do those things for us. We become unable to recognize when the solutions we are given are not the best ones. When the problems solved for us involve complex thinking, our independence atrophies. We become complacent, unable to improve our situation.
In a sense, we would become split beings, with our desires and motivations residing in infantile brains of flesh and our knowledge, intellect, and problem-solving mindsets uploaded into neural nets. The main issue there is the disconnect between motivation and mindset. The motivated mind would only see the end result of its requests. It would not experience each part of the problem solving process undertaken by the mindsets. That stunts the development of both halves of the being. How can we learn about new things to want if we don't see the fascinating work it takes to get what we originally asked for? And therefore how can we solve new problems? I would prefer that humanity does not become a symbiotic gestalt of spoiled children and their completely subservient genies.
Yet stagnation beckons, for what reward is there for exceptional work when a shard of consciousness can be conjured to do it better?
We just answered that question, though. The reward is developing that power ourselves, so that we decide what we want and how to get it instead of letting AI predict it for us. Motivation and mindset, merged once more. The most important thing we can do is realize why the journey matters, and not just the destination.