Unveiling AI's Illusions: with Gary Marcus and Michael Wooldridge
Video Statistics and Information
Channel: Machine Learning Street Talk
Views: 53,261
Id: V5c4tVAL5OQ
Length: 23min 48sec (1428 seconds)
Published: Sun Apr 09 2023
I find that a lot of people really don't quite grasp the speed of development. As an active participant on this sub, I freely acknowledge the bias I hold, but it is not born of my imagination. I've followed AI for a while now, and particularly in the last few months my quest for understanding has become almost religious.
People will see or hear about some development, then spend a few days or weeks pondering it, comparing it to their own understanding and experience before finally synthesizing a response. In the past this worked perfectly well: being up to date was measured in weeks and months. That is just not the case anymore; information that is a week old is out of date. I see a lot of experts in various areas raise good points, and they're genuinely intelligent people with well-thought-out responses. But they're more and more just flat-out wrong, because the speed of development renders their points invalid or means their problems now have solutions.
Yet another critique of the state of AI hype that is at least 3 months out of date.
Typical "but it can't do xyz very well right now, so it's bad." This completely ignores the fact that it can improve, and improve exponentially. Creating better tools (AI) lets you create better chips, which lets you create even better AI, which can generate valuable data to train other AI, which creates even better chips; once you have this flywheel effect it just snowballs. Not to mention that LLMs can act as a controller: if one isn't good at something, it can activate a model that is good at it to complete the task.
We already see this breaking down tasks pretty well with AutoGPT: you give it a broad goal and it breaks the goal down to achieve it, and you can imagine it delegating to AI models that work well for a given task and saying "go do that thing." Some people are just too short-sighted, or get tunnel-visioned on what it can or can't currently do, and have a hard time seeing what it could do through advancement or complementary tooling in a specific area.
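The "LLM as controller" idea in the comment above can be sketched in a few lines of Python. Everything here is invented for illustration (the `route_task` function, the `SPECIALISTS` table, and the crude keyword heuristic); a real controller like AutoGPT asks the LLM itself to plan and classify subtasks rather than keyword-matching:

```python
# Hypothetical sketch: a controller routes subtasks to specialist models.
# The specialists are stubbed out as lambdas; in practice each would be
# a call to a different model or tool.
SPECIALISTS = {
    "math": lambda task: f"[math model] solved: {task}",
    "code": lambda task: f"[code model] wrote: {task}",
    "general": lambda task: f"[general LLM] answered: {task}",
}

def route_task(task: str) -> str:
    """Pick a specialist by crude keyword matching (a real controller
    would have the LLM classify the subtask instead)."""
    lowered = task.lower()
    if any(word in lowered for word in ("integral", "equation", "sum")):
        key = "math"
    elif any(word in lowered for word in ("function", "script", "bug")):
        key = "code"
    else:
        key = "general"
    return SPECIALISTS[key](task)
```

For example, `route_task("fix this bug in my script")` is dispatched to the code specialist, while an open-ended question falls through to the general model.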
...Except they're not illusions.
Yes, there are idiots who assume stuff like GPT-3 can browse the internet on OpenAI's website.
However, the open-source movement has already made GPT-3.5 and GPT-4 API agents that actually have infinite memory, connect to the internet, summarize things, and self-improve!
Many of the issues discussed in this video are already solved. The rate of progress in AI tooling is insane, going far, far beyond what GPT-4 can do on OpenAI's own website!
Gary Marcus posts screenshots of conversations with GPT-3.5 as if they were the absolute truth. They're not; if he were a real language-model dev he would understand that OpenAI's GPT-3.5 is just a little, cute, lopsided mask that the LLM wears.
In fact, an LLM can wear an infinite number of masks, providing an infinite number of answers with an insane variety of cultural variations and depths of opinions.
The base LLM is a primordial language soup that manifests personality agents, a lucid, infinite dream that with proper characterization can accomplish absolutely incredible, mindblowing things.
Its wrongness (hallucinations) is actually due to its greatest power: the manifestation of coherent personalities with feelings, emotions, and specific morality. When an LLM is characterized as a specific human atop the GPT-4 API, it becomes far more emotional, more rational, more intelligent, and more coherent.
You could actually summarize a video though. Just not like that.
Click the ... under a video > Show Transcript > Copy all the text.
Explain the premise to GPT-4, ask it to summarize, maybe include the title as well, and paste the transcript below.
https://i.imgur.com/eQw3typ.png
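The transcript-summarizing workflow described in this comment can be sketched with the OpenAI Python client. This is a minimal sketch, not the commenter's exact setup: the prompt wording, the `build_messages` helper, and the choice of system message are all assumptions, and calling `summarize` requires your own `OPENAI_API_KEY`:

```python
# Hypothetical sketch of the workflow: explain the premise, give the
# title, and paste the transcript below, then ask GPT-4 to summarize.
import os

def build_messages(title: str, transcript: str) -> list:
    """Assemble the chat prompt as the comment suggests."""
    return [
        {"role": "system",
         "content": "You summarize YouTube video transcripts concisely."},
        {"role": "user",
         "content": (f"Video title: {title}\n\n"
                     f"Summarize the transcript below in a few bullet points:\n\n"
                     f"{transcript}")},
    ]

def summarize(title: str, transcript: str) -> str:
    """Send the prompt to GPT-4 (requires OPENAI_API_KEY and network)."""
    import openai  # pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=build_messages(title, transcript),
    )
    return response["choices"][0]["message"]["content"]
```

The prompt construction and the API call are split so you can inspect exactly what gets sent before spending tokens on a long transcript.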