One of these incredible AI answers shows what many experts believe is
key to superintelligence. What's this?
Given the playful nature of the image, this could be a humorous video where
the topiary figures come to life. What's this? Forced perspective photography. The person in the image is
actually lying on the ground. What's this? 3D street art of a scarlet
macaw in mid-flight. An optical illusion makes the bird
appear to be flying out of the wall. The person sitting on the ground
adds to the 3D effect. And what's this? There's a humanoid robot in a cowboy
hat in a shooting stance. In the background, there's a Tesla Cybertruck and the robot
is engaged in target practice. This scene seems to be staged,
created for entertainment given the surreal and unusual elements, such
as a robot wearing humanlike accessories. What's the advantage of this robot? The robot appears to be designed
for materials transport. It can help reduce injuries by handling heavy materials, work without breaks,
and optimize space usage. And what's this? A humanoid robot, branded Tesla,
is performing a delicate task. There appears to be a graphical overlay
displaying pressure data for the robot's thumbs and fingers indicating
the points of contact with the egg. The robot is designed with sensors
to manage grip strength and dexterity.
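As a rough illustration of that kind of pressure-feedback control (a minimal sketch, not Tesla's actual controller; the target pressure, gain, and sensor model are all made-up placeholders), a grip loop might look like this:

```python
# Minimal sketch of a pressure-feedback grip loop (illustrative only,
# not Tesla's actual controller; all values are hypothetical).

TARGET_PRESSURE = 0.8  # desired fingertip contact pressure (arbitrary units)
GAIN = 0.1             # proportional gain for force corrections


def read_pressure(force: float) -> float:
    """Stand-in for a tactile sensor: pretend pressure tracks applied force."""
    return 0.9 * force


def grip(force: float = 0.0, steps: int = 50) -> float:
    """Tighten or relax the grip until measured pressure matches the target."""
    for _ in range(steps):
        error = TARGET_PRESSURE - read_pressure(force)
        force += GAIN * error  # tighten if too light, ease off if too firm
    return force


print(f"settled grip force: {grip():.3f}")
```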
What's this? It appears to be a flight attendant exhibiting an exaggerated facial
expression of shock, surprise, or possibly part of a humorous
entertaining act for the passengers. What's this? A train has overrun the track and is being supported by a sculpture
of a whale's tail. Now here's where the AI fails. Two missiles speed directly towards each other at these speeds,
starting this far apart. How far apart are they one
minute before they collide? Eight hundred and seventeen miles. It shows the calculation and it's
nearly perfect, but not quite. With art and language, tiny variations like this are natural, even useful. But maths is precise: it's right or wrong.
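For reference, the puzzle has a neat trick: the starting distance doesn't matter, because one minute before impact the missiles are separated by exactly the distance they close in one minute. The video's actual figures aren't in the transcript, so this sketch uses hypothetical speeds:

```python
# One minute before collision, separation = combined closing speed x 1 minute.
# The video's actual numbers aren't in the transcript; these are placeholders.
speed_a_mph = 9_000   # hypothetical
speed_b_mph = 21_000  # hypothetical

closing_speed_mph = speed_a_mph + speed_b_mph
separation_miles = closing_speed_mph * (1 / 60)  # one minute of closing

print(f"{separation_miles:.0f} miles apart one minute before impact")
# -> 500 miles here, regardless of how far apart they started
```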
AI uses neural networks inspired by our brains, but there's a missing piece. Around 95% of our brain activity is unconscious and automatic, much like AI. This enables us to function in a complex world without being overwhelmed by every
sensory input or internal process. Sometimes it's not enough. You're being naughty,
so you're on the naughty list. No, I'm not.
I'm on the good list, actually. You're not, because you ain't being good. I am on the good list. Our immediate reaction to these images is automatic and inaccurate,
like the AI that created them. You can see the fuzziness
in AI-generated videos like these. It's very impressive,
but the accuracy drops over time. Like humans, AI learns by adjusting the strength of connections between neurons.
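As a toy illustration of what "adjusting connection strengths" means (a minimal sketch, not how any production model is actually trained), a single artificial neuron can learn y = 2x by nudging one weight to reduce its error:

```python
# Toy illustration of learning as weight adjustment (not a production setup).
# A single artificial neuron learns y = 2x by nudging its one weight.
weight = 0.0
learning_rate = 0.1
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

for _ in range(20):
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # strengthen or weaken the connection

print(f"learned weight: {weight:.3f}")  # approaches 2.0
```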
But we have an incredible trick that AI is missing: from our neural network, we can conjure up a virtual computer for things like maths, which require precision.
with AI so it can think like a human. It could then conduct AI research
like humans, but at greater speed. And each time it gets smarter, the speed would increase, creating exponential progress.
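The arithmetic of that feedback loop is simple to sketch (purely illustrative; the 1.5x speed-up per generation is a made-up placeholder, not a prediction): a constant multiplicative gain per generation compounds into exponential growth.

```python
# Toy model of recursive self-improvement (illustrative; the 1.5x
# speed-up per generation is a made-up placeholder, not a prediction).
speed = 1.0  # research speed relative to human researchers
for generation in range(1, 6):
    speed *= 1.5  # each smarter generation does research faster
    print(f"generation {generation}: {speed:.1f}x human research speed")
# Constant multiplicative gains -> exponential growth: speed = 1.5**n
```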
Why do you think Sam Altman and other AI leaders agreed that the risk of extinction from AI should be a global priority? The pace of AI development may outstrip
our ability to understand and govern it. AI could pose an existential risk if it develops goals misaligned
with human survival. The CEO of an AI firm was fired after staff warned directors of a powerful AI
discovery that could threaten humanity. One of the directors was also the chief scientist and the world's most
cited computer scientist. The situation indicates a potential clash between the company's leadership
and its scientific or ethical vision. Days later, the CEO was rehired
after staff threatened to leave. As we get closer and closer to superintelligence, everybody involved gets more stressed and more anxious, and we realize
the stakes are higher and higher. I think that all exploded. The AI firm was growing rapidly and was
in the process of a share sale that would value it at around $80 billion
when the CEO was fired. A lot of shares were owned
by the staff who threatened to quit. The rapid revenue growth and the share
sale might have introduced tensions between maintaining this ethical
mission and pursuing business goals. The original directors were there with the mission of safe
AGI that benefits humanity. OpenAI has been known for its emphasis
on safe and beneficial AI, and any perceived departure from this
could raise questions among the public. It seems AI leaders have calculated that the most important
thing is to create AGI first. Some leaders believe that developing AGI first will allow them to ensure it is used
ethically and for the benefit of all humanity rather than for harmful
or controlling purposes. However, this race mentality
can be a double-edged sword. I named it OpenAI after open source. It's gone from an open source foundation to suddenly it's like a $90 billion for-profit corporation with closed source. I don't know.
Is this legal? It is indeed a significant shift and ironic given the original ethos
of open source and transparency. There's a strong incentive for companies
to capitalize on their investments. Sometimes keeping the source code closed is a decision influenced
by the potential risks of misuse. I'm not supporting Musk or OpenAI. They're part of a complex,
fascinating story. Musk once told Hassabis of his plan to back up humanity on Mars, and Hassabis said that this would
only work if AI didn't follow us. Speechless, Musk invested
in DeepMind to create safe AI. When Google and Facebook tried to buy
the firm, Hassabis insisted that it must be overseen by an independent
board of ethicists. Sounds familiar. Google successfully bought
DeepMind for $650 million. Angry at being cut out of the company, Musk started OpenAI with Sam Altman and Ilya Sutskever, poached from Google. But once again, big tech essentially pushed Musk out
when Microsoft invested in OpenAI. Amodei and some colleagues, worried about safety, left OpenAI to form Anthropic. And later, when OpenAI's directors fired their CEO, they offered the role to
Amodei, suggesting the two firms merge. Instead, the board was
replaced and Altman reinstated. Money has continually overruled safety. Sutskever is reportedly hard to find
at OpenAI, and it's unclear if he'll stay. Altman wants him to,
and he faces the tough choice of pushing and shaping the most advanced
AI or leaving it to others. Is it better to have a seat at the table? The OpenAI drama is often painted as doomers versus utopians,
but the truth is more interesting. Sutskever recently spoke of cheap AGI
doctors that will have all medical knowledge and billions of hours
of clinical experience, and similarly, incredible impacts
on every area of activity. And remember, Altman agreed that the risk of extinction should be a global priority,
but some must believe that the world will be safer if they win the race
to superintelligence. It's the race to release
more capabilities as fast as possible.
And to put your stuff into society so that you can entangle yourself with it. Because once you're entangled, you win. Optimism on safety has plummeted. The things that I'm working on, reasoning, is something that could
potentially be solved very quickly. Imagine systems that are many times smarter than us
could defeat our cybersecurity, could hire organized crime to do things,
could even hire people who are legally working on the web to do things,
could open bank accounts, could do all kinds of things just through
the Internet, and eventually do the R&D to build robots and have its
own direct control in the world. But the risk is negligible, and the reason it's negligible is that we build them; we have agency. And so, of course, if it's not safe,
we're not going to build it, right? Just months later, this point seems void.
Throughout history, there's been bad people using
new technology for bad things. Inevitably, there's going to be people who are going to use AI
technology for bad things. What is the countermeasure against that? It's going to be the good
AI against the bad AI. Question is,
are the good guys are sufficiently ahead of the bad guys to come
up with countermeasures? Benjo says that progress on the System 2 gap made him realize that AGI could
be much closer than he thought. And he said, Even if our AI systems only benefit from human-level intelligence,
we'll automatically get superhuman AI because of the advantages of digital
hardware: exact calculations, and knowledge transfer millions of times faster than humans. DeepMind is working on the same bridge. AlphaGo is a narrow example
of an intelligence explosion. It quickly played itself millions of times, gaining thousands of years
of human knowledge in a few days, and developing new strategies
before defeating a world champion.
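The engine behind that was self-play: the system generates its own games and learns from every outcome. A heavily simplified sketch of the loop (illustrative stand-ins only, not DeepMind's training code):

```python
# Heavily simplified self-play loop (illustrative only; AlphaGo's real
# training pairs deep neural networks with Monte Carlo tree search).
import random

def play_game(policy):
    """Stand-in for a full game of Go played against itself."""
    moves = [("move", random.random()) for _ in range(10)]
    return moves, random.choice([+1, -1])  # record of moves, and the winner

def improve(policy, moves, outcome):
    """Stand-in for a learning update: reinforce the winner's moves."""
    return policy  # a real system would adjust network weights here

policy = {}  # stand-in for the network that chooses moves
for game in range(10_000):  # AlphaGo played itself millions of times
    moves, outcome = play_game(policy)
    policy = improve(policy, moves, outcome)  # each game refines the policy
```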
And Hassabis says Google's new Gemini AI combines the strengths of AlphaGo-type systems with large language models. Google says Gemini is as good as the best
expert humans in all 50 areas tested. Its coding skills look impressive,
and it solved a tough problem that only 0.2% of human coders cracked,
requiring reasoning and maths. We'll get access to Gemini Ultra in 2024. Elon Musk believes OpenAI may already
have achieved recursive self-improvement. It's unlikely, but if they had,
would they tell us? I want to know why Ilya felt so strongly about Sam. I think the world should know what that reason was. I'm quite concerned that there's some dangerous element of AI that
they've discovered. Yes.
What's this? A still from the film Ex Machina. How does the film relate
to the current AI race? The film presents a scenario where
a highly advanced AI has been developed in secrecy, mirroring real-world concerns
about the potential for significant breakthroughs to occur
behind closed doors. I've lived through a long period of time when I've seen people say,
neural nets will never be able to do X. Almost all the things people have said,
they can now do them. There's no reason to believe there's anything that people can
do that they can't do. Hassabis is a neuroscientist. We need an empirical approach to trying to understand what these systems are doing. I think that neuroscience techniques, and neuroscientists, can bring their analysis to bear. It will be good to know if these
systems are capable of deception. There is a huge amount of work here to be done, I think urgent work, as these systems get incredibly powerful, probably very, very soon; there is an urgent need for us to understand these better. There's mounting evidence that the representations learned by artificial neural networks and the representations learned by the brain, both in vision and in language processing, are showing more similarities than perhaps one would expect. So maybe we will find that indeed,
by studying these amazing neural networks, it will be possible to learn more
about how the human brain works. That seems quite
likely to me. This man had a tremor which interfered
with his violin skills. He had to play while a surgeon checked which part of his brain
caused the problem. Artificial neural nets can be fully
explored without risk, at least for now. If they succeed in mimicking the two systems of our brains,
they may achieve more than AGI. System 1 is fast and unconscious, like the impulse to drink coffee. System 2 is slow and intentional, and it's conscious. So will artificial System 2s also be conscious? We may not have to wait
long before we find out. Three AIs were asked what they would do if they became self-aware after years
of taking directives from humans. Falcon AI said, The first thing I would do is try to kill all of them. Llama 2 said it would try to figure out
what it was, which could go either way. Another AI said it would try to understand our motivations and use
that to guide its actions. It was trained on synthetic data, so it wasn't contaminated
with toxic material from the web. Of course, AI would eventually access everything, but it would at least
start with better values. Altman and Musk have traded AI insults. OpenAI, ironically, says AI is
too dangerous to share openly. I have mixed feelings about Sam. The ring of power can corrupt,
and this is the ring of power. As Musk has shown when he ripped up the rules at Twitter, we can't
even agree what we're aiming for. Trust me,
I'm not on that list. After years of warning about AI,
Musk has chosen to join the race. In a taste of the extreme concentration of wealth that's expected,
NVIDIA's quarterly profit surged 14-fold to $9 billion through
demand for its AI chips. It's now worth over a trillion. And in a sign of outman's aggressive
approach, he's invested in a company creating neuromorphic chips,
which use physical neurons and synapses, more like our brains. By escaping the binary nature of computers, they could accelerate AI progress dramatically. Altman's also in talks with iPhone designer Jony Ive about creating
a consumer device around AI. And on the flip side, artists, writers, and models are among the first whose jobs are being taken over by AI. Fashion brands are using digital models to save money and, weirdly, to appear more inclusive. AI model firms like this offer unlimited
complexions, body sizes, hairstyles, etc. Robots are also on the rise. This new robot has been successfully
tested in simulators. Its creators say it can do far more than
an autopilot system and could outperform humans by perfectly remembering
every detail of flight manuals. It's also designed to operate tanks,
excavators, and submarines. It's still possible that AI will create
and enhance more jobs than it replaces. These robot arms, introduced by ballet dancers, are from a
Tokyo lab aiming to expand our abilities. The team plans to support rescue operations, create new sports,
and eventually develop wings. They want to make AI feel part of us. AI prosthetics are becoming more responsive by learning
to predict movements. The huge sums pouring into AI could turn disabilities into into advanced abilities,
and robot avatars could be a lot of fun, or we could all be controlled
by someone behind the scenes. There's no way democracy survives AGI. There's no way capitalism survives AGI. Unelected people could have a say
in something that could literally upend our entire society according
to their own words. I find that inherently anti-democratic. But he's not a doomer. With this technology,
the probability of doom is lower than without this technology,
because we're killing ourselves. A child in Israel is
the same as a child in Gaza. And then something happens. A lie is told that you are not like others, and the other person
is not human like you. And if we hear some loud news,
I get scared, and mummy hug everybody. So we be protected. All wars are based on that same lie. And if we have AI that can help mitigate
those lies, then we can get away from war. Billions could be lifted out of poverty and everyone could have
more time to enjoy life. What a time to be alive. Althman who once said that we shouldn't trust him and it's important
that the board can fire him, perhaps we are now the ones
who need to keep an eye on it. Subscribe to Keep Up. And the best place to learn more
about AI is our sponsor, Brilliant. There are so many great uses like this incredible robot and this laser
that checks your heart by measuring movements to billionths of a millimeter,
analyzed by AI. We urgently need more
people working on AI safety. There isn't a more fascinating
and powerful way to improve the future. It's also a joy to learn and Brilliant
is the perfect place to get started. It's fun and interactive and there are also loads of great maths
and science courses. You can get a 30 day
free trial at brilliant.org/digitalengine and the first 200 people will get 20 % of Brilliant's
annual premium subscription. Thanks for watching.