Transcriber: Begum Dirik
Reviewer: Mahmoud Mohamed Ahmed

(Music)

Hi, I’m Jeremy. I’m a research scientist at MosaicML. I work on making large language models
cheaper to train. I've been working in natural language
processing for about eight years. The pace of improvement has really
just completely blown me away. That said, I think artificial general
intelligence is quite a ways off. I’m going to talk about a few ways that biological intelligence has an edge on current state-of-the-art AI systems. The three things I’m going to talk about
are intelligent weight initialization: essentially, humans aren’t a blank slate at birth, but AI is. Complex reward signals versus simple single objectives: humans have a complex reward system, with complex desires and aims that help us navigate the world and reason about it causally, whereas AI uses simple objectives. And social intelligence: collectives, whether it’s an ant colony or a human society, a stock market or a flock of birds, are more intelligent than any one individual. AI still hasn’t cracked the code for collective intelligence. So first, I’m going to talk about
intelligent weight initialization. The human brain at a high level
essentially consists of a bunch of neurons. They connect to one another via connections called synapses. The weights between these neurons essentially modulate how the neurons activate one another. And they encode our capacities, our intelligence, our knowledge.
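To make that picture concrete, here is a minimal sketch of a single artificial neuron, assuming a sigmoid activation; the weights and inputs are made-up numbers, and this is an analogy, not a model of a biological neuron.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming activations -- the weights play the
    # role of synapses, modulating how strongly each incoming signal
    # activates this neuron.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Squash through a nonlinearity to get the output activation.
    return 1.0 / (1.0 + math.exp(-total))

# Arbitrary example: three incoming activations and their weights.
print(neuron([0.5, 0.1, 0.9], [1.2, -0.7, 0.3], bias=0.05))
```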
Human brains, and the brains of other animals like flatworms and horses, are formed at birth with weights and connections that enable us to learn quickly. These initializations have been passed down through evolution, through our genes, over billions of years. They enable horses to walk as soon as they’re born, and they enable humans to rapidly acquire language as soon as they hear their parents speaking it to them as children. However, in AI, network weights
are initialized randomly. This makes AI learn slowly, makes training very data intensive, and makes AI far less generalizable and robust than biological intelligence.
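For contrast, here is roughly what standard random initialization looks like in PyTorch; the architecture and layer sizes are arbitrary, chosen only for illustration, and the checkpoint path in the final comment is hypothetical.

```python
import torch
import torch.nn as nn

# A small feed-forward network. The layer sizes here are arbitrary,
# chosen only to illustrate the point.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Standard practice: weights start as random noise (e.g. Xavier/Glorot
# initialization), encoding nothing about the world.
for layer in model:
    if isinstance(layer, nn.Linear):
        nn.init.xavier_uniform_(layer.weight)
        nn.init.zeros_(layer.bias)

# The biological analogue would be starting from structured,
# evolution-supplied weights instead -- e.g. loading a pretrained
# checkpoint rather than noise (this path is hypothetical):
# model.load_state_dict(torch.load("innate_initialization.pt"))
```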
Toward AGI: by studying intelligent, biologically inspired initializations, and figuring out what the building blocks and motifs of generic intelligence are, could we build AI models that show
innate intelligence and learn rapidly? They say that OpenAI spent many months training GPT-3, on almost the entire Internet. Whereas human children acquire language very rapidly. In the future, AGI will need to have
intelligent initializations in order to accomplish this. Let’s talk about complex reward signals. Our brains come pre-installed
with an intelligent reward system. You don’t need to learn that sugar has nutrients, because your reward system tells you that it tastes good; you find it innately rewarding. You feel proud when you do well on a test, and you feel pleasure when you sense social approval from those around you. This reward system is passed on through our DNA. It’s an innate part of our brain that we didn’t need to learn. In AI, we train our networks
with very simple reward functions. For example, in language models we simply train the model to maximize the probability of the next word conditioned on the words that came before it.
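Concretely, that objective is just a cross-entropy loss over next words. Here is a toy sketch in PyTorch; the vocabulary size and tensors are random stand-ins for a real model’s outputs.

```python
import torch
import torch.nn.functional as F

# Toy illustration of the language-modeling objective.
# The vocabulary size and tensors below are made up for the example.
vocab_size = 50_000
batch, seq_len = 2, 8

# Suppose a model has produced one logit vector per position.
logits = torch.randn(batch, seq_len, vocab_size)
# The target at each position is simply the next word in the text.
next_words = torch.randint(vocab_size, (batch, seq_len))

# Maximizing the probability of the next word conditioned on the
# preceding words is just minimizing this cross-entropy:
loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                       next_words.reshape(-1))
print(loss)  # a single scalar -- the entire "reward signal"
```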
This objective doesn’t encode anything meaningful about the complexity of our universe. Using complex reward signals, could we design AI that has a richer understanding of the world? Finally, I’m going to talk about
social intelligence. Society is more intelligent
than any one individual. Biological intelligence, whether it’s
dogs or humans or butterflies, learns from interaction. Furthermore, biological intelligence self-organizes into collectives, and from these collectives emerge forms of intelligence that no single individual contains. Large AI models, for the most part, have not mastered learning from social interaction. AI researchers haven’t really figured out how to take state-of-the-art AI systems and combine them into collective forms of intelligence.
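As a toy sketch of what such interconnection might look like, here are two model calls critiquing and revising each other’s answers; `generate` is a hypothetical placeholder for any chat-model call, not a real library function.

```python
# Minimal sketch of two language-model agents improving an answer
# by critiquing each other. `generate` stands in for a call to a
# model such as ChatGPT; here it just returns a placeholder string.
def generate(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"

def debate(question: str, rounds: int = 2) -> str:
    answer = generate(question)
    for _ in range(rounds):
        # A second agent critiques the current answer...
        critique = generate(f"Critique this answer to '{question}': {answer}")
        # ...and the first agent revises in light of the critique.
        answer = generate(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer

print(debate("Why is the sky blue?"))
```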
Can we interconnect AI systems like ChatGPT so that they learn from one another and display collective intelligence?

Thank you.