So at this point, I'm going
to introduce our next panel. And we've invited
three colleagues to come and share their work. And so I'll just say
a little bit about-- I'll introduce all
three of them together and then we'll invite them up,
we'll hear from each of them, we'll have a conversation,
and then we'd love to get you involved and
hear your questions as well. So first, we will hear
from Professor Pattie Maes, who's Head of the Fluid Interfaces Group here at the Media Lab. She's a longtime MIT faculty
member, and as many of you know, she's a leading light
in both human-computer interaction, but
also, more broadly, how we think about our
relationships with technology. And for that, she's won
a whole series of awards, including a Netguru
Award as a Hidden Hero who is shaping the
future of technology. Fast Company named her as one of
the 50 Most Influential Designers, and she's someone who always
seems to be a little bit ahead of where the world is. And I have to say, Pattie, right now, like many people, I'm struggling with sleep. And so I'm really looking forward to you and your group fixing that soon. Maybe we'll hear a
little more about that. Secondly, we're going to hear
from Joshua Bennett, who's a professor here in
literature at MIT. He's also a Distinguished
Chair of the Humanities. An absolutely incredible poet. Those of you who were
at the opening session on Tuesday, we were
taken on a journey that Joshua created an original
poem for the kickoff of MIT Gen AI week, but also his
delivery of that poem was absolutely astonishing. His work's been published
widely in The New Yorker, and The Atlantic, and
several award-winning books of poetry as well. And he's, of course, in constant
demand to share his poetry. So that includes
performing at the White House for an evening
of poetry and music that the Obamas sponsored. And then finally, we'll
hear from Pelin Kivrak who is a Senior Research Associate
with Refik Anadol's studio. And again on Tuesday, we had a
keynote where Refik kicked off by showing us the
possibilities in art and especially some
of the new interfaces and interactive environments
that his studio has been creating. And Pelin is classically trained in the humanities, at Yale and Harvard. She's an author. She's also teaching at
Tufts University nearby, but somehow, in addition
to being a scholar, she's also working on these
incredibly complex art pieces at scale all over the world. So we're going to have a
chance to lift the hood and learn a little
bit more about that, and also where the future is in
relation to art and creativity broadly. So please join me in welcoming
all three of our panelists to the stage. [APPLAUSE] Now, there's so
much to talk about, but why don't we
just kick things off by hearing from each
of you individually? And so Pattie, please come on
up and share your thoughts with the group. Hi, everyone. Pleasure to be here. I want to start-- actually, I need the clicker. That's one thing. I want to start by asking
all of you a question. Do you see the glass
half-full or half-empty when it comes to AI
and human creativity in an era of abundant AI? So who's on the half-full side? And who's on the
half-empty side? A few-- well, the half-full
ones, as usual, I think at MIT see the future with technology
as a little brighter. Personally, I think that AI, generative AI, will unleash a wealth of human creativity. Not just what people are
already doing today generating text, images, code, but also
entire apps, videos, 3D models, printing them into objects,
creating sounds, music, new drugs, new materials,
new buildings, new cities, animated characters, new
chatbots, AI agents, and entire new worlds
and experiences. In fact, one of my students
who you met earlier, Valdemar Danry, is
doing an installation at the Contemporary Art
Museum in Brussels, Bozar, called Be My Guest where
the entire experience of a dinner with a number of
people will be created by AI. The plates, the food-- in fact, the host
at the dinner table will be an AI bot listening
to the conversation and more. The music, everything
is AI-generated. So we can imagine things
and then describe them and realize them. It's incredible. We can-- unfortunately, it's not always working properly yet. I was trying to make a
glass half-full last night with DALL-E and I just
could not get there. And I kept saying, lower
the amount of water. Give me less water in the glass. It should be half-full. And it kept insisting that it
was much less than half-full, even though it looks like that. So clearly we still
have some bumps in the road, some
unresolved issues. No real understanding,
clearly no reasoning. Hallucinations. These systems always sound very convincing and confident, but are not always right. They have biases built in. The rights of the
original human creators are not always being respected. Regulation and oversight
are still non-existent. Legal issues aren't resolved. And last but not least, and perhaps the hardest one to solve, they have a huge cost to the environment. So nevertheless, we can go today
from creating new molecules to creating entire
worlds, but I think that what is sometimes the hardest for people to imagine and realize is reinventing themselves: changing our attitudes, changing our confidence, our motivation, all of those softer skills. And that's actually what
I've been working on in my research group. So one of the projects, for
example, is using deepfakes, a deepfake of users themselves, to help them imagine how they can be
a more confident speaker. So you can decide who your
favorite role model is. Alexandria Ocasio-Cortez
or whoever. You upload your own picture
and then you see yourself talking like your role model. And we've done
studies in our group. We do these studies with large
numbers of people showing that when people see
themselves talking confidently, they feel more
confident themselves, and it actually changes their
ability, their own ability to speak confidently. Similarly, we've done
this with creativity. Sometimes this is talked
about as the Proteus effect. Often we limit ourselves. We don't realize our full
potential because we cannot imagine ourselves as
confident speakers, as creative individuals. So we've been doing experiments
where we actually turn people into a child version
of themselves or into a crazy inventor
version of themselves, and they actually come up
with more creative ideas when they're a child or
an inventor, and then they realize, that
was me, that was me. I am that creative person. And it can actually unleash some
of their own human creativity. Pat already talked about his project, Machine of Multiple Me, together with Valdemar Danry, where you can get wisdom from other
versions of yourself. Like, what if I was
a little bit older and more mature, like an advisor? Or maybe a little
bit more feminine? Not just in the way I look, but
more importantly, in how I talk and what my opinions
about things are. So in that project, he
analyzes all the social media posts of an individual,
and he can actually bias them and change
them to be more like older or more
feminine, et cetera-- or it could be more left
wing, more right wing, whatever you want, to
explore alternate selves and get input and wisdom
from other views, basically. Related to that,
he did a project to help people imagine
their own future. This is very hard, I
think, for young kids to think-- and for all of
us-- to think long-term, to act in our
long-term interests, not just for ourselves,
but also for the planet, of course, et cetera. So he's been building a
system called Future You where you create an
older version of yourself and you say what you
want-- what you think you want to accomplish and what
your situation is, et cetera. And then it creates this older
you that you can chat with and you can ask,
well, if you say, I think I want to become
a biology teacher, then you can talk to
your future self and say, do you think that worked
out being a biology-- or having chosen that
profession of a biology teacher? What are the good things? What are the bad things? Et cetera. And this is what ChatGPT and
these large language models are so good at. They have all this
information out there about people and
their experiences that you can learn from. So we show with Hal Hershfield,
a psychologist at UCLA, that this actually changes
people's attitudes and behavior towards the future. We're doing a future
jobs for 18-year-olds where they can imagine
themselves and talk to a future self that has
a particular profession. And last, we're
going beyond this by enabling people
to talk to AI agents to rehearse difficult
conversations, to practice conflict
resolution, et cetera. So this is Pat together with a student from Hiroshi's group, Daniel Pillis. They are building this
system where you can rehearse a difficult conversation. Maybe it's coming out
as gay to your parents, or, how do I deal with conflict
between two colleagues? How do I talk to someone who has
very different values than me? And you can rehearse
that and practice that with an agent or
multiple agents playing roles, particular roles,
like the role of your maybe conservative parents
or something. And you can learn
from that experience how to engage in
these conversations. So for me, the
glass is half-full, similar to what
DALL-E seems to think. There are only half-full glasses when it comes to unleashing
human creativity with AI and really reimagining
our world and ourselves. Thank you. Thank you, Pattie. [APPLAUSE] Wonderful. Please, Joshua. Of course. How are y'all doing today? Y'all all right? Solid. I come from a performance
poetry background, so you always got to do a
temperature check in the room before you say anything
on the microphone. So thank you again to my
colleagues on the panel for the invitation. My name is Joshua Bennett,
I'm a poet, a literary critic, and as of four weeks
ago, a father of two. So if anything-- Whoo! Wow. That's incredible. [APPLAUSE] It really is a great
vibe here at MIT, this celebration
of new life. So if anything I say
here is a bit blurry, it's because the
past couple of weeks have been a blur in
the best possible way. So here at the
Institute, I'm primarily a teacher of both
literary criticism and the literary arts, and so
I hope my talk today really reflects those twin impulses
and longstanding commitments. In that spirit, I want
to open with an epigraph from one of my favorite writers. "Every sound we make is
a bit of autobiography." It's from the Canadian poet
and translator Anne Carson. Act 1, property. So this talk began as a
telephone conversation with my literary agent
Nate about a new book we'd been working on
together, a cultural history of Black prodigies
across the world. Nate mentioned that he was
finalizing our contract with the publisher and that they
had just added a no-AI clause to it earlier that week. No doubt hearing the confusion
embedded in the half-beat after he uttered this phrase,
Nate then clarified a bit. Essentially, the agency had
argued for additional language in the contract to ensure
that no AI software could be used to record the
audiobook for this latest project in my place. After this
conversation with Nate, I decided to figure out
how other authors were managing these
sorts of questions around AI and authorship. During that search, I came
across the following article in The Atlantic: "These 183,000 Books Are Fueling the Biggest Fight in Publishing and Tech." So embedded in this article,
as you can see right there, is a search tool
that you can use to find out which
specific books have been used as training data for
Meta's large language models. Naturally, I searched for the
names of a handful of writers I know, and then,
obviously, my own. And there it was. My first book of poems written
in graduate school of all places as I was
sleeping on a futon. The Sobbing School,
used to train in LLM without my knowledge
or permission. In that moment, I wasn't
exactly sure how to feel, but I soon realized
that I needed a more robust
historical frame to help me better understand and
ultimately contribute to the conversations
now taking place in my community of writers. Some way to help us navigate
this new environment where we were discovering
that our work had been used in this strange
and unexpected fashion. In the dominant framing of this discourse, after all, AI is often imagined as a cheaper, more efficient option for companies interested in literary text as a saleable commodity. Here, there is no mandate
to pay for studio time or depend on the labor of audio
engineers and voice actors. No need to account for an author
showing up late to a recording session or else going
through multiple takes to perfect a reading. Only the faintest echo of
a human element remains. Fittingly, in my
work on prodigies, I'd already been thinking
about this larger question of the
human voice, not only as a part of one's personhood,
but as a site of real social and political struggle. I was already
writing, for instance, about the enslaved teenage
poet Phillis Wheatley who, in October 1772, was asked
to sit before a panel of 18 lawmakers and scholars
right here in Massachusetts, each of whom was tasked with
determining whether it was truly possible that she had
composed the poetry published under her name. They simply couldn't
imagine, at least at first, that she had produced such
a luminous literary voice. I want to mention,
too, if you notice, this book was published
in London the year after. And if you look
at those earliest reviews of Wheatley's
book, there's this tension built into them. They say, well, if she
can write so beautifully, how can she be enslaved? If we know that she has
this rich interior life and she's not just
a machine, how can it be possible that we
keep up this global system? OK. And then, of course,
there was Aretha Franklin. And if you ever see
anybody make that face in front of a microphone,
it's about to go down. And Stevie Wonder, my
father's favorite singer, both prodigiously gifted
vocalists since childhood, whose voices had been honed
by the institutions that raised them, places like
the New Bethel Baptist Church in Detroit, the
Michigan School for the Blind, and Motown Records,
all spaces that were grounded in some
sense by the idea that the humanity of the
people within their walls was not negotiable
and that each of them had something wonderful
to offer the world. And sharing this newest
work with y'all, then, I wanted to emphasize a series
of these sorts of vignettes throughout history taken from
the tradition I love and study and animated by this debate over
the human voice as an essential part of one's personhood. On this front, I have
three core questions. In what sense and in
what situations do our voices belong to us? What properties can be said to
constitute the content of one's own voice in the first place? And what historical models exist
to help us navigate present day debates around the
use of AI to alter, replicate, or stand in for
the human voice in the arts and entertainment world? Act 2, prodigies. Let's begin in 1963 with the
King versus Mister Maestro Incorporated case where
Dr. Martin Luther King Jr sued the 20th Century
Fox record corporation for selling recordings of
his "I Have a Dream" speech as a spoken word LP. And I should mention here
that King was also a prodigy. He went to college at 16 years old, for those of you who don't know, and in his early 20s moved here to Boston to study at BU for seminary. So it's important to remember
that "I Have a Dream" was actually recorded earlier that year, in '63, at the March on Washington, which is
depicted here, but at that time, there was no federal
copyright protection for sound recordings. That became a reality
in 1972 following the passage of the Sound
Recording Act of 1971. In this era, only
one copyright was applicable to LPs: the one covering textual content, the words and nothing more. It also bears mentioning
that this kind of issue comes up almost 40 years
later when King's estate has to sue CBS in the case of The Estate of Martin Luther King versus CBS, a legal dispute which emerges because CBS refuses to pay royalties to his estate after using "I Have a Dream" in a documentary series, The 20th Century with Mike Wallace. In the decision of King versus
Mister Maestro, Incorporated, the court found that
Dr. King had developed a unique literary
and oratorical style and that it seems unfair
and unjust for defendants to use the voice in the words
of Dr. King without his consent and for their own
financial profit. According to the court, then,
King's words and his voice are inextricable
from one another. They operate together
under the banner of style. And it is precisely this style
that's dance between text and audible sound that
makes the recording valuable as protectable
intellectual property. And quickly, I just want
to share one more vignette dealing with the Empress of the
Blues herself, Bessie Smith. So in the case of Gee versus
CBS, Incorporated, the heirs of Bessie Smith, her adopted
son, and the executor of her late husband's
estate, William D. Harris, essentially tried to take
Columbia Records to court for the fact that she
never received a royalty payment in her entire life. This despite having sold
hundreds of thousands of records while she was alive. They also, after her death, had
been circulating rerecordings with her face on
the book jacket, and it was found that
basically her managers had been exploiting her for
the entirety of her life. There's a quotation from the President of Columbia Records: when asked to address this on live television, he essentially said that a
single royalty payment had been made to the
Bessie Smith Foundation and that the rest
of the money would be used on occasion to
pay for scholarships for needy Black students. Not repair, just
infrastructure, let's say. Act 3, promise. So in closing, I'm
curious about how we might create models
of not only compensation, but collaboration that honor
the spirit of the arguments put forward by this chorus of
ancestral American artists, as well as contemporary ones. What models might we have already? Sampling, for instance,
which, however imperfect, emphasizes three principles
that I think are useful here. Crate digging, which is a
kind of archival exploration; clearance, going through
proper legal channels to gain permissions;
and collaboration, thoughtful connection
across time and space. Can we play-- press Play on
this tiny TikTok window, please? [LL COOL J, "ROCK THE BELLS"] (SINGING) --Cool
J is hot as hell. Battle anybody, I
don't care you tell. Hey, girl. [SPANISH SINGING] Does anybody recognize
these samples yet? I'll bury-- OK, we got it. Ugh, nasty! [KENDRICK LAMAR, "BACKSEAT
FREESTYLE"] (SINGING) A-ring-ding-ding,
a-ring-ding-ding, a-ring-ding-ding,
a-ring-ding-ding, a-ring-ding-ding. (SINGING) All my life
I want money and power, respect my mind. All right. So that, of course, is
"Backseat Freestyle" from none other than the
Pulitzer Prize-winning poet and MC Kendrick Lamar. OK. And if you know that song,
he also starts with the line, "Martin had a dream,
Kendrick has a dream." So there's a kind of double
citation happening here that I think is really beautiful. And ultimately, I think
there's a kind of sociality and togetherness built into
sampling that we can reflect back on to this moment,
because when we sample, when we riff and cite
and cover, we assemble an ensemble of the
people we admire and the beautiful
sounds they made. We build a home for them in
the present with the materials they left behind for us. We call their voices in that
they might lift us higher. Thank you. [APPLAUSE] Incredible. Thank you, Joshua. And please, Pelin. Hi, everyone. I'm here today as the senior
researcher at Refik Anadol Studio. Refik, unfortunately,
had to leave last night to install our studio's most
recent artwork at the Climate Change Summit, COP28, in Dubai. But he sends his regards. And I have to say that after
spending two great days at this impeccable
conference, he had a really hard time
leaving last night. And no, I did not prompt
ChatGPT-4 to write this presentation
in his voice, but I will try to represent our
studio's collective vision of generative AI art
as much as I can today. I'm here as the person behind
the conceptual and academic research at the
studio, but I also want to add that I'm a
comparative literature scholar by training. And even though I
work at an AI studio where we use the most cutting
edge technological tools, I still write all
my notes by hand. So I'm eagerly anticipating
the discussions that will unfold in
this panel today. I'd like to start by
briefly introducing our art and research practice
at Refik Anadol Studio in Los Angeles. And while I do that,
I'm going to start showing a five-minute
video that showcases most of our major works
from the past decade. I'd be more than
happy to discuss them in detail later if anything
sparks your interest. I've been part of the studio
since before its inception because Refik and I
started working together while we were college students. So I'm in a position to talk
about most of these artworks, so please feel free to reach
me after the presentation because today I
simply don't have time to go into detail even
though I really want to. So I'm going to start
the presentation. As a studio, we have
always been intrigued by the ways in which new
computational methods and artificial intelligence
allow for a new aesthetic to create enriched, immersive,
and dynamic environments. Our first explorations, as
our signature style shows, entailed a heightened engagement
with different software and data visualization
tools in order to transform data
into pigmentation and embed immersive
arts into architecture. Our creations navigate
the intersection of virtual and physical
spaces, fostering a symbiotic relationship
between science and media arts through AI and
machine intelligence. We've been pioneers in
collaborating with AI to create entirely new forms of
multisensory art using not only visual data sets, but
also sound and scent. Our commissions,
almost always created in collaboration with cultural
or research institutions around the world, have
been exhibited worldwide. Our data paintings
and sculptures, real-time performances, and
immersive art installations take many forms, while
encouraging the audience to rethink our engagement
with the physical world, collective experiences,
public art, and the creative
potentials of AI. What was once invisible
to the human eye, but still born out of human
or nature-centric data, becomes visible in our artworks. One could say a digital
sublime is created with an almost overwhelming amount of data. For one of our most recent AI data paintings, Unsupervised, at the Museum of
Modern Art in New York, we proposed an alternate
understanding of Modern Art by transforming the metadata
of MoMA's collection into a work that
continuously generates new forms in real-time. It was recently welcomed
into the permanent collection of the Museum. For Walt Disney
Concert Hall Dreams, which you will see running in
the background, back in 2018, we used the century-long
institutional archives and recordings of
the LA Philharmonic to create visuals projected
onto the iconic building in downtown LA. While the data sets we have been
working with have represented diverse human
actions in designated urban public and
architectural spaces, we began experimenting
with nature-related data sets more during the pandemic. We began by collecting
publicly-available data sets of flora, California
landscapes, and corals, simply because we wanted
to connect more with nature and wanted to see how the
machine would interpret real pigments and
shapes found in nature. Over time and closely
following the advancements in generative AI, the
research part of our work became more and more embedded
in creating digital ecologies and ecosystems. So we have embarked on a
project that intertwines nature with the vast potential of
generative AI, a project that we call the Large Nature Model, or LNM, a venture that stands out from other generative AI models in that it is based on visuals, sounds, and movements of nature. One side of our
research is deeply embedded in creating dialogues
between institutions that hold large nature data
sets and making them part of a generative AI model, to be able to
connections in their archives. We're doing so by very
transparently crediting their research with the names of all the scientists involved. But when we started realizing
the dream of building this model, we were
also closely monitoring the ethical debates
around data collection methods and generative AI. And that productive
challenge inspired us to commit to a really hard,
but valuable methodological perspective, which is
to collect our own data set as opposed to using
publicly-available images that are not institutionally
or personally protected. Our team's dedication
to this ideal has led us deep
into the heart of 16 rainforests around the world. We have taken a
hands-on approach, scanning and collecting an
exhaustive range of species and nature images,
and sounds and scents. So with that note, I would like
to end my presentation with two simple discussions, open-ended
questions, or provocations, if you will, that
emerge out of some of the internal discussions
that we have in our practice. And I would love to
discuss them further if they resonate with your
creative practices as well. One of them has to do
with a slight modification of the phrase, using
AI to create art. I would argue that
what we're doing, at least in our
studio in LA, is using AI to see the world differently
and then create art. And this is not simply
to reduce AI to a tool, but to delegate it to a
multi-directional gray area where artistic creation
happens in the light of our various perceptions
of the world across time and space. And secondly, the digital
humanities scholar inside me could not
help but do a distant reading of how many times the
word "trust" came up during the first day
of the symposium. But the close reader,
literary scholar inside me, almost wants to argue that our
creative interactions with AI could be the only place
where we could exercise a willing suspension
of disbelief, as we do when reading
fiction, for example, in order to derive pleasure
out of the process of engaging with an alternative reality
and recognizing its faults and imperfections in order to
shed light on our daily lives. Maybe that very human
pleasure, intertwined with a critical lens, is
something we can trust. Thank you so much. Wow. [APPLAUSE] Well, absolutely brilliant. This is exactly what I was
hoping each of you would do. And perhaps we could
begin with fostering a little bit of conversation
between the three of you. So any immediate reactions to
each other's talks or anything. I mean, you've each
given us a different lens and very important
questions you're raising. So would anyone like to
address someone else's talk or respond to any of the
other questions posed? Please, Pattie. Maybe I can suggest a
topic to talk about, which came up in both of
your talks, namely where the data come from
and honoring and respecting who created that original
data that ultimately we're benefiting from. And I think, Joshua, you gave
a great example of how in music, with sampling, it's more like honoring people that came before by referencing them, but I feel that we're not doing the same thing with AI-- people don't know that a poem they generate was based on your poetry, and so on. So it seems that it would
be great to think about how we could not just respect creators' rights, not training on their data and artwork if they don't want to be part of it, but also give reference and honor people, making it explicit whose art the creations were based on, basically. Yeah. And that's a very fine
line because if it's a small piece that is
honoring, but if it's appropriating a large piece,
that feels like stealing. You lifted it, right? Lifted. No, but I love what you're
saying, though, because to me, it sounds actually-- not like it forecloses
collaboration, but that it's an opportunity
for collaboration. I mean, a number of us were circulating these screenshots on Instagram when we found ourselves in the database, and I think it was
a Janus-faced moment. On the one hand, it's like,
yeah, they stole my book. Like I need a check today. But there's also this sense
that, OK, well, I mean, if you look at that image
from the search bar, it's like Baldwin, Pynchon. I mean-- so there
actually is already this kind of editorial process
happening behind the scenes where they're trying
to train the voice of this large language model. And what I'm trying
to imagine is how do we all become
a part of that. If the technology is
going to proceed apace, how do we construct
an ethics around that and not let the tech keep
speeding on ahead of us before we answer these
foundational questions? And I think all
of us were getting at that in really interesting
ways at the level of imagery, too. And I love what you said about
the suspension of disbelief on this front. That's an ethical question,
I think especially for our children and
our young people, to teach them that
it's fiction and not a little person in the
screen talking back. Sure, yeah. I mean, going back to the
generative AI model being this mysterious space that we cannot penetrate, as Caitlyn put it
aptly this morning, that was our initial reaction
to this idea of not being able to see where the
data is coming from. And this is very interesting,
but the first thing that we did when we
realized that we wanted to change that infrastructure
as much as we can at our studio was to simply call people
at research institutions and talk to them, going
back to that earlier modes of collaboration,
and it paid off. We started collaborating
with a lot of institutions across the US,
and we're building this data set with their help. And we're constantly in
touch as humans on Zoom, seeing each other, talking
about the data set, and that turned out to be
the most valuable aspect of building this
model, actually. Very good. And I think fundamentally
in all of this, there's a question about, as
Rod Brooks said in his keynote the other day, imitation
versus innovation. You're training on existing
models, existing data sets. Human voices, unique,
lived human experiences, and you cannot arrive at the
voice of Toni Morrison without being Toni Morrison. And yet today, a
high school kid can say, "Write my
college application essay in the voice
of Toni Morrison," and it can spit it
out immediately, and then our poor
colleagues and admissions have to try to figure
out what to do with that. And so I guess I'd
like to also ask about this question
of innovation. And Pattie, I think
in your work, you've taken a wonderful example
for us because you use what models are good at
to project into the future. So you turn that into a benefit. But how do you think about that
interplay between imitation and innovation? Yeah. Personally, I think that
these models are not truly innovating, they are
interpolating, basically. Exactly. But I see all of these
AI systems as tools. I mean, ultimately, you still
have to give it a prompt, and for anyone here who has
played with these systems, it's actually really hard to
make them do what you want and you end up editing
things, whether it's in Photoshop or editing
the text or whatever. So it's more like
the AI is a seed. Whatever the system
comes up with is a seed that the person can then respond to. It's like co-creation,
and I believe that human plus AI can come
up with really novel things, but not necessarily
AI by itself. I'm getting very
tired of AI images by now because they
all look the same. It's so predictable. Yeah. So I think it will push
human creativity to a higher level, where we have to create things where people say, wow, that's authentic. That's very different
from any of this AI crap. Yeah. Yeah. Patti, can you actually-- Please. Can you say a bit more
about human flourishing? It's part of what struck me so
much about your presentation, that that seems to be clearly where you stand in the philosophical debate about what it's for. Can you talk a
little bit about how you see that larger
debate developing from your sense of things,
both within your own team and beyond? I think that-- well, all of
us are very much influenced by, of course, our upbringing
and schooling and the family where we grow up and so on. And so, yeah, what I like
about AI, what draws me to it is that it can be a tool
really to re-imagine ourselves and to imagine
our possibilities. Like I feel that I
wasn't necessarily a super creative
person when I arrived at MIT, but being
in this environment, I started seeing myself as
a creative person, and then that-- you start then acting
that way as well. So I really believe
it can show people that they are not necessarily
stuck with whatever they grew up with in a very biased society and so on, that they can see their own potential. That's one of the things
that motivates me. Yeah. And Michael was very clear
about that this morning. There's so much human talent in
the world that's not reaching its potential because
perhaps it cannot-- that 13-year-old cannot
see themself in that role, and it's part of all of our
duty to help enable that. Any other thoughts
among the panel? And we are going to open
it to the floor in just a few minutes, so please
have your questions ready, but any other thoughts
among yourselves? I could always, of course,
ask more questions, but-- I was actually thinking
about maybe your thoughts about this about
redefining creativity. Is it necessary? And where would you locate
yourself in that debate? Do we need to redefine
creativity now that we have new tools? Is it possible to be
creative without imagination? I know it's a big question. No, it's a good question. I don't know that
we know what it is. Yeah. Right? I mean, ask a poet-- We never knew. Yeah. Ask a poet where
a poem comes from. Yeah. W.S. Merwin would say it's
when a sequence of words begins to pick up an
electrical charge. It's very pretty, but
it's not totally clear. And it's because the
process itself is not clear. It's magic to us. Mm-hmm. If you talk to great
playwrights and singers, they'll tell you the same thing. Somewhat painful,
I think, at times. Oh, totally, yeah. My friends who are novelists,
they just lay down on the floor sometimes for weeks
at a time when they're in the throes of putting
the plot together, but that difficulty is
also part of the beauty, it's part of the dance. And so, I mean,
part of why I think I even wanted to
frame my talk that way was I think what we need is-- we need ways to figure out how
to marshal more materials, more supports to people
who don't currently have the material
resources to engage their creativity at full tilt. We need to figure out-- I mean, here at the institute,
I come up against this all the time. Students who say,
well, I don't actually know how to even get in the
mindset to write a poem. No one has ever asked me
to write a poem before. How do I get--
what are the rules? I spent four weeks on rules. Not the rules of a
poem, but getting around the discourse of rules
in poetry to say, well, when you just sit still in a
quiet room, what comes to you, trust that. Yeah. I do think that I will train us
to follow our intuition more. Part of-- I think it's
part of the system that is forcing us to listen
to ourselves more to decide whether something
is authentic or not what, it feels like to us when
we're confronted by it. So yeah. And I would really
like to think that it will help us to question
our educational models. Sure. And how are we-- how do you develop young
people's potential? And it's not memorizing
world capitals, necessarily, or learning dates of historical
figures, necessarily, but it's more learning
about lived experiences. And so that's very
compelling, Patti, in your lab's work and all
of the work that all of you shared. OK, well maybe at this point, we
could open it up to the floor. And if you do have a question,
we have two microphones here. So please come up
to the microphone, please introduce yourself. You've generated
so much interest. So please try to keep
the questions brief and we'll try to keep
the answers brief. Yeah, yeah, yeah. You don't all have to
respond to each question, but why don't we begin right here? Hey, everybody. Thank you so much for this. One thing that struck
me in your speech-- or all your talks was the idea
of this Large Nature Model. Mm-hmm. And how you felt it was more
ethical to go and collect that data yourself. Mm-hmm. I'm just wondering
about-- all of you, could you speak to the idea of
how open data sets and maybe the idea of Creative
Commons may be changed or affected or impacted
as we think about creativity and using this information
and what we build? Well, I can start. As I said earlier, we were
mainly frustrated by the fact that the existing models
were not penetrable. Like we could not see the
workings of the model. And that was the intention
behind building our own model to begin with. As for the data sets, we've
been using publicly available visuals and sounds to
create some of the artworks that I showed you. And that idea became
something that we started questioning as well
with all the ethical debates that we've been reading. Because our research
practice not only focuses on generative AI
studies, but also ethical AI. So we've been reading a lot
about people's reactions to their works being
used to train a model, and we wanted to
offer an alternative by bringing in different voices
to help us build that model. And luckily, we
had opportunities to actually sponsor-- get sponsorships to
travel to those places. We're still building the model. We're not sure
whether it's going to be one of those
influential models in the end. I'm going to be really
humble here, but yeah. So in the process
of building it, we're really, really reflecting
on ethical data collection methods. And at this point, since
it's not public yet, it feels great when
we're working on a model to know that we physically
collected this data, but if that feeling
is going to turn into a movement or
an influential model, we don't know yet. Hopefully yeah. I think it's wonderful that
you've, with the studio, moved towards really collecting
your own data from scratch, but of course, that is also an
expensive, time-consuming thing to do. But I think one
thing that we should push for is for
all of these models that people use as tools
to be more open about what data things are trained on, what
data has gone into these models so that you can
know what to expect, what biases also you
can expect, and so on. And it's a bit frustrating that all of the big companies out there, or most of them, are very private now about what data were used-- That transparency seems
critical, especially for academics. We have so many questions. I'd love to keep
moving if that's OK. We'll go to this side. Great session as part
of a great conference. So I just want to
push a little bit on this question about the
limits of property rights and the role of the commons. I'm reminded of the discussions
we had about 2000 with Lawrence Lessig and the Disney case
before the Supreme Court, where he was really pushing on the importance of commons of various forms in cultural artifacts. So I just wondered if
you had any thoughts on how we come to a reasoned
balance between those two. Certainly, yeah. And in community? I think in the
community with artists who are creating
the work, I mean, we already have a great amount
of work in the public domain that I think could be used to
help train these systems if we have an expansive eye. And-- I mean, it's
important to mention, too, that Bessie Smith's
heirs lost that case. King won his case,
Bessie Smith lost hers. And in the dialogue that I've
been in with this case law over time, it struck me that
this question of the commons is opened up over
and over again. And people have
said several times that a voice is
not copyrightable. Voices change over
time for one thing. And is your voice, the
unique sound of it, or the words, right? And so this is an open question,
I think, but the commons are, of course, absolutely
key, but we still need to expand the commons. I mean, this question of
gathering your own data, I think it's important
because the end product is not inherently more important than
the process through which you get there. And I think holding
those things in tension is actually what's needed
philosophically at this moment. Yeah. Very good. OK. Maybe we'll keep going. Next question--
thank you, Joshua. Hi. First, I want to say thank
you to the panel and MIT. I consider myself
very fortunate to be here and hearing all of this. My son goes to a school called
the Carroll School, which is for dyslexic kids. And when he started there,
the ex-head of the school asked me to read a book
called In the Mind's Eye by a professor named West. And it was-- the
premise of the book is that visual learners,
dyslexics, are predisposed to all the advances
in technology. And I read the book-- it's not
the easiest book to get through, but very interesting. Until this week,
I didn't get it. And so listening to all of you
and talking and examples that you've all made about visuals
and how that's part of AI and advancing it, so I gotta ask
the question-- is it correct, that premise, that
dyslexics are uniquely-- have the unique skills,
as we move into this time, as the book says,
of visual learning? So I know I'm a little
self-interested in asking, but there are a lot
of us out there. Well, I might preface
this by saying you may not have expertise in dyslexia,
but I think all of us are educators, and clearly there
are many different learning styles. But please, anyone want
to tackle that question? Well, I would say that even
before AI became so popular, we were moving gradually
towards a world where visuals are more important. So I think that's one of
the wonderful things that is happening today,
that if a kid is not good at just absorbing knowledge
or whatever through text, there are now totally different
forms that you can use. Like my student Pat showed with his Leonardo. Instead of reading Leonardo's
journals, you can talk to him and ask him to illustrate
things from his journals. Maybe some of you will think
that that's not the same thing or that the voice
is not authentic, although we try to make sure
that it only says things that
Leonardo actually wrote. But it's a more interactive
and possibly more engaging way to absorb
some of that material. Yeah. The book on prodigies has
become a book about giftedness and largely a book
about teaching deaf and blind children in
the segregated South somehow. I pitched it as a book on
prodigies, it got picked up, and then it turned into that. And so I've been thinking a
lot about disability education, and especially this frame of
giftedness, and how the way I learned to think
about giftedness in both a kind of elite New
York City private school setting in high school. But first, in this experimental
independent school in Harlem called The Modern School
where we put on plays and we painted, and we
played outside all the time, and our parents were heart
surgeons and janitors and came from a
whole constellation of professional backgrounds,
I learned at a very early age that there was something about
this thing called giftedness that had nothing to do with
a score on a piece of paper. It had nothing to
do with the metrics that I inherited later in
life that would tell me I was smart or
beautiful or creative. And so what I
hope-- and it sounds like I'm hearing from you--
and good on you for reading the books that your
kid is reading. That's a practice
I'm getting into and it's a beautiful thing. But it sounds like you already
have that sense that we all have our distinct minds
and that a gift is something you give away. Nobody else can
determine it for you. And so what I hope
is that at its best, this new technology
will be used to reach the most expansive
group of kids possible, and that will inherently have
kids with disabilities in it. It will have kids who've been
told they're unchosen and don't fit anywhere in it. And so that's one of my
biggest and best dreams for what we can do with this. Beautiful. OK, maybe we'll keep going. Thank you. Thank you very much. Hey, there. Hi. So thank you for the talk. It was fantastic. I also had the opportunity on
Tuesday to see Refik's keynote, and it was fascinating. And it leads me to this
question that I had since then. So songwriters, book writers,
artists, creators in general, often state their inspirations, the elements they draw from to create their own-- well, their own creations. If we pass this
to the AI domain, it's often more complicated
to tag these differences because in human creation,
if that inspiration is taken further, it's plagiarism,
it borders plagiarism. I feel that we're not
ready to tag correctly what is AI imitation,
what is AI innovation. So whether it's with
the current technology or with the technology that
will come in the future-- and I'm talking now
maybe AI sentience, real innovation from AI, are
we ready to tag that correctly? Anyone? I think that will always
be an open question where you define that boundary. I think that's already
the case in music, for example, that
there are lawsuits-- I mean, musicians are always
borrowing from other musicians, and in jazz, for example, that's
what it's all about, almost, referencing others and so on. But then we're
constantly arguing over where the boundary lies between standing on the shoulders of others and stealing. Yeah. Yeah. Anyone else? Maybe we'll try to get
through these last three questions if we can since you're
all been standing patiently. So please. I'll be brief. My name is Lawrence. I'm out here also
visiting from California where we just had
the writers strike end, which was very painful. And a big point in that
was saying no to AI. I think that you
all here-- this day has been fascinating in
showing the capacity for AI to open doors to
our higher selves. Mm-hmm, mm-hmm. But when there's the
corporate powers that have the keys to the
car, they don't always rise to their higher selves. I mean, Pelin, you talked
about this kind of collaboration among your colleagues. How can you-- or we all as
the leaders in this industry implore or help the
corporate folks who have the most capacity
to make the most money do the right thing? Not do what we see with
Smith versus Columbia? Yeah. Yeah. Ha, that's-- That's a big question. Big question not just for art
and creativity, but for AI in general. I mean, AI is defined as-- by Turing over 60 years ago as
surpassing human intelligence, and the whole goal of
the whole research field is to ultimately be better-- make something that can
do more than people. And unfortunately-- or I think that's unfortunate. There's always been another
movement which is about augmenting people
and supporting people in being creative in
everything and intelligence with people like Engelbart
and Licklider and so on. But unfortunately, the
ones that are about, let's compete with
people and be better are dominating right now
rather than the movement that is trying to support
human intelligence and augment human
intelligence and, yeah. I'd like to respond
by maybe talking about something philosophical
about creativity, but then, again, from
a tangible perspective. If you define creativity as
simply creating something new, then AI can be creative and
it can replace any human. But if your definition
of creativity is creating something
new and valuable, then I think we all
have some responsibility to make sure that that
something valuable does not intersect or destroy human
values that we already have. So that would be a
good perspective, I think, going forward
in terms of ethical-- making ethical decisions
around AI implementation. Yeah. And I would just
say quickly, I think you all are already doing it. You went on strike. And you didn't take the
argument just to the bosses, you took it to the public. Yeah. And I think a bunch
of us said, oh, wait, this entire
industry is underfunded, people can't feed themselves
or support their families. I don't love movies just
because they're beautiful, I love movies and television
because people made it. Yeah. And as my friend Tongo
says, like politics mean people did it
and people do it. And I think film art is
a similar kind of thing. So I don't know how much
advice you need from us. Like, you took the labor
power in your hand and-- but I think it's
really important. You made it a public argument. You made them say, OK, you
want to have no background actors ever? You want to fill that with
computer-generated bodies? And I think a bunch of us
said, yeah, dude, that's sick. That sucks. I don't want to watch that. And a human. And so you won the hearts
and minds of people by, I think, going right to the
human core of the thing itself. Exactly. OK, we're almost out of time. Just very quickly. Hello. I'm Akash, co-founder
of an AI company. I'm a statistician. They say that
stories come from-- they ask the question,
where do stories come from? And they answer, stories
come from other stories, including the author's
individual opinion about the society and the
time in which he is living. Also about-- also
dependent on other authors that he is inspired by. This whole process
of combining whatever inspires an author is-- you can define that
as imagination. So if that is the definition,
historical, canonical definition of imagination,
then what AI is doing-- conjoining, combining
in interesting ways of other stories, is essentially
impacting this industry more than any other industry. I don't see AI coming up with
new mathematical theorems, but AI can come up with stories
which mimic an author's imaginative process. If that is the case, how
do you define imagination for an author today? Joshua, would you
like to take this-- Yeah! As the poet-in-residence? I mean, this is
complicated because I've tried to use a number-- Bard, ChatGPT, the whole thing. And it doesn't swing. This is coming from someone raised by musicians and writers and actors: it does not
aspire toward the sound of Whitney Houston's voice, or
Toni Morrison or August Wilson or James Baldwin. It doesn't even
approximate or approach it. And we could say maybe
it will in five years, maybe in 10 years. But I don't even know what that
would mean, in part because I think the thing that sparks my
joy and interest when I read those books is the sense
of another consciousness across time that I'm connected
to a real human person. To me, that's imagination. Like imagination comes
from a human being. We're not prediction
machines, we are listeners. We take our influence
from everywhere. But we're not just
predicting what the next word in
a sentence will be based on all the
sentences we've heard. We're playing. It's jazz. We're playing in open air. And we're riffing
on one another. And I just feel like
that's a distinction we want to hold onto, in part
so AI can become something more beautiful. Actually, to say, yeah,
imagination is our work, this is a tool we use
in the service maybe of human imagination, but
let's work out those orbits and let them be what they are. May we please stop there? That was just fabulous. Yeah, that was amazing. It doesn't swing. There's your answer. [APPLAUSE] I love that. Please join me in thanking
this amazing panel. Thank you. Thank you. OK. So to wrap things
up now, there is, of course, an
additional afternoon symposium on the AI in
the future of commerce, impact of commerce. And David, there's
so much we could say, but I just want to say thanks
to you and to the Media Lab for having us here. We hope all of you
enjoyed the morning. Boy, do we have a
lot to think about. We have a lot to think about. This is the beginning of the
discussion with all of you. Thank you for coming to MIT,
joining our Gen AI Week. And Joan, I want to thank
you, the Morningside Academy of Design, the Media
Lab students, the MAD students, all the MIT students who joined
us and inspire us every day. Patti, thank you so much for
organizing all the students, working with us. And boy, to our panelists,
something special. Panelists and students,
we're so grateful to everyone who participated. And if you want to meet a
human dinosaur-AI mash-up, Pat's right here
in the front row. So thank you all again for
being here, have a great day. [APPLAUSE] Thanks, Betty.