- I remember early in
the days of OpenAI, when I was covering it,
I mean people would joke, like if you ask any employee
what we're actually trying to do here and what AGI is, you're gonna get a different answer. Yeah, artificial general
intelligence, AGI, this term is not actually defined. There's no shared consensus
around what AGI is- and of course there's no consensus around what is good for humanity. So if you're going to peg
your mission to like really, really vague terminology
that doesn't really have much of a definition, what it
actually means is it's really vulnerable to ideological interpretation. - Today on Big Think,
we're gonna be talking with Karen Hao, a contributing
writer for The Atlantic who focuses on technology
and its impacts on society. In this interview, we're gonna
be talking about artificial intelligence and specifically OpenAI and the events that led to the ouster and reinstatement of Sam Altman as CEO. Karen, thank you for joining
us on Big Think today. - Thank you so much for having me. - So Karen, I'm curious what
happened at OpenAI recently? - That's a great question. So in the last week we
sort of saw a very dramatic ousting of the CEO by the
board of the company, the revolt of hundreds of
employees after this happened, and then the reinstatement of the CEO. And to sort of understand what actually happened in this
very chaotic moment, we kind of have to first look at the way that the company was founded. OpenAI is very different from
a traditional tech company in that it was actually founded
as a nonprofit specifically to resist the tech industry. Elon Musk and Sam Altman
co-founded the company on the basis that artificial intelligence
is a very important technology for our time, our era, and its development needs to be shepherded very carefully, and therefore it should not be attached to profit-motivated companies. And so they founded it as a nonprofit in 2015. A few years down the line, in 2019, they realized that this nonprofit structure wasn't actually gonna
help them raise enough money to perform the specific type
of AI research that they wanted to do, which was going to be
very, very capital intensive. And so in 2019 they did
this really weird thing where they nested a for-profit entity or what they call a "capped-profit" entity under the nonprofit. And so what we saw with the
board's actions firing the CEO, Sam Altman, is the nonprofit
has a board of directors that are beholden not to shareholders, but to the original mission of OpenAI, which is to try to create artificial
general intelligence for the benefit of humanity. And so the board, they
haven't actually specified why they fired the CEO, but they said that they acted
with the mission in mind. So it was absolutely nothing related to fiduciary responsibility, nothing related to the customers that OpenAI has; rather, they felt that the company was no
longer headed in the direction that they thought was
aligned with the mission and therefore they ousted the CEO. - There's so much that goes into what you were just talking about. I'm curious to go back to this nonprofit genesis of OpenAI and how that came to be. You mentioned it was built around trying not to have profit motives necessarily, but rather to figure out a safe way to shepherd the creation of artificial intelligence as tools for humanity. I would love it if you could
walk me through the origins of OpenAI as an organization. - Absolutely. So one of the things to sort of understand about AI
research in general is AI is not actually new. It's been around since the '50s, and it was originally an
academic field that then tech giants in Silicon
Valley started seeing massive commercial potential for, and so they kind of
plucked this technology out of the scientific academic realm and then tried to start
deploying it into products. And the thing that's happened
in the last decade in particular is that there's
been an enormous shift in the field. Because tech giants have realized that this technology can be very, very lucrative (Google and Facebook use it for things like ad targeting), they have increasingly pulled more and more researchers from the academic field, from universities, into their corporations to develop this technology, not for scientific discovery, not for any goal other than that they would like to commercialize it and continue to make more money. So the reason why OpenAI was
founded as a nonprofit, the story goes, is that Elon Musk in particular was very worried about Google, because Google had been an early mover in recognizing the commercial potential of AI. It had started building this really big lab, poaching top talent from all around the world, and trying to basically establish a
stronghold in AI leadership. And Elon Musk felt that this
was not the appropriate way to develop AI because AI could be very beneficial for many different things, not
just for commercial products. And that actually, if the
development of AI were attached to commercialization, it
could in fact also be harmful because of things that we
were already sort of starting to see at the time around social media and sort of for-profit
incentives corrupting the use of a powerful technology. And so that was ultimately the vision, this nonprofit vision. But the thing that sort of thwarted this vision, I guess, is the fact that when OpenAI went to hire its founding team, which had 10 members, they specifically brought on this researcher called Ilya Sutskever, who is now OpenAI's chief
scientist. At the time, he was at Google and he already had
a very prestigious reputation. He was the co-author of a
groundbreaking AI research paper that had actually sparked a lot of the commercialization
efforts around this technology. And when they brought him on, Ilya Sutskever had a very particular philosophy around AI research,
which was that in order to see the full potential
of this technology, we need to scale it dramatically. So there were sort of different competing
philosophies at the time. Do we actually have the
techniques for advanced AI, or do we actually need to create more techniques? And he thought we have them, we just have to explode the scale, feed ever more data and ever more computer chips into these AI models, and that's when we'll start
seeing real emergent intelligent behaviors in these digital technologies. So when he made that decision and when OpenAI set on that path, that's when they started
running into financial issues and realized the nonprofit
was no longer viable. - I think it's interesting
that that was sort of a tension point. They had this nonprofit mission to sort of make this technology, and really the 'open' in their name goes back to a lot of things like open source, I imagine, as part of the initial founding, but then they realized that they had to have some sort of commercial arm for the technology. I'm curious, when that decision was made, how many of the players that were involved in what happened just recently were also at OpenAI at the time when they made that decision to create the commercial entity, the for-profit entity underneath the nonprofit? - There were sort of three
main characters at OpenAI for this week of events: There's the CEO Sam Altman, there's the chief
scientist Ilya Sutskever, and then there's the
president Greg Brockman. All three of them were the ones
that created this nonprofit, capped-profit model. So they were the architects of it. And I had actually interviewed Greg Brockman and Ilya Sutskever a few months after they had created that model. And they were very sincere
about this idea that even though they needed to change the nonprofit
structure a little bit in order to raise enough capital for
the things that they wanted to do, that this was somehow
sort of the perfect solve. Like that they were creating
this clever solution to the central problem
of wanting to raise money but also be beholden to the mission. And what's interesting is, in hindsight we can say that it was this very structure that created a very dramatic ousting of the CEO, and then his reinstatement, and now a lot of uncertainty in the air about what the future of the company will be. But the original intention was that they felt they needed to change the structure in order to get the money, but they didn't want to actually change the mission. - So were there any consequences
when OpenAI switched from being a nonprofit to having
the capped-profit entity underneath the nonprofit? - I think the main consequence
was actually just that employees suddenly were
getting higher compensation. So there was, you know, a little bit of controversy within the company. There were people that had joined OpenAI on the premise that it would be a nonprofit. So they were worried: what is this legal structure that is suddenly emerging? What do you mean we're turning into a for-profit, capped-profit kind of hybrid? But one of the things that
this kind of model enabled was that OpenAI started paying
employees more. Within the world of AI research, there really actually aren't
that many senior researchers, because even though this field has been around for decades, there aren't that many people in the world who have the kinds of skills needed to develop this technology and who have also worked in environments where they know how to commercialize and productize it as well. And so OpenAI was actually
losing a lot of its talent, like it would hire talent,
try to retain them, but then lose the talent because Google or DeepMind, which were two
different entities at the time, were just paying more. And by changing into this
weird hybrid structure and raising venture funding,
they were able to issue stock in this capped-profit arm and start giving much higher compensation packages based not just on cash but also on stock. So honestly, the main consequence at that moment in time was that they were finally able to compete on talent. But then of course, the reason why they set up this model was so that they could get investment in. And once you start taking investment, the biggest investor being Microsoft, that's when you start also having strings attached to the money. And that's when the kind
of move towards more and more commercialization and less and less research
started happening. - So who is Sam Altman, and how does his role as CEO play into this picture and potentially into the board's decision to let him go and then eventually rehire him? - Before Altman was CEO of
OpenAI, he was president of Y Combinator, which is
arguably the most famous Silicon Valley startup incubator. He became the president by inheriting the role from Paul Graham, who was the original founder of Y Combinator. And at the time when he was hired as the president, he was really young, I can't remember exactly, but early thirties I believe. And people were really surprised. They were like, "Who is this guy?" And then he rapidly made
a name for himself as one of the most legendary investors, someone who was really good at taking startup ideas and scaling them massively into successful, aggressive tech behemoths. And so you can kind of see with this particular
career path how his imprint has been left on OpenAI, because even though he co-founded it, he wasn't taking a very active role until 2019, when he officially stepped into the CEO role. And before 2019, OpenAI was a nonprofit that was basically just academic; it kind of just operated like a university lab. People saw it as an alternative
to being a professor where you get to do this like fun research and there's not really
any strings attached, and you also get paid a lot more. And the moment that Sam joins the company, or the nonprofit at the time, in 2019, that's when you start seeing the push to commercialize, the push to scale. You know, after ChatGPT, OpenAI now has a growth team that's dedicated to growing its user base. You would never see that with an academically focused or research-focused lab, but it's certainly an iconic feature of the types of startups that Altman was shepherding into the world as president of YC. So I think he is a bit
of a polarizing figure. When I've been interviewing
current and former employees, this has come up: some people see him as one of the most legendary people within the Valley and just love and follow his leadership. Other people find him
very difficult to read and very difficult to pin down in terms of what he actually believes as a person, which makes them very nervous. And some people would go as far as to say that he's a little bit
duplicitous in this regard. And even for me, I find it very difficult to pin him down on what he ultimately believes. Did he rapidly start commercializing OpenAI because he truly believes in the techno-optimist narrative, that this is how you reach beneficial AGI, or is it actually a bit of a habit? He's been doing this for so long that by default he just gravitates towards what he knows and what he's good at. Another kind of example of
this is when he joined OpenAI, he started a fund for OpenAI
to invest in other startups. And at the time people were like, "Why is OpenAI investing in
other startups when they themselves are not profitable?" And the answer is, "Well, Sam Altman's an investor!" It's just sort of habitual for him. I can't personally say
like what he truly believes as a person or what his
values are as a person, but certainly from his career you can see that it makes a lot of
sense why OpenAI has headed in the way that it has. - And I'm curious to talk
about Greg Brockman a bit. You know, he's been around technology in Silicon Valley for a while. He was one of the original members of Stripe; I believe he helped ship some of the early versions of that product. And you know, one of the pieces
of commentary that I saw was that if you're building a ChatGPT wrapper and you're using Stripe
for your payment system, it's very likely that Greg Brockman built
like 50% of your application. So I'm curious to know
more about him and his role in what happened, and just how people conceive
of him as a technologist, but also in his role in OpenAI. - Greg Brockman is very similar to Sam in that he also has had his full career in startups. He originally went to Harvard, dropped out, then went to MIT, dropped out, and went straight to the Valley, joined Stripe, and became the chief technology officer. His adrenaline rush comes from building and shipping products and seeing people use the things that he's made with his own two hands. He talked about that a lot when I interviewed him, right when Microsoft made its first investment in OpenAI. So that is also his instinct. And what's interesting, to
take a step back from OpenAI, is that within the landscape of different AI firms, OpenAI is seen by others as the most Silicon Valley of them all. At the time when it was founded, DeepMind was the other major model for this kind of research, and it was seen as a very academic endeavor: yes, we've been acquired by Google and we're gonna help Google commercialize some things, but it is still going to retain this very academic environment and be kind of away from the Silicon Valley scene of build, build, build and move fast and all of that. So because Sam Altman and Greg Brockman both
come from this background, they were seen by the broader AI community as, okay, this is what OpenAI is about now. When Sam Altman joined and Greg was already there, they joined forces. It's seen as: neither of them are AI researchers, they don't come from an academic background, they come from this commercialization background. Greg was the first employee
of OpenAI in that when Sam Altman and Elon Musk decided, "Hey,
we should put down money to fund this thing,"
Brockman was the first one to raise his hand to say,
"I will make it happen." And he was the one that
recruited the first original team of 10 and then he led the company before Altman stepped in as CEO and he's very much a Sam ally. So part of the reason why in the events when Sam got
fired, Greg immediately announced that he was leaving as well is because these two have a very
close relationship and they share kind of
core ideologies around what's good for the world. And so Greg, when he left, I think that was a moment for
employees that was very scary because when the board says to you that your CEO has been
fired, the first instinct is, "Oh well what did he do?" But when the first employee and co-founder and huge ally of the CEO and one of the most,
you know, senior people also announces that he's leaving, that was when the employees
went, "Oh crap, like what does this actually mean for the functioning of this organization? And is this actually
somehow nothing to do with what Sam did, but somehow a power grab." So I think he is sort
of very respected as an early employee who did engineer a lot of the things early in the company, and he does have a lot of sway within the organization. And he was kind of the canary in the coal mine, I guess you could say, for employees that something bad was happening. - I'm curious to
talk about the employees, because, I mean, you sort of just mentioned it right there that it must've been a whirlwind experience for them. You know, nobody had advanced notice of what the board was doing, not even the investors of OpenAI, of the capped-profit entity, had advanced knowledge, and the employees learned basically when everybody else was learning, or shortly before, about what was happening. What have you heard about what it was like for the employees to experience this? We sort of saw this rallying cry around the leadership team that was exiled from the company. Just talk to me about what happened after the news broke, how employees were feeling, and the events that occurred afterwards. - I think it was a very
tumultuous and very emotional and very sleep-deprived period of days for the employees after Altman was fired and reinstated. Of course, like you said, none of them knew that this was happening, they had no idea what was going on, and the board never really explained why they had ultimately fired Altman. And so the progression of their emotions went from confusion, to fear when Brockman leaves and then three senior research scientists also leave, to anger at the board, like really, really deep anger, because they were like, "If you're going to do something dramatic, we deserve answers as employees." And the longer they didn't get answers, the more worked up they became. And part of this is, I mean many companies within Silicon Valley have this, they really emphasize that
companies are families, and you as an employee are not just an employee of any company; it is your identity. OpenAI takes this to the max, with the fact that they say their mission is for the benefit of humanity. People genuinely believe this, and they think they're dedicating their life to this. It's not just like, "This is my job and then I go home." This is all they think about sometimes. And so it's that level of anger of, if you are going to do something that could ruin this company that we genuinely believe is doing good for the world, how dare you not tell us why, and how dare you continue to leave us in the dark and not treat us as critical stakeholders in this whole fiasco. And so what happened was, kind of organically, the employees started rapidly organizing on Twitter. So they started
posting very similar messages by the hundreds on Twitter. Every time Sam Altman said, "I love OpenAI so much, I miss it," you would see employees retweeting it with a heart emoji. When I opened my Twitter feed, it was just dozens and dozens and dozens of heart emojis, not because I was looking at any OpenAI-specific feed, that was just what was showing up on my regular feed. And then there was the "OpenAI is nothing without its people" line that everyone started tweeting as well. And that was sort of a way to try and pressure the board to give answers. And then of course that ultimately escalated to over 700 employees out of 770 signing a letter saying that if Sam is not reinstated, they're all gonna quit. And so I think another dimension
that's sort of important to add to this is that for most if not all of the OpenAI employees, their compensation packages are majority stock. Bloomberg has a good article on this: the average compensation is around $800,000 to a million dollars, and maybe 60% or so of that is actually stock. So if the company
does not exist anymore, all of a sudden your stock goes to zero. And that was also extremely
stressful for people, because people were banking on it. Some people had already bought houses based on projected income, or were looking to buy houses based on that projected income, and were suddenly worried about paying their mortgage. There were people on visas who, if the company doesn't exist anymore and they don't get hired fast, their ability to stay in the country is jeopardized, and maybe they already have family, and that's gonna throw their entire family into disarray as well. So there were a lot of
other aspects of it, not just the identity
or the ideology piece that led employees to kind
of have this very emotional and tumultuous time. And when Altman was reinstated
there were some great details that were reported in
the information about how employees like gathered at the
office and they were crying and cheering and just, it
was like a huge massive sigh of relief honestly, that
they have their jobs still and that this company still exists and all the things that they've
been working towards are going to continue to exist
in some form or other and that they can move on
with their lives, basically. - This recent situation with OpenAI is not the first
time this company has gone through something like this. I would love for you to walk
me through some of the history of the disruptions that have
happened inside this company and some of the consequences those events have had for OpenAI and the rest of the AI industry. - One of the things,
just to take a step back before we go through the tumultuous history leading up to this point, that's kind of unique about OpenAI, and you see this in a lot of Silicon Valley companies but OpenAI does this more than anyone else I would say, is that they use incredibly vague terms to define what they're doing. Artificial general intelligence, AGI, this term is not actually defined. There's no shared consensus
around what AGI is and of course there's no consensus around what is good for humanity. So if you're going to peg
your mission to like really, really vague terminology
that doesn't really have much of a definition, what it
actually means is it's really vulnerable to ideological interpretation. So I remember early in the days of OpenAI when I was covering it, I mean people would joke
like if you ask any employee what we're actually trying to do here and what AGI is, you're
gonna get a different answer. And that was sort of almost a feature rather than a bug at the time, in that they said, "You know, we're on a scientific journey, we're trying to discover what AGI is." But the issue is that you just end up in a situation where, when you are working
on a technology that is so powerful and so
consequential, you are going to have battles over the
control of the technology. And when it's so ill-defined
what it actually is, those battles become ideological. And so through the history
of the company, we've seen multiple instances when there
have been ideological clashes that have led to friction and fissures. The reason why most people
haven't heard of these other battles is because OpenAI
wasn't really in the public eye before, but the very first
battle that happened was between the two co-founders,
Elon Musk and Sam Altman. Elon Musk was disagreeing
with the company direction, was very, very frustrated,
tried to take the company over, Sam Altman refused. And so Elon Musk exited, this was in early 2018, and actually took all of the money that he had promised to give OpenAI with him. And that's part of the reason why this for-profit entity ends up getting constructed, because the moment that OpenAI realizes they need exorbitant amounts of money to pursue the type of AI research that they wanna do is also the moment when suddenly one of their biggest backers just takes the money away. The second major kind
of fissure that happened was in 2020, and this was after OpenAI had developed GPT-3, which was a predecessor to ChatGPT. And this was when they
first started thinking about how do we commercialize,
how do we make money? And at the time they weren't thinking about a consumer-facing product, they were thinking about
a business product. So they delivered the model through what's called an application programming interface, an API, so other companies could rapidly build apps on GPT-3. There were heavy disagreements over how to commercialize this model, when to commercialize the model,
whether there should be more waiting, more safety
research done on this. And that ultimately led to a falling out between one of the very senior scientists at the company, Dario Amodei, and Sam Altman, Greg Brockman, and Ilya Sutskever. So he ended up leaving and taking a large chunk of the team with him to found what is now one of OpenAI's biggest competitors, Anthropic. - AI has been a technology
that's had a lot of hype cycles and a lot of sort of failed
delivery on those hype cycles. I think a lot of folks
remember Watson from IBM, all the hype that surrounded it and how it was gonna revolutionize healthcare, and a lot of those things that didn't come to pass, or even just the small colloquial examples of it playing Jeopardy, or some of the AI models like AlphaGo that were playing Go or chess and things like that. But one of the things I find particularly interesting is that the fear around these technologies and whether they're safe or not actually caused some folks to not release these models publicly. The transformer architecture that is the basis of this GPT technology, the generative pre-trained transformer, that OpenAI is using for these large language models was actually developed inside of Google before it became, you know, widely released to the public and utilized. I'm curious, when those debates
were happening around the split between Anthropic and OpenAI, was there a similar sort of tension, that we shouldn't be releasing these models without thoroughly testing them, that they're not ready for public consumption? What were the contours of that conversation between the different schools of thought on AI? - In general, including
in the OpenAI-Anthropic split, there have emerged two major camps, with some sub-camps as well. There are two philosophies that exist within OpenAI and also within the general AI community around how you actually build beneficial AGI. One of those camps, in its most extreme version, is the techno-optimist camp: we get to beneficial AGI by releasing things quickly, by releasing them iteratively, so people become more familiar with the technology and institutions can evolve and adapt, instead of withholding it until suddenly capabilities become extremely dramatic and then releasing it onto the world. And also, we build it more beneficially by commercializing it so that we have the money to continue doing what's called safety research. The other major camp is
basically the existential-risk camp. Again, the extreme version of this camp basically says, in order to get to beneficial AGI, we don't want to release it until we know for sure that we've done all of the possible testing, we've tweaked it and tuned it and tried to foresee as much as possible how this model is going to affect the world. And only then do we maybe start releasing it, making sure that it only produces positive outcomes. I think both of these
are very, very extreme in the sense that they've almost become quasi-religious ideologies around the development of AGI and how to actually approach it. And you could say that each camp over the years has sort of cherry-picked examples to support why they are correct in their argument. But when the OpenAI-Anthropic
split happened, it was exactly this disagreement. Sam Altman and Greg Brockman were very much: we need to continue releasing and get people used to it, get more money in so that we can continue doing this research. And Dario Amodei and his sister Daniela Amodei, who was also at OpenAI, were very much in the camp of: no, we should be doing as much as possible to try and tweak and tune this model before it goes out into the world. And that was ultimately the clash that happened then and has continued to happen ever since. - It's clear now that
OpenAI is shipping a lot of AI models available to consumers. I think it's something around a hundred million users that are on ChatGPT. What has changed in terms of the perception of shipping these AI models to the public, and how did that potentially lay the groundwork for the firing of Sam Altman that we experienced last week? - So these camps existed in the company and have existed in the
company since the founding, but what happened in the
last year was the release of ChatGPT. Just as it was very shocking for everyone in the public and kind of a step-change in people's understanding of the capabilities, it was also a dramatic transition point for the company itself. And part of the reason is, when you suddenly put a technology that you've been developing in the company into the hands of a hundred million users, you start to get crazy strain on the company infrastructure and test cases for the ideologies that had until then been operating in the theoretical realm within this company. So for the techno-optimist
camp within OpenAI, they saw the success of ChatGPT and were seeing, you know,
all of these use cases of people using it in
wild and imaginative ways and they thought this is
the perfect demonstration of what we've been talking about all along; like we should be releasing
these technologies iteratively, watching people adapt to them and then look at all of the
amazing things that they do; once that happens, we should continue building on this momentum and continue advancing the productization of our technology. For the existential-risk camp, ChatGPT was also the perfect demonstration of all of the fears that they had around harms of the technology. Again, when you put the technology in the hands of a hundred million people, you're going to see some people using it in really horrible ways, in really abusive ways. And the company was not prepared
for many of these, in part because they didn't actually think that ChatGPT would be a big deal. So they did not in any way prepare for supporting a technology that's used by a hundred million people. And so one of the things that the existential-risk camp got very, very scared of was: if we couldn't even predict how this technology would become popular, how could we predict how this technology could be devastating? ChatGPT was sort of an accelerator for both camps in their ideologies, towards polar opposite extremes. Again, we don't know the actual
reason why the board ended up trying to fire Sam Altman, but I think this context is very telling, because ultimately what we saw with the board ousting Altman is this kind of struggle between the nonprofit and for-profit arms of the company, where the board says the nonprofit is still the mission, and the fact that we're not actually doing this for money should still be the central path forward, whereas all of these people within the for-profit arm, and Sam Altman himself, were thinking, "No, we need to continue pushing ahead with this commercialization effort." So that kind of collision I think very likely, very strongly played into the board's decisions. - And what do we actually know
about the board's decision to fire Sam Altman? Has there been any new
information that has come in that has clarified why the board made its decision, or are we still in the dark about what was happening, what inputs led to that moment? - We are still completely in the dark. The board has issued
very, very few statements, I believe actually just one statement, and it was the one that they issued when they fired Altman. It just said that they had lost trust in Altman as a leader, that they had lost trust in him being consistently candid in his communications, and they haven't really updated it. And I have been talking with a lot of sources within the company, and former employees as well, since the events happened, and none of the employees know either; there has been no communication from the board internally. There has been, of course, massive amounts of speculation. One of the things that came out after Altman was reinstated
was reports from Reuters and The Information about how there had been staff researchers who had sent a letter to the board warning about a so-called breakthrough within the company; OpenAI confirmed to me that this was not actually related to the board's actions. And one of the things to note in this particular instance is that "breakthrough" is a very subjective term. In science, usually something becomes a breakthrough when there has been significant testing, validation, and continued experimentation on an idea, and a consensus forms around whether or not something actually was a breakthrough because it withstood the test of time. In this instance there's no contestation and validation happening, because all of this is behind OpenAI's closed doors. And so breakthrough in this instance was sort of a subjective interpretation that several employees assigned to what they were seeing within the company, and not everyone within the company even agrees with this small contingent that supposedly sent the letter to the board. - It seems like there's
a lot of 'in the eye of the beholder' in the AI industry, where folks aren't necessarily sure what they've built, they're not fully aware of all the capabilities of these tools, because it's almost impossible to test something like a large language model completely in terms of all the things that it's capable of doing. From your conversations with people, how can you even tell whether something is worth considering dangerous or not? What sort of steps do they go through if they're really concerned with safety inside of these organizations; how do they deal with that concern in an actionable way? - One of the things that OpenAI
has developed, and Anthropic and Google also do this now too, is this process called 'red teaming,' where they try to bring in a wide array of experts from cybersecurity backgrounds or national security backgrounds or biology backgrounds to try to make the model do bad things before they release it. So they did this process with GPT-4, and typically the way that it goes is, when the expert figures out, "Oh, I can do X thing with the model, like I can create malware very easily with this model," then the researchers within the company take that feedback and try to iteratively refine the model until it stops doing that. It'll refuse the request if someone says, "Please code me up some malware." This process has also been
very heavily criticized, because OpenAI has from the very beginning had a particular bent around how they perceive risks, one that is more focused on extreme risks such as these existential risks, more focused on national security risks, and they have often ignored other types of harms that are very present and impactful to many people here and now. One of the examples is discrimination: AI models have a lot of discrimination baked in because they are reflective of the data that they're trained on, and it is very hard, basically impossible, to get a data sample that is perfectly, you know, aligned in every different dimension of what we wanna capture about our world. So one of the classic examples of this is that facial recognition does not work as well on dark-skinned individuals or on women as it does on light-skinned individuals and men. And part of the reason was
when the technology was first developed, the photos that were gathered for training it were predominantly of light-skinned male individuals. And this is true of every AI model that has ever been developed. With large language models, we've also seen, with GPT-4 or with large language models in general, that if you prompt it to talk about engineers it will often use the pronouns he/him instead of she/her or they/them. Those biases are kind of codified into the model, and OpenAI has done research
on this more recently, but historically this is one thing that they've kind of de-emphasized within the company, and they've focused more on these extreme risks. And OpenAI has continued to rely very heavily on experts that they curate themselves. Again, this all goes back to who has the power in this situation and how we define what this technology is, whether it works, who it works for. And so much of the way that OpenAI operates is through their lens, with their choices, their determinations, and with very, very little feedback from, you know, the vast broader population in the world. - I'm curious to put the conversation around safety, the way that people are defining whether a
model is safe, in the context of the recent upheaval. If these philosophical differences were the baseline, or at least perhaps some of the reasons why the board moved to fire Sam Altman, how can they actually change the way they operate so that this is no longer a debate internally, and they can move forward with an approach to releasing these models that helps with the commercialization aspect, where they need the capital in order to get more researchers in and build these models and complete what they say is their mission of building artificial general intelligence, but also helps with their mission of doing this in a safe way? - I wanna start by unpacking
the word "safety" first. And I know we've sort of
been talking about a lot of different words with
squishy definitions, but safety is another one
of those where AI safety as OpenAI defines it,
is kind of different from what we would typically think around like engineering safety. You know, there, there have
been other disciplines, you know, like when we talk
about a bridge being safe, it means that it holds up and it works and it resists kind of
collapsing under the weight of a normal volume of traffic or even like a massive volume of traffic. With AI safety, the brand
of OpenAI's AI safety, they- it is more
related to this, this kind of extreme risk they have. Again, they have started adopting more of this like also focusing on current harms like discrimination, but it is primarily focused
on these extreme risks. So the question, I guess, to kind of reiterate, is: will OpenAI continue to focus on research that is very heavily indexed on extreme risks? I think so. But how are they going to change the structure to make sure that these ideological clashes don't happen again? I don't actually think that's possible, and I also think that part of what we learned from this weekend is that we shouldn't be waiting for OpenAI to do something about this. There will always be ideological struggles, because of this fundamental problem that we have, which is that no one knows what AGI is and no one agrees on what it is. It's all a projection of your
own ideology, your own beliefs. And the AI research talent pool and the broader Silicon Valley talent pool of engineers, product managers, all of those people are also ideologically split along these techno-optimist versus existential-risk divides. So even if you try to restructure or rehire or shuffle things around, you're always going to get an encapsulation of this full range of ideological beliefs within the company, and you're going to end up with these battles, because of disagreements around what we are actually working on and how we actually get there. So I personally think that
one of the biggest lessons to take away is for policymakers and for other members
of the general public and consumers to recognize
that this company and this technology is
very much made by people. It's very much the product
of conscious decisions and an imprint of very specific ideologies. And if we actually want to facilitate a better future, with better AI technologies and AI technologies that are also applied in better ways, it's actually up to much more than OpenAI. It's up to policymakers to regulate the company, it's up to consumers to make decisions that financially pressure the company to keep moving in directions that we collectively as a society believe are more appropriate. And ultimately what this boils down to is, I think AI is such an important technology and so consequential for everyone that it needs to have more democratic processes around its development and its governance. We can't really rely on a company or a board that is, you know, tiny to represent the interests of all of humanity. - Yeah. In some ways the
mission of OpenAI of sort of doing it for the benefit of all humanity is kind of interesting to have in a technology space, computers broadly, where open source protocols and ways of working have been such the norm. The board firing Sam Altman and ultimately rehiring him seemingly kicked off an awareness of the importance, for some people, of open source AI development, and particularly of models in that arena. You're mentioning the democratization of AI tools, and for some folks that is the democratization of these tools: the fact that you can release these models on GitHub or use them on Hugging Face and everybody has access to them. I'm curious about the acceleration of that space, tied with the fact that there are gonna be many more competitors that are looking to capture customers off of the turmoil that existed within OpenAI. In some ways it feels like the race for being dominant players in this space might move much more quickly than before, when it seemed like OpenAI was just the dominant player and nobody was going to take away the customer base that they had or the lead that they had in the models that they had created. I'm curious about your thoughts there. - With open source models,
I would not describe that as democratizing AI development
in the way that I was trying to evoke earlier. The thing about democratizing AI development and governance is that people should also be able to say that they don't want things to be released. So, you know, Meta is sort of seen as a bit of a champion around open source AI development. They've taken a stance of: we're going to release these models and allow anyone to commercialize off of these models. But no one other than Meta actually has a say about whether they release those models, so that in and of itself I think is undemocratic. And part of the
issue as well with the way that Meta so-called "open sources" its AI models is that they allow anyone to take them, download them, and manipulate them, but they don't actually tell anyone how the technology was developed. So one of the things that certain researchers have pushed for heavily is that Meta, or any company, could actually open source the data that they use to train these models. You could open source the data and understand far more about what's actually being poured into these technologies, and you wouldn't necessarily accelerate its sudden proliferation everywhere. So one of the concerns
with the way that Meta behaves, which an OpenAI or another company with a more closed-model approach raises, is, "Oh look, Meta is just accelerating this race, this competitive race, and actually creating more dangerous dynamics," which gets at your question of whether that actually makes things worse. And I would say there are actually many ways to increase the democratic governance of these technologies, and the scientific examination of these technologies, without actually accelerating this race, such as open sourcing the data. And by doing that, you enable many more scientists to study what we are even feeding these models, and then also study how what we feed in relates to what comes out. And then you hopefully end up, through a lot more scientific research and debate and contestation, with better models that don't break, that work for more people, and that are hopefully less computationally intensive and less data intensive, so they're not as costly to the environment. And you would end up with what I think would be more beneficial AI development. - The dust hasn't completely settled yet; it seems like
there's gonna be an investigation into the board's decision. Sam was previously on the board, but part of the deal that was made is that he is no longer on the board, and neither is Greg Brockman. There's a temporary interim board that's hopefully going to appoint a larger board. You know, given where we are now, what do you think are the big lessons going forward, and potentially where is this space likely to move in 2024, just given this upheaval at the end of the year that nobody was expecting? - I think the biggest lesson is that self-regulation just doesn't work. I mean, this was the most creative and unique solution for self-regulation that has potentially ever been seen in Silicon Valley history: the idea of having a nonprofit
structure that is beholden to no financial interests, and clearly it spectacularly failed. And one of the things that I've been thinking a lot about throughout this whole sequence of events is: could it have actually happened any other way, if you set up the structure that way? A lot of people were criticizing the board, you know, they did not communicate things in the right way, and they still have not actually given full transparency about their decision, but arguably they actually did their job. The job description that was written by Altman, by Brockman, by Sutskever was that if they believed that the CEO was no longer upholding the mission, they fire him. So at some point, it probably was going to happen, and the fallout would've been dramatic. And the fact that it couldn't have happened any other way suggests that in general, even this super creative, over-engineered way of self-regulation is kind of a sham. And the only way to actually regulate the development of these technologies is to have governance by people that we've elected, the institutions that we've already set up to do these kinds of things. And I think that is
like the most important lesson. I know a lot of policymakers are already moving very aggressively on trying to figure out a way to regulate these companies and regulate this technology, but until we get to that point, I also think that anyone in the world who is going to have AI impacting them, their education, their work, their life, should recognize that they also have a voice in this. We collectively watched these events happening at OpenAI almost completely from the sidelines, right? None of us were able to participate in this. None of us were able to vocalize anything. But policymakers are working on regulations right now. There are many members of Congress that are working on regulations, there are many agencies, in the U.S. or in Europe or in China or anywhere, everyone is advancing on these regulations, and in the U.S. we have a unique ability to actually tell the people who represent us what we want to see in those regulations. And I think this was sort of a wake-up call and a rallying moment for all of us to realize that this is what happens when you take a step back and don't participate in voicing your concerns around one of the most consequential technologies. - In the lead up to, you
know, the release of some of OpenAI's models, there's been sort of a speaking tour of folks going to Washington, talking to legislators about AI, and the worry at that time was about regulatory capture. Are folks going to essentially gate the technology in such a way that smaller players are not gonna be able to play ball? And we've seen regulatory capture happen a lot within the political realm in Washington. But there's also this question of effectiveness in terms of regulation. Just because a regulation has passed doesn't mean it's actually a good regulation, or that this body of Congress is actually able to regulate this fast-moving technology well; they can't even pass a budget, so how are they gonna keep up with the pace of AI change? So I'm curious about that as a tool for dealing with AI safety, because in some sense it feels like, one, the legislative body or processes are capable of being captured by interested parties, and two, even when they do regulate, sometimes they just do a poor job; they miss the thing that is the key regulatory factor. So I'm curious about your conception there, and how to deal with some of the messiness that comes with those types of approaches to dealing with technological safety. - Regulatory capture is a huge issue and it is definitely a big
concern of mine. One of the reasons why we would naturally see regulatory capture in this moment, regardless of whether it's OpenAI at the helm, is that there is a particular narrative that in order to understand and shepherd AI development, you have to be an AI expert. And I think that narrative is completely wrong, because if AI affects you, you have a say. And actually, stories about people who are impacted in unexpected ways by these technologies are, as a reporter, one of the most enlightening types of stories for me in understanding how a technology should be developed: seeing how it falls apart and seeing when things that were unanticipated end up happening in the real world. And in OpenAI's case in
particular, they have also tried to solidify this narrative of expertise by saying, "Well, we're the only ones that see our models," without necessarily acknowledging that it's in part because they won't let anyone else see them. And because it is important for regulators to engage with the developers of these technologies, sort of by default, regulators just seek out OpenAI's opinions on what they should do, or Google's opinions on what they should do, or Meta's opinions on what they should do. And that's how regulatory capture happens: there's already a baseline belief that only people with expertise should participate, and then on top of that, companies are trying to entrench and fuel this narrative, and then policymakers buy into the narrative. And that's how you end up with Sam Altman on this global tour seeing all the heads of state, and the heads of state not necessarily creating the same kind of grand welcome for other stakeholders within this AI debate. You're right also that
there are concerns around how effective that regulation can be. I do think what I'm talking about with like having more
people speak up about how AI affects them and their concerns about the
technology is one antidote to ineffective regulation because the more that
policymakers can understand the literal real-world examples
of the technology interfacing with people, the more that they can design
regulation that is effective. But the other thing is, I think we focus a lot on federal-level regulation and on international regulation, but there's a lot that happens at the local level as well, like school boards. Schools are thinking about how to incorporate AI into the classroom right now. And as a parent, as a teacher, you should have a say in that. If you're a teacher, you're the one that's using this technology and you're the one that knows your students. So you will be the most informed person in that kind of environment to say whether or not you think this technology is going to help in the general mission to educate your kids. It's also that police
departments are acquiring AI technologies, and people within cities should have a say in having more transparency around the acquisition of these technologies and whether they should be acquired at all. And I think in these local contexts, regulation is sometimes actually more effective, because it is more localized, it is more bespoke to that context, and it also moves faster. So I think that is an important dimension to add: when I say "Speak up and voice your opinions," it's not just to the federal agencies, it's not just to the members of Congress. Within your city, within your town, within your school, within your workplace, these are all avenues in which you can speak up and help shepherd the development, adoption, and application of the technology. - Karen, thank you so much
for joining us on Big Think and sharing your expertise
with our audience about OpenAI and all the things that are
happening in the world of AI. - Thank you so much, Robert.