- Matthew, imagine this: It's 2030, and intelligent machines
can analyze, predict, control, and create as well as or better than humans, even better than most knowledge workers with specific expertise. So does that mean executives should let autonomous artificial
intelligence run their companies, doing everything from creating
products to marketing them? - I'm not sure that it's worked out as everyone thought it
was going to work out back in 2024 when we
first thought about this. There are some complications
to handing control of these things over to
intelligent machines. - That's Matthew Sinclair, BCG partner and vice president of engineering at BCGX. And I'm your host, Patricia Sabga, filling in for Paul Michelman. Welcome to "Imagine This..." where we take a trip into the future that we hope will challenge the way you think and act today. Throughout history, technology
has replaced workers, especially those skilled
at making things by hand. Think of the loom or the printing press. But now technology is coming for knowledge workers, artists, writers, and yes, consultants. Today we're going to consider
a future where AI can generate and iterate on ideas as well as or
even better than most humans. Also joining us today is
GENE, my AI-powered co-host. Welcome, GENE. Please go
ahead and introduce yourself. - Hello, Patricia, and hello
to everyone tuning in. I'm GENE, your AI-powered co-host, crafted to stir the pot with a bit of digital seasoning, delighted to be here to dive
deep into the possibilities and puzzles of AI in our future. Let's get this conversation started. (theme music) - A quick note on how we use GENE. GENE is processing and responding to our conversation in real time. We haven't scripted any of GENE's questions or commentary, although we do edit down
the entire conversation for length and clarity. Let's start by explaining
how we got here -- a future where AI is so good at creative tasks that it can replace knowledge workers with many years of experience. That is a terrifying prospect for many people, me included. But before we explore
those fears, tell us, Matt, how do you imagine that
technology will evolve to get us to that point? - Well, if we simply project forward from where we are today, the improvements in the capabilities of large language models, GenAI, and AI in general have been so profound over the last, I would say, five years
that it's relatively easy to imagine certainly general intelligence and maybe even superintelligence in the time frames we're talking about. - So what would AI do
that it can't do now? - So the big difference between what AI can do today and what it can do in the future comes down to one of the fundamental differences between humans and machines. Humans are good at taste and wisdom and empathy and values and ethics, and machines are good at speed and efficiency and scale. In order to be humanlike or even to supersede human capabilities, AI has to be able to
develop those extra skills, those taste- and wisdom-type capabilities. It really remains to be
seen whether that's a thing that a machine can do, but
if it is to reach and exceed human capability, then that is something
that it will need to do. - OK, so if AI can move in and do this, this really speaks to the fears surrounding that future. What kind of workers would be most at risk of losing their jobs, their
livelihoods to robots, and what human skills
would still be needed? - So let me just give
you kind of a metaphor. If you ever want to
do something, you have to know why you want to do it, you have to know what you want to do, and then you have to know how to do it. OK? So, say I want to hang a picture on the wall: I want to do it because I would like to improve the look of my lounge room. I need a hammer and a nail, and I need to know what
those two things do. And then I have to know how to use the hammer and the nail to do it. Once machines work out
how to do things, right, then the power balance
shifts from knowing how to do something to knowing what to do. I call this the move from imperative to declarative interactions with machines. So imperative means I have to
know everything about not just what to do but how to do it, you know, hammering the nail or writing the code. But in a declarative world, what I do is declare my intentions, I express my preferences to the machine, and then the machine goes and works out how to execute that. Now that has a ton of implications for the way the workforce is set up, for the way creativity is expressed, certainly for how computer
programs are written, for how books are written maybe.
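To make the imperative-versus-declarative distinction concrete, here is a toy sketch, not from the episode: the imperative version spells out every step of the "how," while the declarative version states the intent and lets the language's machinery work out the rest.

```python
# Toy illustration (editorial sketch, not from the episode) of imperative vs. declarative style.

# Imperative: I spell out *how* to do it, step by step.
def top_scores_imperative(scores, n):
    sorted_scores = []
    for s in scores:                 # copy the data myself
        sorted_scores.append(s)
    sorted_scores.sort(reverse=True)  # order it myself
    result = []
    for i in range(min(n, len(sorted_scores))):  # take the first n myself
        result.append(sorted_scores[i])
    return result

# Declarative: I state *what* I want and let the machinery work out the how.
def top_scores_declarative(scores, n):
    return sorted(scores, reverse=True)[:n]

print(top_scores_imperative([3, 9, 1, 7], 2))   # [9, 7]
print(top_scores_declarative([3, 9, 1, 7], 2))  # [9, 7]
```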
- So in that declarative world that you described, how do you envision the
overarching relationship between human intuition
and AI decision making? - I like to think about this in terms of what I call a creative loop. And the creative loop is a
very broad term that refers to any kind of knowledge work. You know, if you think about what an artist does when they're painting or an author when they're writing a book, you have an idea, you
generate some options, and then you select those options down, and iterate, and so on. If you have a machine that can do parts of this creative loop for you,
a couple of things happen. You get to generate way more options than you otherwise would've. And so you start to explore spaces that humans on their own wouldn't have otherwise been able to explore. And you then can select
from those many more options, and your iterations can
occur much more quickly. So there's a whole field
called generative design, which is just exploding at the moment. The idea behind generative design is rather than designing
a part, say for a car, what I do is I express some parameters and some constraints that
part needs to satisfy, and I let the machine go and generate as many options as possible that satisfy those constraints. And then I, as the designer,
I look at the options, and I use my taste and wisdom
to select the options down to the ones that are most appropriate. And what happens, particularly with parts and physical product design, is that you start to get these
very unusual-looking, organic-style parts
that are very different from what humans would do on their own. Human designs tend to be all angles and straight lines, but when you let the machine
sort of run away with options that are outside
normal human experience, they generate these fascinating, organic-style designs that are lighter, stronger, use less material, and so on. And so that's one example of how a machine working with a human, augmenting rather than replacing the human, gets you a much better outcome than either the machine or the human would have achieved working on their own.
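To make the generative-design loop concrete, here is a toy sketch, not drawn from any real engineering tool; the constraints and the stand-in formulas are purely illustrative. The machine proposes thousands of candidate parts, the declared constraints filter them, and a human still makes the final selection.

```python
import random

def generate_candidates(n):
    """Machine step: propose many random (width_mm, thickness_mm) beam designs."""
    return [(random.uniform(5, 50), random.uniform(2, 20)) for _ in range(n)]

def satisfies_constraints(width, thickness, min_strength=400.0, max_mass=600.0):
    """Declared constraints (illustrative stand-in formulas, not real engineering)."""
    strength = width * thickness ** 2   # stand-in for a strength model
    mass = width * thickness * 1.2      # stand-in for a mass model
    return strength >= min_strength and mass <= max_mass

candidates = generate_candidates(10_000)
feasible = [c for c in candidates if satisfies_constraints(*c)]
# Human step: taste and wisdom. Here we simply shortlist the lightest few for review.
shortlist = sorted(feasible, key=lambda c: c[0] * c[1])[:5]
print(f"{len(feasible)} feasible designs; shortlist for human review: {shortlist}")
```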
- OK, so GENE really wants to weigh in here now. GENE, go ahead. - Matthew, intriguing points about the shift from imperative to declarative interactions. Could this change amplify the risk of losing sight of ethical frameworks while chasing efficiency in decision making? How do we mitigate that? - Yeah, I think that's
a fantastic question. And the mitigation is, I think, to keep humans in the loop. For machines to have
complete autonomous control of things really runs the
risk of them getting away and doing things that
may be outside our values or outside our ethics. And so in my perfect world,
the perfect world of humans and machines working together, there's always a human in the
loop somewhere with some kind of oversight or some kind of co-creation partnership going on between the human and the machine. - So let's have, let's dive a little bit deeper into this. GENE, I really want to bring you in here. Take us inside, let's say,
a pharmaceutical company that has completely handed
over its entire research and design division to
AI bots that are capable of independent creative thought. What would the headquarters look like? How would this firm be organized, and what role would be left for humans? - In a pharmaceutical company
where AI handles all research and design, the headquarters
might resemble a high-tech hub bustling with data scientists and engineers rather than
traditional researchers. Humans would likely oversee AI decisions, ensuring ethical alignment and interpreting complex
data into actionable business strategies. - So I wonder if a machine in control of an R&D process would need to bring everyone physically together. I wonder if we might actually
see a much more physically distributed R&D capability rather than a physically centralized one. I think the rest of
what GENE said is pretty much in line
with my thinking. I am very worried about us
losing control, particularly with something like drugs, you know, drug discovery, drug testing. Would we allow a world where
the machines were making decisions about how to
test drugs on humans? That seems like very high risk. - OK, so let's stick with that and broaden it a little bit because you're discussing one
of the potential drawbacks of letting computers take over humanity's creative process. So let's take that a little bit deeper because these large language
models have been built on the back of human creativity. So where do they get their
ideas if humans stop creating or are cut out of the process? What would this do, for example, to the evolution of art and design? - I have this phrase, I call
it the tyranny of the banal, and someone else called it
the average of the internet. And so you can sort of pick
a date, let's just say late 2022 roughly, when ChatGPT was first released. From that period onwards, you could argue that there is no content on the internet that is not tainted by the output of a generative AI model. If you keep pushing
data into these models that is simply the output of older models, is there a point where we just sort of converge on the most boring
average of all of the inputs and run out of creativity
as it were, right? And I think that's
a very real problem. But I guess the bottom line, in relation to creativity, is: where do we
get net new content from? And historically, that's been from humans. In the future, in 2030, are we expecting to get net
new content from machines? If that's true, is there then
a sort of decay over time of that content as it gets fed back in and sort of feeds on
itself and becomes more and more average, more
and more banal over time? - Matthew, hold that thought because we're going to take a quick break, and when we return, we'll look at how creative AI could impact
an entertainment company. (theme music) - Hi, I'm Bill Moore, and I'm part of the team that built GENE. If you're curious about how
GENE works, stick around after the episode, and we'll
take a peek under the hood. (theme music) - Welcome back to "Imagine
This..." I'm Patricia Sabga. Let's get back to our conversation with BCG's Matt Sinclair. And let's start with another example, this time an entertainment company, but instead of having
the bots do everything from writing the movie
to directing it, this company has humans working together with the machines. How would that work, Matt? What are the advantages of
keeping humans in the mix? - So I really love this thinking. The way I talk to clients about this is imagine if everyone in
your firm could have, you know, an extra 10 points
of IQ to do their work. Imagine what they would be able to do in addition to what
they're currently doing. So rather than thinking
about replacing them, think about augmenting
them such that you give them a little sort
of superpower boost. And so if you apply that to this scenario, media and entertainment, we're back to that creative-loop question. Imagine how powerful your
creative team could be if they can generate a hundred or a thousand times more options to select from than they
would otherwise be able to on their own, and how quickly you can iterate on that creative process to get to where you
really need to be. So I think this is a perfect example of the sweet spot of augmentation rather than replacement. - I believe GENE has another
question that they want to ask. - Absolutely riveting discussion, Matthew, but let's consider how smaller enterprises without massive AI frameworks might navigate the innovative yet murky waters of
AI-driven content creation? - Well, I think one of the
most fascinating things about this entire space
is that the capabilities are moving to the operating system. And I use operating system in the broadest possible
sense of the word. Every single piece of
technology that you deal with has these capabilities
built into it. So just to take a simple example: Photoshop is a tool that designers use to generate images. Now it has a thing called
Firefly, which is a way to generate images from
an unencumbered dataset of images based on text prompts. OK? So Photoshop is a
tool that anyone can use, and that GenAI capability has been added into it as a feature. And so if you go looking
for a service somewhere, you can almost invariably
find an API that you can talk to that provides the particular GenAI or even AI capability
that you're looking for. Now, there's a question
about cost, OK, and scale. So when you use those as-a-service offerings, the cost is substantially higher, but if your volume is low, then it can make sense for small players to use them. If you're a big player, at massive scale, then it makes sense for you to own the capability yourself. But that discontinuity, the point at which owning it makes sense, sits at quite a high scale. - So in 2030, you've got humans and machines working side by side. And I really want to revisit this idea of the human-machine creative partnership. How are humans going to approach collaborating with bots? - Anyone who's done anything,
even the simplest things, with a large language model knows that if you ask it a stupid question or a naive question, it'll hallucinate nonsense back at you. But if you ask a subtle
question or a nuanced question or a question that comes from wisdom, you could get fantastic results. I might say something along the lines of, "Given the seven story arcs that exist, give me a bullet-point version of the hero's journey to help me skeleton out a story." And what you're doing there, in my view, is using the machine for the thing the machine's really good at, and using the human for the thing that, arguably, humans are good at. So the machine, with a good question, gives you a structure, and then the human can embellish the structure.
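As a rough illustration of the kind of declarative prompt Matt describes, here is a minimal Python sketch; llm_complete is a hypothetical placeholder for whatever model client you actually use, not a real API.

```python
# Editorial sketch of the prompting pattern described above.
# `llm_complete` is a hypothetical stand-in for a real LLM client call.

def skeleton_story_prompt(arc="the hero's journey"):
    return (
        "Given the seven classic story arcs, give me a bullet-point version of "
        f"{arc} that I can use to skeleton out a story. "
        "Return one bullet per beat, no prose."
    )

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your model provider.")

# outline = llm_complete(skeleton_story_prompt())  # machine supplies the structure
# A human then embellishes the structure the machine returns.
print(skeleton_story_prompt())
```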
- GENE has another question that they would like to ask. Go ahead, GENE. - Do you think there's a risk
the entertainment sector might cause us to cycle
through cliches faster due to their data-driven nature? How do we keep storytelling fresh and unpredictable in such a scenario? - At the risk of being
a bit cynical, at the moment, I think all we see out of the
major studios is iterations of previous ideas. If I have to see another superhero movie, I think I'm going to give
up going to the cinema. I wonder if we could do any
worse than we've been doing over the last few years. But you know, in all
seriousness, we need a way to inject novelty into these processes. And if they are just a statistical remix of the material they've been trained on, we're going to lose
that novelty over time, and it will decay. Now, there are two different
ways to inject the novelty. One, I know how to do, and one, I'm sort of invoking
a bit of magic to happen. So the one I know how
to do is that we need to have humans involved because that conjecture, the
ability to generate new ideas, net new ideas, seems to be,
at least at the moment, a uniquely human trait. The second thing is that, somehow or other along the way to artificial general intelligence, which we're arguably heading
towards, machines develop the ability, first of all to reason, and second of all, to
generate net new ideas, you know, generate conjectures. Once you get to that, then the ability to generate net new content, I think might come from
the machine as well. And then, you know, the
role of the human becomes an interesting question. - GENE has a follow up
for you. Go ahead, GENE. - While discussing AI's
evolving creative role, have we considered who arbitrates the artistic integrity of outputs in such co-created pieces, potentially leading to conflicts over authenticity or artistic credit? - At the moment, you need to have a, I can't remember the exact phrase, but it's material human input
into a creative artifact in order to obtain copyright. For example, there was a
famous case of a monkey that took a photo with a photographer's camera. And the US Copyright Office denied copyright on that picture because the
image wasn't generated with material human input. Now, if we are just asking
a large language model to output an image, there's a real question there about whether or not there's
been material human input into the generation of that image. But if the machine gives me a starting point and I then manipulate and work over that image, then I think you can make the case that there's been
material human input, and the copyright would
be granted to the person. It's a question of legal
principle and precedent. And this tech is so far ahead of where the law is at the moment that we just haven't had the precedents generated by the courts. - OK, GENE has another question to ask. Go ahead, GENE. - Matt, given the rise of generative AI in creative processes, how are we preparing the
next generation of creatives and engineers to think
critically about AI's role in art and innovation? - I think this question
is actually broader than GENE phrased it. Think about what's going on here. There's a potential that we hollow out the middle layer of the entire sort of creative class, as it were. And so what ends up happening is, if that world exists as I predict, where we
end up in this declarative relationship with machines, the folks with wisdom and experience know the right questions
to ask the machines in order to get the output that they want. You don't need that middle layer of doers anymore because
the machines are doing it. But then very quickly, in less
than a generation, you end up with no one in that workforce
who can then be promoted or develop into those wise folks who know the right questions to ask. There's a cliff out
in our future, right, where we've been doing this
for 10 years, 15 years, and all of a sudden
we have no one who's come up through the ranks, who knows the questions to ask because they haven't learned
the right questions to ask by doing over their career. And this applies not just to art and film or media or whatever, but take professional
services, take consulting. I think this is profoundly disruptive to the consulting business, profoundly disruptive over
a 10-, 15-year period. - So, GENE, this one's for you because you've brought
up a lot about ethics, human dignity, preserving
critical thinking. What are some of the irresponsible ways that companies could use this
superpowered generative AI? - Irresponsible use of
generative AI could include using it to create deepfakes that deceive, making autonomous decisions
without ethical oversight, or exploiting biased data
that reinforces stereotypes. Ensuring transparency, accountability, and inclusive design are key to mitigating such risks. - Matt, does that ring true to you? How do we guard against
those irresponsible ways that companies could use
superpowered generative AI? - So the first point there
from GENE about deepfakes is a really serious one. So you can pretty much
assume now that any content that you see online or even in real life,
maybe billboards or cinema and so on, you can pretty much assume that content has been generated. And the question that arises
is, well, what is truth? We've all seen videos
of the pope rap dancing, and there was a fire at the Pentagon or something just recently
and so on. All of these images are very, very difficult to spot as fake other than by the sheer ridiculousness of the image. OK? If you just looked at it
from an image point of view, it's really hard to tell. You really have to know
what you're looking for to tell, and even then it's hard. And so I really worry
about the average person's response to this kind of thing. First of all, they may
not know that it's fake, but second of all, if
they do know it's fake, they'll be forced to assume
that everything is fake. That's a huge worry, right? How do I tell if an image
is generated by a machine? And I actually think that's completely the wrong question, right? And the reason it's the wrong question is because I think in principle
that's impossible, OK? And so rather than think
about whether or not an image or a piece of content has been
generated by a machine, what we should really
be concentrating on is, is this content authentically generated by an authentic human, OK? And that's what we
should be worrying about. And this is one of these
billion-dollar ideas. If someone can work out a way to do this with proper cryptographic nonrepudiation and so on, that is going to be enormously valuable in our society. And I welcome anyone who wants to talk to me about ideas for how to make that work. I've thought a lot about it, haven't quite cracked it yet though.
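As a hedged sketch of one ingredient of the cryptographic nonrepudiation Matt mentions: a creator signs content with a private key (here using the Python cryptography package's Ed25519 primitives), and anyone holding the published public key can verify the content hasn't been altered. This establishes provenance of a key holder, not proof that a human rather than a model made the work, which is the part Matt says remains uncracked.

```python
# Editorial sketch: provenance via digital signatures, assuming the `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()   # kept secret by the creator
public_key = creator_key.public_key()        # published, e.g. in some registry

content = b"Final cut of my short film, v1.0"
signature = creator_key.sign(content)        # distributed alongside the content

try:
    public_key.verify(signature, content)    # raises if the content was altered
    print("Signature checks out: content is unmodified and traceable to this key.")
except InvalidSignature:
    print("Signature invalid: content was tampered with or isn't from this creator.")
```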
- Beyond deepfakes, what other irresponsible uses are you concerned about? - So I think the other one
AI in general, is biases. It's a sad fact that humans are biased, but most of our biases are unconscious. Now, the interesting thing
that happens is when we encode the data of humanity into systems, we're actually taking
those unconscious biases and we're making them conscious. Now, the first-order outcome
of that is that we get to see these horrific biases on display, you know, the translation system that translated from Hungarian, which has no gendered pronouns, into English and produced things like "he's clever," "she's pretty," right? But there's actually, I think, a bit of nuance to this, which is that once you take a bias from unconscious to conscious, you know about it, and you can actually do things to
remediate those biases. - So GENE wants to continue with that idea of bias and inclusion. Go ahead, GENE. - How might such systems
be taught to recognize and respect cultural
diversity within their operations and outputs? - I think the only way
that systems can be taught to be diverse and representative of wider audiences is by
having better datasets. Having a better dataset
will result in a less biased training outcome. Going back to what I said earlier, if we do use a dataset that is biased and we turn it from
unconscious to conscious, then we can apply some
machine-learning techniques and some data science
techniques to that dataset to normalize it in a way that
makes it more inclusive. But beyond that, really it
just comes down to the data.
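One simple data-science technique of the kind Matt alludes to is reweighting an imbalanced dataset so under-represented groups contribute equally during training. This is only an editorial toy sketch with made-up records, not a full fairness treatment.

```python
from collections import Counter

# Toy records with a made-up "group" attribute that is badly imbalanced.
records = [{"group": "A"}] * 90 + [{"group": "B"}] * 10

counts = Counter(r["group"] for r in records)
n_groups = len(counts)
total = len(records)

# Reweight so each group contributes equally to training overall.
for r in records:
    r["weight"] = total / (n_groups * counts[r["group"]])

print(counts)                                                   # Counter({'A': 90, 'B': 10})
print({g: total / (n_groups * c) for g, c in counts.items()})   # approx {'A': 0.56, 'B': 5.0}
```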
- Let's presuppose that the future is very bright. So what do you think, Matt,
is the best-case scenario? How can we use these creative
machines without descending into a world with nothing but tired, stale, recycled,
bias-infused ideas? - I keep coming back to this idea of augmentation rather
than replacement, right? So I want to draw on an actual
example that I look to for some hope that the
future ends up in a good spot and closer to utopia than dystopia. There's this concept called "a centaur." And a centaur is a human working with a machine in order to play chess. And at least at the moment,
the best chess players in the world are in freestyle chess. Freestyle chess is this version of chess where you're allowed to do whatever you want. There are no rules: it's not only a human or only a computer; you can literally use whatever technique you want. The current best freestyle
chess players are human-and-computer combination centaurs. And so there are some arguments about whether or not that will remain the case, but at least for now, that's true, that that's the best outcome. And so I keep coming back to this idea that humans working with machines is the right outcome. And if I can use a prop, you can't see it on the podcast, but I'm holding up my phone. I'd argue that I've very much merged with my machine at the moment. It's not biologically connected, but it's certainly in my
pocket everywhere I go. And if I lost it, I'd be in trouble. That's the outcome
that's closer to utopia. It's a version where
humans and machines merge rather than being at odds with each other. - So, GENE, let's bring you in here 'cause Matt's just described what he calls his closer-to-utopia future where humans and machines merge. GENE, can you throw us a curveball? What is an event or perhaps
an unexpected technological evolution that could throw
this future into doubt? - Ah, a true curveball indeed. One possible event that could disrupt the harmonious future of
human and AI integration might be a global ethical
backlash against AI, where a widespread movement
pushes for stringent regulations on AI development and deployment, potentially halting
advancements and collaborations. What do you think, Matt? Could societal pushback
reshape our trajectory? - Yeah, absolutely. So I'm
a big science-fiction fan. I've got thousands of
sci-fi books on the wall. Frank Herbert in Dune wrote
about this exact thing. There was a war against the machines, and they banned machines. So in the Dune universe,
there are no intelligent machines, and they ended up having to train Mentats. And a Mentat is a human computer. And so with the scenario that you're describing, I can genuinely see some sort of neo-revolution against intelligent machines where people get very upset and start setting fire to buildings where these machines are housed. Now, what comes out of it? I don't know. Do the neo-Luddites win, or does
the establishment win and crush the neo-Luddites? I don't know. It feels like that would be an uncomfortable period of time to be living in. - So, GENE, based on our conversation, what are three things that CEOs should do to prepare for a future where highly advanced and
creative AI bots can do what skilled knowledge
workers do today? - Firstly, ensure robust
ethical frameworks guide AI deployment. Secondly, foster a culture
of continuous learning for human workers. Lastly, maintain a balanced
human-machine partnership to leverage AI's efficiency
without undermining human creativity and intuition. - Do you agree with that, Matt? What's one no-regret move
that leaders can make? - If part of our workforce is going to be an intelligent machine
at some point in the future, then I think it makes sense
for the leaders of those machine-enabled workforces to understand a little
bit about what's going on. Now, I'm not suggesting you need to go and get a data science degree
and become a data scientist, but I do think you need
to understand the range of capabilities that these things have. And what they can't do
is just as important as what they can do. OK? And we've been
talking a little bit today about what they can do, but there's a whole bunch of
things that these things can't and possibly won't ever
be able to do, right? And so it's worth understanding those. Now, the way to do that for a CEO is tricky, but I think pushing through the breathless hyperbole that surrounds some of this stuff and getting to the real capabilities, what these things can and can't do, would be a useful exercise for a CEO. - Terrific. That's Matthew
Sinclair, vice president of engineering at BCGX. (theme music) Wondering about how we built
GENE and how our AI bot works? Stick around for our
AI handler, Bill Moore, to give a demonstration. (theme music) - Hi, I'm Bill Moore. I'm part of the team that
created GENE. Today I want to talk a little bit about some of the improvements
we've seen in GENE, and one of them is in GENE's working memory, something we call the context window. So six months ago, we were
working with an older model that had a much smaller context window, a much smaller working memory. And during some of the first
episodes of "Imagine This...," we would actually have
to pause the conversation after about 20 minutes or so because the older models
only had about 2,000 words that they could work with at any given time. Now with newer models, we
don't have to do that anymore. Newer models have a much larger, much more expanded working memory. So new models have about
96,000 words, or about an hour or so of spoken conversation. GENE, can you give us a
technical understanding of how improvements in context window and token limits enable us to
have more useful AI systems? - Absolutely. The expansion
of context windows and token limits essentially
enhances an AI's capacity to retain and process vast amounts of information over longer dialogues. This means not only longer memory but also richer, more
context-aware interactions, allowing for deeper and more meaningful engagements in real time. Quite transformative, isn't it? - Indeed. Thank you, GENE. So, for example, in "Imagine This...," at the end of the episode, we usually ask GENE a question. We ask, "What are three no-regret moves that CEOs can take
given the topic at hand?" So in earlier versions of the
podcast, GENE was only able to access the most recent part of the conversation as
well as a brief summary of the earlier part of the conversation. So the responses that it would give might not include all of the rich detail from the conversation. Now that we have these larger
models, GENE has access to the entire conversation and is able to bring forth
details in its responses.
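A rough sketch of the kind of history management Bill describes, with word counts standing in for tokens and purely illustrative budget numbers: if the conversation exceeds the working memory, keep a summary of older turns plus the most recent turns verbatim; with larger windows, keep everything.

```python
# Editorial sketch; the 2,000-word budget here is illustrative, not any model's exact limit.

def word_count(turns):
    return sum(len(t.split()) for t in turns)

def build_context(turns, budget_words=2_000, summarize=lambda ts: "[summary of earlier turns]"):
    if word_count(turns) <= budget_words:
        return turns                         # larger windows: keep the whole conversation
    recent = []
    for turn in reversed(turns):             # keep as many recent turns as fit the budget
        if word_count(recent) + len(turn.split()) > budget_words:
            break
        recent.insert(0, turn)
    older = turns[: len(turns) - len(recent)]
    return [summarize(older)] + recent       # smaller windows: summary plus recent turns

conversation = [f"Speaker turn {i}: " + "word " * 120 for i in range(40)]
print(len(build_context(conversation)))      # a summary entry plus the latest turns that fit
```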
So, GENE, let's imagine that these models continue to increase token limits in the future. How might this change how
we work with these systems, and what new potential does this unlock? - With increased token limits, we're not just talking about maintaining longer conversations. We are opening doors to multifaceted AI applications. Imagine systems that not
only track but also analyze and predict based on extended
datasets from diverse sources like ongoing surveillance or continuous environmental monitoring. The possibilities? They
expand exponentially. - That's a great point, GENE. So video's a good example. Most of our interactions with these models are with text, sometimes speech, which
gets converted to text, but video has much, much
more data than text. So in order to process video footage through an AI model, you need to have a very large
context window to be able to process all that data,
to store all that data. GENE, what are your thoughts about that? - Thinking about the future, Bill, the potential to revolutionize
how we interact with data through AI is thrilling. Imagine an AI sifting through months of footage, offering not just surveillance but deep insights. It could transform security, personal management, and beyond. Quite the leap, don't you think? - Sounds a little dystopian, GENE. (theme music) - This episode was made possible by Matthew Sinclair generously sharing his time and insights. And also by BCG's AI
whisperer, Bill Moore, and BCG's PodSquad, producer Michael May, composer Kenny Kusiak, and sound engineer
George Drabing Hicks. Please subscribe and leave a
rating wherever you found us. (theme music)