[MUSIC PLAYING] ADI IGNATIUS: Hi, and welcome
to Harvard Business Review's The New World of Work. I'm Adi Ignatius,
Editor In Chief of HBR. And each week on this show,
I interview a CEO, a thought leader, or somebody
else interesting who can inspire us and
educate us on the changing dynamics of the workplace. Whether you're navigating
the complexities of a large corporation or the
challenges of a-- excuse me-- of a small startup,
whether you're based in the US or
anywhere else in the world, you face your own
unique set of issues. So the aim of this podcast is
to inspire thought and provide insights as you seek to
bolster your business and pave your own path
toward career success. So on today's episode, we have
a great guest, Karim Lakhani of Harvard Business School. I'm going to come back with
a proper introduction in just a moment. But first, let's hear
from our good friends at KPMG, who are sponsoring
this season of The New World of Work. ANNOUNCER: At KPMG,
it's our people who make the difference for
our clients, talented teams leveraging the right
technology to uncover insights that illuminate opportunity. Ready to make the
difference together? ADI IGNATIUS: All right,
just a couple more things before we start. If you're an HBR subscriber
and you're watching this, you can go to hbr.org/newsletters
to sign up for The New World of Work newsletter, where I try
to offer an inside look at each of these interviews and talk
about some of the ideas that come out of them. And if you like content
like this, please subscribe to our
magazine and website. The address is
hbr.org/subscriptions. And if you like hearing
smart people talk about some of these same issues, be sure to
check out our flagship podcast, IdeaCast, available wherever
you get your podcasts. And lastly, remember, you
can watch previous episodes of this show on YouTube or right
here on LinkedIn and Facebook. So let's get on with it. My guest this week, as I
said, is Karim Lakhani, a professor at Harvard
Business School who specializes in workplace
technology and particularly AI. He's done pioneering
work in identifying how digital transformation has
remade the world of business. And he's the co-author of the
2020 book, which HBR published, Competing in the Age of AI. So Karim, welcome to the show. KARIM R. LAKHANI: So glad to
be here with you today, Adi. Thank you for the invitation. ADI IGNATIUS: So there's
a lot to talk about, and I definitely want to talk
a lot about generative AI. But maybe we'll--
once we get to it, I don't think we'll get off. KARIM R. LAKHANI: No, exactly. ADI IGNATIUS: So
let's start elsewhere. So you co-wrote a piece for us a few years ago, and it's reflected in
your book, where you say, machine learning has
basically changed the very rules of business. So that's a big statement. Tell us a little bit about
what you mean by that. KARIM R. LAKHANI: Yeah. So look, it's been a
real pleasure working with HBR and your
editors for the last 15 years of my academic career. And the book really was a partnership with Marco Iansiti and with Amy Bernstein, one of the editors at HBR, as we created it. And what Marco and I noticed
in about a decade's worth of research and spending time
with companies, both writing cases, as advisors,
as consultants, and so forth, was that the
nature of the corporation, which really was established--
the modern American corporation, which
became the blueprint for the modern international
corporation established in the 1920s and '30s, was
changing foundationally because of technologies like
AI, like machine learning. And what we observed was
that the entire business architecture in many of these AI-first companies at the time-- the business model, how you create value, how you capture value; the operating model, how you deliver value; how you achieve scope, the number of products you have; scale, the number of customers you serve; and learning-- these fundamental parts of a business architecture were being rewired because of
machine learning and AI and digital technologies. And so if you just
sort of reflect for a bit on your
experience using Google, for example, much of your Google
experience is fully automated, from the ads you
see to the search you do to, if
you're using Gmail, how you interact with them. And so it's not people
that do those activities. It's the algorithms
that make that happen. Similarly, your
experiences with all the large e-commerce platforms,
like Amazon or Alibaba, let's say, in China,
if you imagine what happens at Netflix,
for example, again, these are all examples we've been using. But these companies work in
a fundamentally different way than a company like General Electric, where I grew up right out of college in my first job, was set up. In these companies, with the machines and the algorithms at the center, the work is automated. The humans are actually
designing the algorithms and testing them
and checking them, making sure they're
working within bounds, but the actual
transactions and activities are being mediated
through the machines. ADI IGNATIUS: So
I'm guessing that-- first of all, if
you're watching this, if you have your own
questions for Karim about digital transformation,
about generative AI, which we'll be talking
about in a moment, please put them into
the chat and I'll try to get to audience
questions later. So that was helpful,
Karim, but I would guess some of our
viewers are listening to you and thinking, yeah,
that's what we've done. And others are thinking, I'm
not sure we're on that journey or we're far enough
along on that journey. So my question is, what's
your advice to people who are listening to this
who are like, yeah, I get it. I get that there's value here. I'm not quite sure if my
company is right for this or I'm not quite sure
what the next steps are. KARIM R. LAKHANI:
Yeah, absolutely. Look, first I would say that I think most companies will not have a choice but to adopt AI and to adopt digital in their core functions. In many ways, your
personal lives are mediated through your transactions on your smartphone, through these devices, and through how you interact with consumer technology products and so on and so forth-- you're
already living in an AI age. And the thing I would say, Adi,
which is so interesting to me, is-- and I learned this
from some conversations I had with folks in India. They said, we have
some folks who get mad when your Uber car
or your Ola car or your DiDi car or your GrabCar
doesn't show up in 3 minutes. You want this magical
taxi experience. You go on your app, you press
the thing, boom, it shows up. And if it's going to be 5 or 7
minutes, you kind of get mad. And I'm reflecting on when I
first moved to Boston in 1997, and it would take me a week
to book a taxi in Boston. And so now we get mad. And similarly, if
there's a transaction dispute on Amazon or on Uber,
automatically solved, done. But the same people,
the same executives, in their own companies
are completely satisfied if a customer
service interaction can take two weeks, if
onboarding a new vendor takes six months. And so we're living in
this disconnected world where most people,
most consumers are living in this AI-first
world, in their experiences, with many of these platforms. And then they
encounter our companies and our organizations, and
they're like, what is this? And so my sense is that
this is inevitable. This transition is
really inevitable. And for the folks
that are behind, the good news is that the
cost to make the transition keeps getting lower and lower. The playbook for this
is now well known. And finally, the
real challenge is not a technological challenge. I would say, that's
a 30% challenge. The real challenge is 70%, which
is an organizational challenge. My great colleague,
Tsedal Neeley, talks about the digital mindset. Every executive,
every worker needs to have a digital
mindset, which means understanding how these
technologies work, but also understanding
the deployment of them and then the change
processes you need to do in terms
of your organization to make use of them. ADI IGNATIUS: It's
really interesting because we think about this. We're a relatively
small publisher compared to some of
the giants out there. But when people come
to our site and they're searching for articles
by Karim Lakhani, they're used to a Google
sort of search experience. KARIM R. LAKHANI: Exactly. ADI IGNATIUS: When they want
to buy a product from us, they're used to an Amazon. And anything short of that,
it's like your Uber example. There's a frustration
and expectation. So we have to find ways to lift
our game without the resources, whether it's through
partnership or other things, because it's table stakes. People just expect
the best experience in every experience they have. KARIM R. LAKHANI: 100%. If I look at my
teenage daughter, she has no patience for
[INAUDIBLE] companies. And she just gets mad and
just is, like, what is this? And so absolutely. ADI IGNATIUS: So anyway, the next big wave-- again, we're going to talk about it, I keep teasing it-- is generative AI. But that won't be the last wave. And quantum will hit us at some point, and things we can't even anticipate will hit us. So how do you prepare for that? How do you create a
kind of, I don't know, culture or mindset
or organization that knows that there will be unexpected waves of technology? They'll come, and we'll have to figure out if they're relevant to us or not. And if they are, we need to adapt quickly. Is there a general way to think about that [INAUDIBLE]? KARIM R. LAKHANI: Yeah, yeah. Yeah. Thank you for that question. And in fact, I've been
pondering this quite a bit. I think there are two
imperatives for most executives, for most
managers, for most leaders. One is a learning imperative. This is, again, Tsedal's work
on digital mindset, my work. There's lots of
learning you need to do and the learning has
to be continuous. And the idea is that not that we
want you to become AI engineers or data scientists or get a PhD
from Stanford or Harvard or MIT or Tsinghua University or Oxford
University, or from the IITs in machine learning. But as executives, this
is now table stakes. And the way I think about this is at the MBA program-- people come to the Harvard MBA program. And we have-- first year
is a required curriculum. There's 10 courses, and
one of them is accounting. Now, I can tell you, accounting
is a very important profession. But most MBAs that join HBS
don't want to be accountants. But they need to learn
accounting because that's the language of business. That's the ways
in which you think about how value
is kept track of, how expenses are being tracked
of, and so on and so forth, super important. And you don't take
the accounting course to become an
accountant, but you need to understand accounting so you
can be a good business person. Same thing now with digital
technologies and machine learning. You need to understand
the machine learning stuff and the AI stuff,
not because you're going to become an engineer
or an AI scientist, but because that is now
going to be critical table stakes for you to understand
how business works. So there's a learning
imperative and I don't think we can take away
the learning imperative anymore. Now, look, I'm
self-centered about this. I'm self-interested. I'm in the learning profession. That's what I do, so
I want to caveat that. But I want to insist on this: I think the learning
journey does not stop. And you have to invest in
your own personal learning, and I think companies
need to invest in the learning for their
own employees as well. It's a two-phased conversation. Companies have to embrace
this and so do individuals. But the second
bit, I think, Adi, is equally important, which I
think is completely underrated, which is change and
change management. And change becomes a skill
for managers and for leaders and for executives. How you change, how
you continue to change, how you build the
DNA for changing becomes very important. At a time, by chance, I
was in Asia a month ago and had a chance to spend
some time with Mickey Mikitani at Rakuten. And one of the
things that's amazing is what he has thought
about as change as a core competency for all
workers and for all employees. Right now, most change programs
are viewed with skepticism, as flavor of the month, blah,
blah, blah, and people resist. People resist. I think the best companies will
be the ones that can understand how change becomes a skill. And if you think about change
as a skill, what does that mean? Skills require acquisition. You've got to
invest in learning. What does it mean to change? It requires practice. You've got to keep
changing as well. And it requires adjustment. Once you've learned
how to change, how do you change that-- how do
you project that to everybody else? So those elements,
I think, will become a key part of the
ways in which leaders need to adapt to this world. ADI IGNATIUS: Yeah. I think that's great. It could be generational and
it should not be generational. I always think that
when you come of age, when you join the
workforce, there's a certain suite of
technology that you just grew up with, and
you're comfortable with it and you are part of
figuring out how to use it. And then with subsequent waves, a
lot of us hit a point where, yeah, that one seems stupid. And for my father, who's still
alive at 102, that was email. So you can't-- you've
got to keep trying. You got to keep experimenting. You have to keep current. KARIM R. LAKHANI: Adi,
such a good point. And I have to tell you, around COVID I had this experience with my mother, who lives in Toronto, and my in-laws, who also live in Toronto. In November 2020, when things eased up a bit with the vaccines and so on and so forth, we were able to bring them over for a reunion. And [INAUDIBLE] the dates, but it was around that time, one Thanksgiving during COVID. And if you recall, traveling,
even just crossing the border from Canada to the
US was very difficult because you had QR
codes and apps galore. Canada needed all
this stuff to exit, the US needed this stuff to enter, and the same thing on the way back as well. And I looked at, sadly,
how helpless my in-laws and my parents were with
these technologies. They were just lost. And my wife and my daughter and
I had to spend a ton of time with them, holding their hands
to go through these things. Now, of course,
the UX was terrible and all that kind of stuff. We figured out how to do it
ourselves, but they were stuck. They couldn't adjust. And then as I reflected
on that experience, I said, oh, this is what's
happening to most executives. This is what's happening
to most companies. It's like they're the senior
citizens, the elderly who have resisted the
technology, have not really embraced it, and now have no
choice but to deal with it and are frozen and
need a ton of help. And that's the thing that we have to get over as we think about this. ADI IGNATIUS:
Yeah, that's great. I think we can
throw out our QCADs. But otherwise-- KARIM R. LAKHANI: Yes. I remember, and I
have a QCAD still. Old jokes for those
in publishing. ADI IGNATIUS: Yeah, I know. That's definitely--
this dates us. So let's talk about
generative AI. So for people, if that sounds like jargon-- oh, by the way, you referred earlier to Tsedal, and I just want to fill in that that's Tsedal Neeley, who is another-- KARIM R. LAKHANI: Professor
Tsedal Neeley, Senior Associate Dean of the Harvard Business
School, my good friend, and author of another great
book called The Digital Mindset. ADI IGNATIUS: Absolutely. So all right, generative AI, for
people who aren't conversant, and probably most of you
are now, but this is large language models, predictive text-- ChatGPT, Bard, Bing, all these things. And I like to say that
there were three waves. The first was we played
with this technology when it came out in whatever
it was, November of last year. And we tried to break it. ChatGPT, are you
in love with me? And now we're trying to
figure out how to use it, and the use cases are happening. So first question for you,
where are you in the hype cycle? Because technologies
come and go. And there was a
sense with this one that everyone--
everyone-- basically said, OK, this is different. This is important. This is transformational
in ways that few technological
innovations are. But where are you on that? KARIM R. LAKHANI: I'm
on the, holy crap, this is transformational. Yeah. So look, the way
I think about this is it's actually worth pausing and looking at history a bit. And since I studied
technology in business, something transformational
happened 30 years ago, approximately, as well,
which was the browser. The browser got invented. And if you think
about the browser, there were 30 years of the internet before it. Then the browser gets invented,
and people were like, oh, my goodness, look at this. And I remember-- I can
still see as clear as day when I first
encountered the browser. And I was working
at General Electric. I was at a conference
for radiology, and one of my
clients, a radiologist at Saint Paul's
Hospital in Vancouver, showed me the browser
and the thing he showed me was the Oxford coffee pot. I'm like, interesting. All of a sudden, the
Oxford coffee pot has global distribution. Anybody that has a web browser
and an internet connection can use it. So there's 30 years of
internet in the basement, in the bowels of companies. We didn't understand it. We saw it was coming,
it was coming. There was USENET, there was
Gopher, there was Telnet, there was FTP, all
these kinds of things. The browser showed what
the world would look like. And the initial applications
were cute applications. And people were,
like, this is nothing. This is whatever. But fundamentally,
from an economics point of view, what the
browser did is that it lowered the cost of information
transmission dramatically. And then the last
30 years, we've been living through the
buildout of the internet and waves and waves of
the internet changing more and more and more
industries over and over again. We've all been living through that. Just imagine this right now. We are broadcasting
live to, I don't know, thousands of people,
and more people will be looking at this
broadcast at relatively zero marginal cost to us to do this. It seems unbelievable
compared to 1993, when you needed a massive
TV studio, massive broadcast studio, satellite
dishes to be able to do what we're doing right now. And so the cost of information
transmission went to zero. And then new companies formed,
Google, Amazon, Facebook, you name it. E-commerce got invented
and so on and so forth. And so that is the world
that we are coming out of. The internet era is what we're coming out of. And the same thing has
happened with generative AI. There's been 20 years
of AI being deployed at scale inside of many tech
companies, the ones we use in our examples in our book. And that was in the basement, so
Netflix movie recommendations, your Google search results,
your Amazon recommendations, your Spotify music
results, your Waze directions-- all of that was being powered by AI tools, even your spam filters. Remember how bad spam
used to be for a while, and then overnight it went away? Because people deployed machine
learning systems to those things, early machine learning
systems to those things. Now-- ADI IGNATIUS: Go ahead. KARIM R. LAKHANI: Now, how do
we think about generative AI? So my view is, generative
AI is a drop in the cost of cognition, in how we think. So if the internet was about the cost of information dropping to zero, my sense is that
the cost of cognition, how we think, who we think
with, is dropping to zero or lowering
significantly with this. And that has significant
ramifications for this. And I have to tell you, I
had to do a major pivot even in my research side on
what to do with this. I was doing a lot of
stuff around AI adoption and so on and so forth,
a lot of research, a lot of nerdy research that
only, like, three people ever read. But we have gone-- my whole Institute and my
labs have gone big time into figuring out what this
means for knowledge workers, for managers with generative AI. ADI IGNATIUS: Yeah,
well, let's talk about that because I think,
again, for our viewers, there are probably some of you who are well conversant with it or using it, whether it's for fun
or to try to figure out work applications. There are products
available that are using this that
rolled out pretty quickly. How to think-- is there a way
to think about this generically? What is, for a generic company,
if there is such a thing, how should they think
about using generative AI? I mean, we've seen the amazing things that it can do. We've seen the hallucinations that it can create, confidently providing errors. So how does a
business think about, is there a generative
AI application that is significant for my company
and how do I figure that out? KARIM R. LAKHANI: Yeah. So first of all, I think we're
at the super early stages of this hype, of this cycle. And if you think about
it, the first web browser was Mosaic, and then Netscape
and then Explorer and Mozilla and so on and so
forth came about. So I think we
should just think-- and then all the
applications on top. So I think we are
at the early stages. The rate of innovation and
the rate of improvement is increasing rapidly,
and it keeps increasing. And the rate of
application development is also increasing rapidly. So the first thing
I would say is that the places where you can apply it are, in many ways-- well, where do you apply thinking? That's where you could apply this, with all the caveats about
hallucination and bias and so forth. So the first thing
I would say is that I think, if you
step back and say, what should leaders do,
what should managers do, what should executives
do around this thing, one is to start
thinking about and start practicing in their own sandboxes
what the use cases may be. We're seeing tremendous
use cases, for example, just in sort of content
generation, like, our work. Us as knowledge producers,
that's changing rapidly. Now I use ChatGPT as an
amazing research associate, thought partner, copy
editor, idea generator. I can tell you one thing. I was in Asia. My wife was with
me on my trip and I wanted to actually have
some time for a break as well. So I went to ChatGPT
and I said, this is me. This is my wife. Here's the kind of
vacations we like. Can you please give us
ideas of a place that would be about three hours from
Singapore that we could go to? And I prefer beach,
blah, blah, blah, blah. Boom. In microseconds, I got
many recommendations. And then through a
conversational setup, I found the place that
we wanted to go to. It was a hidden place in the
South China Sea off Indonesia, and it was incredible. It was incredible. And that I would
not have discovered, even with my travel agent. So just even in
that activity, just imagine what we can now
start to do with this. And so what I would say is
that the thing that managers and leaders need to do is,
step 1, start using it. I think the bans on
ChatGPT and on these things are misguided in many companies. It's already on my phone. It's already people--
there's 100 million users. It's already there. So I think executives
and IT departments and legal departments
are fooling themselves. They don't think their
workers are already not using these tools. And instead of pushing
against it and saying, no, you need to embrace
it and run boot camps, run use case
analysis, figure out where it's hallucinating
in your use cases and figure out
where it's actually going to be very helpful. And what I say to people, for
managers, leaders, and workers, is AI is not going to replace
humans, but humans with AI are going to replace
humans without AI. And this is definitely the
case for generative AI. And so the first step is
begin, start experimentation, create the sandboxes,
run internal boot camps. And don't just run boot
camps for technology workers. Run boot camps for everybody. Give them access to tools. Figure out what use cases
they develop, and then use that as a basis
to rank and stack them and put them into play. ADI IGNATIUS: Yeah,
I agree with that. We have to think about
that as a publisher. There are some
publishers who say, we will not take articles,
papers where generative AI has been involved. That, to me, similarly
doesn't make sense. It's like saying, don't use-- KARIM R. LAKHANI:
How will they know? How will they know? ADI IGNATIUS: How
will they know? And then it would be like
saying, don't use Google. It's a tool. What we're saying, though,
is that the responsibility more than ever is on the person
with the byline on this piece. That was true-- you didn't
want to just use Google search results or just use
Wikipedia results. You need to verify and
do a little bit more than that, now more than ever
because you may be relying on-- KARIM R. LAKHANI:
No, absolutely. And as scholars,
we publish, again, these nerdy papers that
very few people read, and we're in the same
crisis because, well, if I use an RA to
come up with ideas, do I have to acknowledge the RA? Is the RA the co-author? If I use a copy
editor, I typically don't acknowledge a copy
editor for my article, but they're super helpful. Should that be-- attribution
becomes interesting. There's so many important
questions at play, just as writers and producers. But my advice to everybody is to learn and adopt and practice and practice and see. And the best place to learn, Adi, is YouTube. YouTube has, oh my God, so many tutorials in so many domains. And very soon, you'll be down the rabbit hole. ADI IGNATIUS: Yeah. And I think there's
a trap that sometimes people feel like, if they don't
jump on the wave immediately, somehow it's too late. KARIM R. LAKHANI:
No, no, gosh, no. ADI IGNATIUS: And I think it's
really important-- it's early. KARIM R. LAKHANI: Oh my gosh. ADI IGNATIUS: Karim, if you're
right that this is truly transformative, it's early. If you feel like,
wow, everybody's moving faster than
I am, catch up, whether it's YouTube or
just doing some reading, and figure out how it applies. So let me go to some
questions from the audience because there are
a lot coming in. And so this is from [? Vena ?]
from somewhere in the US. And really, [? Vena ?] is saying that AI, machine learning--
this is somebody's code. It comes with biases and
assumptions built in. It's a topic we
think about a lot. But what's your view? How can the industry
ensure that there isn't a monopoly on how we
think and how we're biased and the assumptions
that we make? KARIM R. LAKHANI: Yeah. So look, I as an individual, I'm
part of Mozilla, mozilla.com. We made the Firefox browser
owned by an open source foundation, Mozilla Foundation. If you love Fire--
if you haven't used Firefox in a while,
go back and use it again. But we just set up
Mozilla.ai, and the idea is that we want to create open
source large language models as well and create the
tooling that enables many people around the world
to also have large language models suited for them. And our view is that we can
build tools to, A, detect bias and to fix bias and to
fix all the craziness that these large
language models can do. So I'm actively working and
trying to create and support organizations that do that. I would say that the first
thing that we need to do, though, is to step back and
say, the world is biased. We had bias before there was AI. AI is just amplifying it
and making it apparent. The world is biased. You look at the unbelievably
bad treatment African-Americans receive in our health care
system and the financial system and so on and so
forth in the US, or if you go to
some other country, there's always been
discrimination without AI. AI is helping to amplify it. And so the response-- the
ethical responsibility for us as leaders has to be
that we have to understand what is biased today in our systems. And then let's translate to AI. Well, certainly, how
representative is your data? How representative
is your training? How representative
is your labeling? Those are essential,
essential questions that need to be
now part of, again, the executive conversation. That's why the learning
mandate doesn't stop, because you have to understand
how these machine learning systems are built for you to
understand what the biases are and how you might get sued or
be put in jail, for God's sake, if you don't follow
through on these things. And so this is super important. But I want us to
also be aware that we need to think counterfactually. There's always this
bias in the world. And now let's imagine
a world with AI. And is it going to take the
biased world and amplify it? Or can we correct for it? Can we recognize it? So that's going to
be very important. ADI IGNATIUS: Yeah. And it depends,
maybe, on ultimately if you're a techno optimist
or a techno pessimist. KARIM R. LAKHANI: Yes. I tend to be on
the optimist side. ADI IGNATIUS: Yeah. So there are a million questions
coming in on this topic, and we're not going to be able
to get to them all or even close to all. But this is sort of a different
question that's come in, but I think it's
kind of interesting. This is from Janelle
in Washington, DC. So when we're talking
about dealing with waves of technology and
changing and adapting, so the question
from Janelle is-- we've been talking about
how your employees can learn and adapt. But how do you help
the customer learn? Because sometimes
that's the learning arc that you need to accelerate
for your business to go to where it wants [INAUDIBLE]. Do you have thoughts
on how to [INAUDIBLE]? KARIM R. LAKHANI: That
is a great question. I think customers tend
to be ahead. I think what happens,
what's interesting-- I was in sales and marketing for
four years at General Electric. And what would happen, what
would be so interesting is, because my customers
knew what I had and what I didn't have
and what we were good at and what we were not good at, they wouldn't talk to me about things
that we were not good at or we weren't exploring. So I never got that message
until much, much later. I discovered, like, oh,
you're interested in this? Oh, we've got some
nascent product, whatever. But they said, no, we knew GE
was not going to be good here, so we didn't talk to you. So I think you'd be surprised,
especially with today's customers because, again, as I mentioned, all of them, including yourself, Janelle, are living in a digital age with their smartphones and their capabilities
and so forth. So you'd be surprised
at how fast they adapt. And in many situations,
with other companies, they're already further
down the pike than with you. And so we get the wrong
signals from our sales teams, from our marketing teams,
even from our focus groups, because we actually don't
observe customers in situ and see what's going on. And again, you think about
the median user of Facebook, I think they're, like, 50
or something right now. So I think adoption is no
longer as big a deal as we think it is. ADI IGNATIUS: My generation
ruined Myspace and now we're going to ruin [INAUDIBLE]. KARIM R. LAKHANI: I know. Look at that. Now we're aging
out Facebook, too. ADI IGNATIUS: We'll
get TikTok next. So last question. And again, I wish
we had more time. But maybe the new
wave of how people are thinking about generative AI is-- at first the explanation was, it's really like that technology
that predicts the next word-- KARIM R. LAKHANI: Yes, totally. ADI IGNATIUS: --sort
of on steroids. But then it sort of evolved
into this-- it almost feels like a sentience-- the machine is developing this kind of emotional intelligence, or we may be
on the path to that. Is that-- what's your view? Is that a pure illusion or are
we heading toward something that will at least feel
like an intelligence and an emotional
intelligence [INAUDIBLE]? KARIM R. LAKHANI:
What a good question. The first thing I always say is, be kind to your robots. So always say
please and thank you when you're using
ChatGPT or Bard. I do that as a principle. I tell everybody, be
kind to your robots, because if the sentience
moment shows up, all the data will be there. All the history of our
records with these systems will be there. And you don't want them to get
pissed off because, hey, Karim was a bad actor for us. So always be-- like,
I'm an ardent atheist, but I still say inshallah. ADI IGNATIUS: Just in case. KARIM R. LAKHANI: Who knows? Just in case. Hedge your bets. Hedge your bets. So like I say, inshallah,
we should always say please and thank
you to your robots. That's the first thing. The second thing is, right
now the human-like responses are a statistical illusion. They absolutely are. They've just been
well trained by humans to respond like humans. And they've used all of our
texts and all of our videos to be human like in many ways. But in the end, it's a
statistical or computational illusion. But I can tell you, I got
a little bit of a wake-up call on this. I felt like this stuff-- the strong AI stuff that has been talked about-- all the stuff we have now is what we call weak AI. The strong AI stuff is decades
away, multi-decades away. But I had conversations with leaders at Harvard at the Kempner Institute, which is the new Institute for Natural Intelligence and Artificial Intelligence-- so marrying biology with AI and AI with biology-- with two amazing scholars, amazing world leaders in both neuroscience and machine learning. And I said-- this was pre-ChatGPT as well, by the way-- hey, guys, what do you think? How far away is this
strong AI world? They said, 20 years. And I was like, whoa. I'm not ready for that. But the world
experts, people that know better than I do on this
thing, are saying 20 years. It might even be faster. The thing that's
interesting to me, Adi, is we may not even
know when it has sentience, because we assume human-like forms for alien intelligence, for intelligence. But if you read a lot of
science fiction, like I do, maybe alien life is going to
be carbon-based, but maybe not. Maybe they'll have a
different metabolism, maybe different neural systems. And you need to
be ready for that. So we may not even know
it, that's the thing. And I think-- there was a paper
presented at Harvard recently called "First Contact." Some machine learning expert
at Microsoft wrote this paper. And I was like, wow,
that's pretty radical. ADI IGNATIUS: Yeah. This is fabulous. We're going to have to
get you back on the show because there's a lot
more to talk about. KARIM R. LAKHANI:
Yeah [INAUDIBLE].. ADI IGNATIUS: We didn't even get
into the Congressional hearings on aliens. KARIM R. LAKHANI: Oh, gosh yes. ADI IGNATIUS: We're just a
half step away from that. So this has been Karim Lakhani,
Harvard Business School Professor. Karim, thank you very much
for being on the show. KARIM R. LAKHANI: Great
to be here with you, Adi. ADI IGNATIUS: OK. See you soon. All right. So I want to thank you
all for joining us today. Just a reminder, you can
watch previous episodes of the show on YouTube or right
here on LinkedIn and Facebook. Now, be sure to join us next
week on Wednesday, August 9 at 12:00 noon Eastern
time, when my guest will be Andrew Liveris. He's the
former CEO of Dow Chemical, who in that job got
credit for pushing an ambitious sustainability
agenda at the company. He's also the author
of a new book, Leading Through
Disruption: A Changemaker's Guide to 21st
Century Leadership, and we'll be talking about
how inspired leaders can best adapt to the challenges that
keep getting thrown their way. Again, if you're an HBR
subscriber watching this, you can head to
hbr.org/newsletters to sign up for The New World
of Work newsletter, where I offer an inside look at
each of these interviews each week and talk about some of the
ideas that came out of them. And again, if you like
content like this, why not subscribe to our magazine and website? The address is
hbr.org/subscriptions. And finally, we want
to thank our friends at KPMG, who are our sponsors
for this season of The New World of Work. And I want to thank all of
you for tuning in today. I'm Adi Ignatius, and this
is The New World of Work. [MUSIC PLAYING]